this should do it downloading now
safe files download ~/beryllium-1-amd64.hybrid.iso a62e87882724b7e8b79cbe861d3770ff06b96444349fd5e50a502a0e296acb9e
got it in 9 minutes
ubuntu@safe-hamilton:~$ time safe files download ~/beryllium-1-amd64.hybrid.iso a62e87882724b7e8b79cbe861d3770ff06b96444349fd5e50a502a0e296acb9e
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Current build's git commit hash: 0be2ef056215680b02ca8ec8be4388728bd0ce7c
Connected to the Network Downloading file "./beryllium-1-amd64.hybrid.iso" with address a62e87882724b7e8b79cbe861d3770ff06b96444349fd5e50a502a0e296acb9e
Successfully got file ./beryllium-1-amd64.hybrid.iso!
Writing 1584496640 bytes to "/home/ubuntu/.safe/client/./beryllium-1-amd64.hybrid.iso"
real 9m18.344s
user 4m49.110s
sys 2m32.253s
on an ssh box running Ubuntu Server on Virgin fibre broadband
ubuntu@safe-hamilton:~$ speedtest-cli
Retrieving speedtest.net configuration...
Testing from Virgin Media (xx.xx.xx.xx.xx)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by PebbleHost (Coventry) [132.65 km]: 34.21 ms
Testing download speed................................................................................
Download: 326.99 Mbit/s
Testing upload speed......................................................................................................
Upload: 53.06 Mbit/s
The only thing I may be misunderstanding: looking at iftop, which I started at the same time as the download began, it looks like it took 12GB of data transferred (tx and rx in total) to download a 1.6GB file?
@qi_ma maybe worth a look or I could be misunderstanding something
12m30s to download to my Hetzner node
safe@ubuntu-2gb-nbg1-1:~$ time safe files download beryllium-1-amd64.hybrid.iso a62e87882724b7e8b79cbe861d3770ff06b96444349fd5e50a502a0e296acb9e
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Current build's git commit hash: 0be2ef056215680b02ca8ec8be4388728bd0ce7c
Connecting to The SAFE Network... The client still does not know enough network nodes.
Connected to the Network Downloading file "beryllium-1-amd64.hybrid.iso" with address a62e87882724b7e8b79cbe861d3770ff06b96444349fd5e50a502a0e296acb9e
Killed
real 12m30.710s
user 2m56.910s
sys 2m14.127s
Here we go …
Dunno what's up
System didn't like my last post
New release
Let's get this latest release downloaded and see if I can connect to ReplicationNet
I can create and edit registers, can't upload yet
I see there have been 2 more commits and version nos are jumping, so I'll try again.
Eager to break it?
4.0K /tmp/safenodedata/record_store
8.0K /tmp/safenodedata
38M /tmp/safenodedata2/record_store
38M /tmp/safenodedata2
2.0M /tmp/safenodedata3/record_store
2.0M /tmp/safenodedata3
4.0K /tmp/safenodedata4/record_store
8.0K /tmp/safenodedata4
4.0K /tmp/safenodedata5/record_store
8.0K /tmp/safenodedata5
Only 2 nodes taking chunks. I'd be curious to see what happens if Maidsafe took, say, 10% of their nodes down…
Yes I have a few nodes myself that are empty so far
20 nodes running on Hetzner's smallest
safe@ubuntu-2gb-nbg1-1:~/.safe/node$ du -h
134M ./safenode_4/record_store
149M ./safenode_4
91M ./safenode_1/record_store
106M ./safenode_1
4.0K ./safenode_2/record_store
13M ./safenode_2
45M ./safenode_19/record_store
55M ./safenode_19
9.9M ./safenode_5/record_store
25M ./safenode_5
4.0M ./safenode_17/record_store
15M ./safenode_17
65M ./safenode_14/record_store
74M ./safenode_14
7.2M ./safenode_6/record_store
22M ./safenode_6
122M ./safenode_8/record_store
137M ./safenode_8
508K ./safenode_3/record_store
14M ./safenode_3
61M ./safenode_11/record_store
70M ./safenode_11
119M ./safenode_13/record_store
128M ./safenode_13
4.0K ./safenode_9/record_store
13M ./safenode_9
5.0M ./safenode_16/record_store
15M ./safenode_16
65M ./safenode_12/record_store
75M ./safenode_12
5.2M ./safenode_18/record_store
16M ./safenode_18
4.0K ./safenode_10/record_store
14M ./safenode_10
134M ./safenode_20/record_store
144M ./safenode_20
112M ./safenode_7/record_store
129M ./safenode_7
5.7M ./safenode_15/record_store
16M ./safenode_15
1.2G .
The release process is still a work in progress at the moment.
They will have binaries attached when it's fully working.
Quickly charted out the JSON metrics for the safenode pid running inside an Alpine LXC above.
CPU is holding steady on the safenode pid.
Memory on the safenode pid seems to be ever so slowly increasing over the past 36+ hrs when zoomed in.
I'd be curious, if this ran for a couple more days, where it would end up. Would it flatline at ± some %?
Out of curiosity, I charted out memory usage vs total bytes written for the safenode pid, hmm.
Note: overall, the CPU and memory usage, as others have mentioned, is darn low, which is great.
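For anyone wanting to reproduce that kind of chart, below is a minimal sketch of one way a pid's memory could be sampled on Linux. This is an assumption about tooling, not how the poster actually collected their data: it reads the standard VmRSS field from /proc/<pid>/status rather than any safenode metrics endpoint, and the once-a-minute cadence is arbitrary.

```rust
// Hedged sketch: sample the resident memory (VmRSS) of a given pid on Linux
// by reading /proc/<pid>/status. Illustrative only; not a safenode API.
use std::{env, fs, thread, time::{Duration, SystemTime, UNIX_EPOCH}};

/// VmRSS of the process in KiB, if it can be read from /proc.
fn rss_kib(pid: u32) -> Option<u64> {
    fs::read_to_string(format!("/proc/{pid}/status"))
        .ok()?
        .lines()
        .find(|l| l.starts_with("VmRSS:"))
        .and_then(|l| l.split_whitespace().nth(1))
        .and_then(|v| v.parse().ok())
}

fn main() {
    // Usage: sampler <pid>  -- prints one "epoch_secs rss_kib" line per minute,
    // which can then be charted with whatever plotting tool is handy.
    let pid: u32 = env::args().nth(1).and_then(|a| a.parse().ok()).expect("pid");
    loop {
        if let Some(kib) = rss_kib(pid) {
            let secs = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
            println!("{secs} {kib}");
        }
        thread::sleep(Duration::from_secs(60));
    }
}
```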
Thanks everyone for testing things out so far! We'll be leaving this up for a while more (over the weekend I guess, so more folk can have a go!).
One really interesting thing we've seen is that some nodes (amongst our DO nodes) do not appear to get chunks for a looong time. One went 16 hours without anything, then sprang to life and now has a more normal amount of chunks.
This might well be something other folk are seeing on here, too. I suspect this is to do with our network discovery mechanisms, but thus far I don't see anything obvious in the logs that's preventing it from joining, or sparking the chunk influx.
That is without chunks, right?
I've seen similar. Sometimes a node sits there for hours before chunks arrive, other times it happens pretty much instantly. As data should be reasonably randomly distributed, it seems there's a blockage somewhere.
That feels likely, intuitively. Almost as if only a few nodes are aware of the new node's existence.
Quick question: when a new node arrives, if it is in the closest 8(?) to a chunk based on XOR, should it immediately get a copy of the chunk, or do we wait for churn to happen?
That would be the churn that should trigger replication
So possibly there's nothing wrong for this testnet. All the nodes are currently on cloud instances and should be stable unless deliberately taken down, and if a node arrives close to a stable group there'll be nothing for it to do.
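For anyone following along, here is a minimal sketch of the closest-set idea being discussed. It assumes 256-bit XOR addresses and a closest set of 8, and is illustrative only, not the safenode implementation: the point is that a newly joined node which lands inside a chunk's closest-8 set would only receive that chunk once churn triggers replication.

```rust
// Minimal sketch (not the safenode code): which peers would be expected to
// hold a replica of a chunk, assuming a 256-bit XOR address space and a
// closest set of 8.
type Addr = [u8; 32]; // 256-bit address, an assumption for illustration

/// XOR distance between two addresses, compared lexicographically byte by byte.
fn xor_distance(a: &Addr, b: &Addr) -> Addr {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// Return the `k` peers closest to `chunk` in XOR space.
fn closest_k(chunk: &Addr, mut peers: Vec<Addr>, k: usize) -> Vec<Addr> {
    peers.sort_by_key(|p| xor_distance(chunk, p));
    peers.truncate(k);
    peers
}

fn main() {
    // Toy addresses; in reality these would be hashes of node ids / chunk content.
    let chunk = [0xAB; 32];
    let peers: Vec<Addr> = (0u8..20).map(|i| [i; 32]).collect();
    let holders = closest_k(&chunk, peers, 8);
    println!("expected replica holders: {}", holders.len());
    // A node that newly enters this closest-8 set would, per the discussion
    // above, only get the chunk pushed to it when churn triggers replication.
}
```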
It should be trivial to check by searching for the problematic node ID in the logs of all the MaidSafe nodes.
I think it makes sense to stop this network slowly.
And watch whether data flows to the chunkless nodes.
No, at the time the earlier charts were created, there was about 120MB of data in ./record_store/.
At this time, it's 163MB with 414 files in ./record_store/.
I wonder why Total Bytes Read is 4.29 kB then.
When a user uploads something during this test, they usually download it afterwards.
Which means these 120MB should have been read, at least partially.
However, it may be that some of the chunks are also held in RAM, so there is no need to read them from disk.
I don't know why Total Bytes Read is so low; however, it's not true that folks will download what they are uploading right away.
I for one have a script currently uploading 5120 x 10MB files from a completely different LXC hosted on another physical machine, and the upload still hasn't completed (it's been running for hours). I may or may not choose to download those files at some future time.
Since Total Bytes Read was near 0MB, that is why I charted Total Bytes Written vs Memory Usage, to see if there was any correlation with the ever so slight memory rise of safenode over many hours.
Yes.
However, for JPL's case there are other log lines at debug level, and the missing commit hash is for sure due to the old release.
Ah, yes, I received it. Thank you very much.
Unfortunately, the log at info level only proves it got connected to the network and keeps discovering/being discovered by other peers.
Can't tell whether there is any issue in the replication flow that may result in no in-flowing chunks.
Ideally, it would be appreciated if the node could be restarted with trace level logging turned on.
It's the ReplicationFactor (currently 8) that plays the magic here, i.e. for each chunk the client asks for, up to 8 copies will actually be fetched from the network.
This might get optimised later on.
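A minimal sketch of that behaviour as I understand it (the constant name and the fetch function below are illustrative, not the actual safe client code): if the client asks each of the closest ReplicationFactor peers for its copy, the bytes on the wire can be several times the size of the payload being downloaded.

```rust
// Hedged sketch, not the safe client implementation: fetch a chunk from each
// of the closest REPLICATION_FACTOR peers and keep the first copy. Every
// response still counts towards transferred traffic.
const REPLICATION_FACTOR: usize = 8; // value quoted in the post above

/// Hypothetical per-peer fetch; here it just returns a copy of the chunk.
fn fetch_from_peer(peer: usize, chunk_size: usize) -> Vec<u8> {
    let _ = peer;
    vec![0u8; chunk_size]
}

fn main() {
    let chunk_size = 512 * 1024; // illustrative 512 KiB chunk
    let mut bytes_on_wire = 0usize;
    let mut chunk: Option<Vec<u8>> = None;

    for peer in 0..REPLICATION_FACTOR {
        let copy = fetch_from_peer(peer, chunk_size);
        bytes_on_wire += copy.len();
        chunk.get_or_insert(copy); // keep the first copy received
    }

    println!(
        "chunk of {} bytes cost ~{} bytes of traffic",
        chunk.map(|c| c.len()).unwrap_or(0),
        bytes_on_wire
    );
}
```

If that is roughly what the client does, it would also line up with the ~12GB of tx+rx traffic reported by iftop earlier in the thread for a 1.6GB download, since 1.6GB x 8 ≈ 12.8GB.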
That's a fab diagram you've got, really appreciate your work.
As a general answer to why the memory usage keeps increasing linearly with the bytes written: there are some cache tables, for example the chunk_names kept by this node or the peer routing_table kept within libp2p, that are held in memory with lazy pruning, i.e. they won't get cleaned up eagerly and will keep growing as more peers are discovered and more chunks flow in.
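To illustrate the lazy pruning idea (this is only a sketch; the struct, capacity, and eviction policy below are made up, not the libp2p or safenode code): entries keep accumulating and are only evicted once a threshold is crossed, which is consistent with memory climbing roughly in step with the chunks and peers seen so far.

```rust
// Minimal sketch of a lazily pruned cache: nothing is evicted until a
// capacity threshold is hit, so memory grows with entries seen until then.
use std::collections::VecDeque;

struct LazyCache {
    capacity: usize,
    entries: VecDeque<[u8; 32]>, // e.g. chunk names seen by this node
}

impl LazyCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, entries: VecDeque::new() }
    }

    fn insert(&mut self, name: [u8; 32]) {
        self.entries.push_back(name);
        // Lazy pruning: only trim once over capacity; below the threshold the
        // table just keeps growing as more chunks flow in.
        while self.entries.len() > self.capacity {
            self.entries.pop_front();
        }
    }
}

fn main() {
    let mut cache = LazyCache::new(100_000);
    for i in 0u32..10_000 {
        let mut name = [0u8; 32];
        name[..4].copy_from_slice(&i.to_be_bytes());
        cache.insert(name);
    }
    println!("entries held in memory: {}", cache.entries.len());
}
```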
I doubt that the no-chunk state will reproduce again, but OK, I'll restart it.