ReplicationNet [June 7 Testnet 2023] [Offline]

Thank you, do you have any thoughts on why some of us see no chunks?

2 Likes

According to the statistics @joshuef collected earlier today across the droplet nodes, and the posts from the community, I’d say the replication itself has been a success in general so far.
There is no report of high CPU/memory usage, nor of a lost uploaded file so far. (I saw just one report of a failed download, with around 3 chunks lost out of 1.3k chunks, but the file was downloaded back successfully on a second try.)

Regarding the reports of no chunk files being created, that is awaiting further investigation. The distribution of chunks is not even but roughly Gaussian, so a node holding 0 copies can happen. Anyway, we are looking for the full log of such an empty node to confirm the behaviour.

Some other problems/enhancements have also been reported/suggested, and they have been recorded.
Keeping the testnet running for a longer time will give us more valuable learnings.

23 Likes

Wow guys, this sounds very promising! No time for me to test unfortunately, as I am packing for a camping weekend, so I hope there is something left to test when I am back.
Keep up the good work! :clap:

9 Likes

Here is the log for one of my failed mp3s.

If anyone feels like taking a look, the failure rate is really holding back my uploading :frowning:

safenode.log.zip (64.7 KB)

9 Likes

What is the largest single file anyone has successfully uploaded?

I have a pass and a fail

12 KB is fine, 362 MB is not

willie@gagarin:~/projects/maidsafe/safe_network-ReplicationNet$ safe files upload ~/willie03.ics 
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Current build's git commit hash: HEAD
🔗 Connected to the Network
Storing file "willie03.ics" of 12455 bytes..
Successfully stored file "willie03.ics" to c81b3e79766f8d0b610471aa058ad41cb07452830c1a92d8b1c21a6dd7471573
Writing 60 bytes to "/home/willie/.safe/client/uploaded_files/file_names_2023-06-08_13-01-48"
willie@gagarin:~/projects/maidsafe/safe_network-ReplicationNet$ safe files upload ~/Videos/cooking/madhur/madhur05.avi 
Removed old logs from directory: "/tmp/safe-client"
Logging to directory: "/tmp/safe-client"
Current build's git commit hash: HEAD
🔗 Connected to the Network
Storing file "madhur05.avi" of 362742180 bytes..
Did not store file "madhur05.avi" to all nodes in the close group! Network Error Outbound Error.
Writing 8 bytes to "/home/willie/.safe/client/uploaded_files/file_names_2023-06-08_13-02-17"
3 Likes

@mav succeeded in uploading a 1.8 GB file and downloading it back, though the speed is a bit slow.

9 Likes

Hi @aatonnomicc, the log you shared seems to have only info-level logging turned on.
If possible, could you please turn on trace-level logging, carry out an upload again, and share the log files?

Also, there seems to be no Current build's git commit hash: ... line in the log, which suggests an old client is being used?
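A quick way to check whether that line is present (a hedged sketch; safenode.log is just the filename from the attachment above, so adjust it to your own log path):

# search the node log for the commit-hash line
grep "git commit hash" safenode.log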

thank you very much

8 Likes

This testnet is fantastic! Time to celebrate. :champagne:

I have a suggestion. Giving these testnets cool names is a great idea, but it would also help to have a number with each one, so people know the order of release. That way, when there are a lot of them, it will be easier to tell how old a testnet is.

Edit: It will also be a subtle way to remind everyone of just how many testnets Maidsafe has produced.

8 Likes

My node was not receiving chunks. I sent the logs to @Qi_ma and he noticed the git commit was missing and there were loads of “connection closed to peer” errors.

I downloaded safenode again, started a new node, and checked that the line was there in the log. This time it was:

Current build's git commit hash: 0be2ef056215680b02ca8ec8be4388728bd0ce7c

Within 10 mins this new node was blessed with a chunk. :tada:

Qi also suggests using SN_LOG=all RUST_LOG=trace for a belt and braces approach to logging.
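For anyone repeating this, a minimal sketch of launching a node with those variables set (only the SN_LOG/RUST_LOG values above and the safenode binary name come from this thread; the rest of the invocation is an assumption, not a confirmed default):

# start a node with the verbose logging suggested above
SN_LOG=all RUST_LOG=trace ./safenode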

16 Likes

Thanks @qi_ma, I think this is it with trace enabled.

version

ubuntu@safe-byres:~$ safe -V
sn_cli 0.77.0

and the log of a failed upload:
safenode.log.zip (79.1 KB)

8 Likes

A big :+1: to all! We are getting stable! Someday in about 2 years I will join the testnets once more, or sooner if I find the time (still working on my own software project right now), although I feel we will have a functioning network long before then anyway. Cheers

10 Likes

Hi @aatonnomicc, thx for the log.
The logs show the upload failed due to a timeout when sending copies to the network nodes.

May I know your upload connection speed? thx

7 Likes

This is a speed test from my laptop on WiFi; the box that I am uploading from is hard-wired to the router.

1 Like

Hi @aatonnomicc, thx for the shared connection info.
It seems a bit tricky then.

Your speed test shows an upload speed of 3.41 Mbps.
Meanwhile, the log you shared shows that within a 15 s period, around 8.5 MB of data was transmitted:

[13:51:08.872114] ...,"network":{"interface_name":"eno1",...,"total_mb_transmitted":78545.59},...
[13:51:23.880305] ...,"network":{"interface_name":"eno1",...,"total_mb_transmitted":78564.016},...

8.5 MB * 8 / 15 s = 4.53 Mbps, which is higher than your speed test result.
Note this is whole-OS usage; the client's own usage over the same period is "total_mb_written":0.565248, i.e. 0.56 MB, and that includes writing to log files.

This suggests some other process is consuming network bandwidth and choking the client upload?
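For anyone who wants to repeat this check on their own logs, a rough sketch (the total_mb_transmitted field name comes from the quoted lines; the safenode.log path is just an example):

# pull the cumulative transmitted-MB counter out of a node log
grep -o '"total_mb_transmitted":[0-9.]*' safenode.log
# back-of-envelope throughput for a 15 s window, using the 8.5 MB figure above
echo 'scale=2; 8.5 * 8 / 15' | bc    # prints 4.53 (Mbps)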

1 Like

I started over. I see Current build's git commit hash: 0be2ef056215680b02ca8ec8be4388728bd0ce7c, but chunks evade me.

1 Like

Seems strange indeed; that box is only for testnets.

I tried uploading on the same internet connection from my laptop over WiFi, and it was the same failure rate.

I made a 5G hotspot from my phone, connected the laptop to that, and it uploaded just fine with a 100% success rate.

2 Likes

They’re not exactly pouring. I only have 10 chunks after 2 hours.

1 Like

Third time lucky :four_leaf_clover: I can move on with my day now!

root@SAFE:/tmp/safenodedata# du -sh record_store
90M     record_store
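A related, unverified follow-up check: counting the entries in that directory should roughly show how many records the node holds, assuming one file per stored record (which I haven't confirmed):

ls record_store | wc -l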
1 Like

I have a similar issue when visiting GitHub pages sometimes:
visiting via the wireless connection from my laptop at home fails due to a timeout,
but when I use my phone's 4G shared hotspot, the visit succeeds.
Those timeouts really depend on how many gateways the traffic passes through, and sometimes the home router can be the main choke point.

1 Like

Yea, true. I'm going to have a look at the router and see if there are any updates available.

Looking to the future: I don't have the best broadband in the world, but it's probably better than what 80% of the world's population has. If it won't work on my setup, then there could be trouble for mass adoption.

The setup was exactly the same for previous testnets and I never had problems before. But I used to set a high timeout; is there any way to increase the timeout on this latest client?

1 Like