If ~2000 is the minimum for a viable network, why turn nodes off? Unless the community has 2000 nodes up, we just go below the threshold.
I think it was about Kademlia not really behaving well with a low number of nodes.
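To make that concrete: the node builds on rust-libp2p's Kademlia, whose default replication factor `K_VALUE` is 20, so with fewer than 20 reachable peers a record can never reach its intended replica set. A minimal sketch (the `K_VALUE` constant is real; the rest is illustrative, not the node's actual code):

```rust
use libp2p::kad::K_VALUE;

fn main() {
    // Default Kademlia replication factor in rust-libp2p: each record
    // should be held by the K_VALUE (20) closest peers.
    println!("replication factor: {}", K_VALUE);

    // With ~14 reachable nodes (a figure mentioned later in this
    // thread), a record can never reach its full replica set.
    let reachable_nodes = 14;
    if reachable_nodes < K_VALUE.get() {
        println!(
            "{} nodes < {}: expect uneven storage and lookups",
            reachable_nodes, K_VALUE
        );
    }
}
```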
This is due to a change in the logfile messages…
@roland, in this commit you removed the count of connected peers which was here:
info!("New peer added to routing table: {peer:?}, now we have #{connected_peers} connected peers");
Can the count be re-instated in the logs (or is it available elsewhere)? I'm using that log message to display the number of connections for a node in vdash. If not, let me know and I'll remove the count from vdash. Thanks.
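For reference, vdash only needs to pull the number out of that line. A hypothetical sketch of the kind of parsing involved (not vdash's actual code, and assuming the `regex` crate):

```rust
use regex::Regex;

fn main() {
    // Example of the old log message format quoted above.
    let line = "New peer added to routing table: PeerId(..), now we have #42 connected peers";

    // Capture the number after '#' in the old message.
    let re = Regex::new(r"#(\d+) connected peers").unwrap();
    if let Some(caps) = re.captures(line) {
        println!("connected peers: {}", &caps[1]);
    }
}
```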
What do you mean here?
@Toivo corrected this and he is right. Under 2000 nodes the behaviour is very uneven and unreliable.
Shrinkage? If you mean the remaining nodes will try to store all the data, but cannot as they fill up, then you are correct.
Since swapping out nodes has not been tested, it is hard to say that the network has no dependency on the first nodes.
Uneven - this is what I mentioned.
But at what size does it break (can no longer operate)?
For example, in conditions close to perfect, when not many nodes are joining and leaving.
I mostly mean conditions where nodes are not 100% full, but for some reason lots of them go offline.
As for the 100% fill scenario, is the network designed in such a way that continuing operation is theoretically impossible, or is that feature just not implemented?
Just to let folks know, I have not given up.
I still have my two nodes online, and I'm trying to upload a 60MB file. This is how it goes:
Connected to the Network
Chunking 1 files...
Input was split into 115 chunks
Will now attempt to upload them...
Uploaded 115 chunks in 3 minutes 45 seconds
**************************************
* Payment Details *
**************************************
Made payment of 0.000000000 for 115 chunks
New wallet balance: 299.999990969
**************************************
* Verification *
**************************************
115 chunks to be checked and repaid if required
Verified 115 chunks in 19.241969696s
62 chunks were not stored. Repaying them in batches.
⠁ [00:03:24] [####################>-------------------] 60/115
62 chunks to be checked and repaid if required
Verified 62 chunks in 12.606774537s
61 chunks were not stored. Repaying them in batches.
⠁ [00:03:27] [######################################>-] 60/62 Repaid and re-uploaded 62 chunks in 207.191046061s
61 chunks to be checked and repaid if required
Verified 61 chunks in 12.037679902s
59 chunks were not stored. Repaying them in batches.
⠁ [00:02:36] [#####################################>--] 57/61 Repaid and re-uploaded 61 chunks in 156.657327161s
59 chunks to be checked and repaid if required
Verified 59 chunks in 34.056230296s
59 chunks were not stored. Repaying them in batches.
⠤ [00:04:20] [######################################>-] 57/59 Repaid and re-uploaded 59 chunks in 260.449517741s
59 chunks to be checked and repaid if required
Verified 59 chunks in 19.487866469s
59 chunks were not stored. Repaying them in batches.
⠁ [00:02:30] [#####################################>--] 56/59 Repaid and re-uploaded 59 chunks in 150.34703527s
Seems to be getting stuck at 59 chunks.
(I don't know why I'm doing this…)
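As far as I can read it from the output, the client loops: verify which chunks are stored, then repay and re-upload the missing ones, and verify again. A rough sketch of that loop as I understand it (all names are hypothetical, not the actual client code):

```rust
struct Chunk;

// Placeholder: in the real client this queries the network.
fn is_stored(_c: &Chunk) -> bool {
    false
}

// Placeholder: in the real client this pays again and re-PUTs.
fn repay_and_reupload(_chunks: &[Chunk]) {}

fn upload_with_verification(mut pending: Vec<Chunk>) {
    while !pending.is_empty() {
        // Drop chunks the network confirms as stored.
        pending.retain(|c| !is_stored(c));
        if pending.is_empty() {
            break;
        }
        println!("{} chunks were not stored. Repaying them in batches.", pending.len());
        repay_and_reupload(&pending);
        // If the network can never store them (e.g. too few nodes),
        // this loop never converges, matching "stuck at 59 chunks".
    }
}

fn main() {
    upload_with_verification(Vec::new());
}
```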
This looks completely broken.
It would be nice to know what exactly prevents it from working.
By the way, how many records does your node have?
Is the network in the state described above by @dirvine, close to 100% fill, or not?
@Toivo I was talking about the number of files in the record_store directory.
But it may correspond to the 235 PUTs from your screenshot.
If I remember correctly, 100% fill is 2048 records.
So right now the network has approximately 11% fill (235 * 100 / 2048).
Looks like "cannot as they fill up" is not the case right now.
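The arithmetic as a quick sketch (assuming the 2048-record capacity above is right):

```rust
fn main() {
    // Assumed max record count per node, as recalled above.
    let capacity = 2048.0_f64;
    let records = 235.0_f64;
    // 235 / 2048 is roughly 11.5% fill.
    println!("fill: {:.1}%", records / capacity * 100.0);
}
```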
Yes, one has 235 and the other 293.
I don't know if it really is possible to track the issues, as the network at the moment is so small. Unfortunately I don't have time to dig any deeper now, but I remember seeing in the logs something about 20 nodes being required for some basic action, and at the time the number of nodes available was 14. I don't know how many nodes are left at the moment, maybe even fewer.
Sure! That would not be a problem.
It was removed because I assumed that .count() on an iterator would be costly. But we do that frequently in other places as well. I think we should cache that somewhere later down the line.
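Something like this is what I have in mind for the cache (a hypothetical sketch, not the actual routing-table code): keep a counter updated on add/remove events instead of re-counting the iterator on every log line.

```rust
// Hypothetical sketch of caching the connected-peer count.
struct PeerTracker {
    connected_peers: usize,
}

impl PeerTracker {
    fn on_peer_added(&mut self, peer: &str) {
        self.connected_peers += 1;
        // A cached value makes re-instating the old log line cheap.
        println!(
            "New peer added to routing table: {peer}, now we have #{} connected peers",
            self.connected_peers
        );
    }

    fn on_peer_removed(&mut self, _peer: &str) {
        self.connected_peers = self.connected_peers.saturating_sub(1);
    }
}

fn main() {
    let mut tracker = PeerTracker { connected_peers: 0 };
    tracker.on_peer_added("PeerId(example)");
    tracker.on_peer_removed("PeerId(example)");
}
```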
Same thing today. But it finds the initial 5 peers rather quickly. I wonder who the other morons still running their nodes are?
I'm gonna kill my nodes now.
Funny thing, there's about one GET per second going on, according to vdash. And it seems to reflect the network activity graphs in my system monitor. OK, let's see what happens when I kill my nodes… Yep, it stopped the upload. So my nodes were really sending chunks somewhere.
I'm afraid it's impossible without people publishing their peer IDs, because the default node/client behavior is to connect to MaidSafe's nodes, which are offline. So this prevents anybody with a fresh install from connecting to the network.
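If I remember right, the safe_network binaries can be pointed at community-published peers instead, via a SAFE_PEERS environment variable (or a --peer flag). A sketch of the idea on the client side (hypothetical glue code, not the actual client; the variable name is my recollection):

```rust
use std::env;

fn main() {
    // Read community-published bootstrap addresses instead of relying
    // on the (currently offline) baked-in MaidSafe contacts.
    match env::var("SAFE_PEERS") {
        Ok(peers) => {
            for addr in peers.split(',') {
                println!("bootstrap candidate: {addr}");
            }
        }
        Err(_) => eprintln!("SAFE_PEERS not set; falling back to baked-in contacts"),
    }
}
```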