By Royal decree from the King of the Internet, the fines issued by the testnet cops have been overturned and the Year of the Testnets has been allowed to continue.
Yay for the continuance of The Year of The TestNets, with a right royal testnet to celebrate!
I was able to successfully upload three batches of files about seven hours ago. Trying now to upload, but stuck on "connecting to the safe network".
Yupppp, it certainly seems to have had a bad time still. I suspect that until we get those fixes from libp2p in a release, we'll have to disable royalty payment forwarding.
Edit: bringing this down; it looks like the gossip messages are potentially looping here.
Are we maybe running a network without Maidsafe nodes now? The one causing the gossip is gone; is memory down?
This is a CALL TO ACTION, COMMUNITY, letās keep this alive!
EDIT:
Hmmm… Vdash shows I have 0 connections, yet the number of GETs keeps growing at pace. And the total of PUTs is not growing, so my nodes are not ping-ponging with each other. Paging @happybeing
Well, the download is not working for me. The upload skipped the verifications:
Connected to the Network
Chunking 1 files...
Input was split into 4 chunks
Will now attempt to upload them...
Uploaded 4 chunks in 38 seconds
**************************************
* Payment Details *
**************************************
Made payment of 0.000000000 for 4 chunks
New wallet balance: 299.999990988
**************************************
* Uploaded Files *
**************************************
Uploaded topibott.jpg to f178aeac1905a08df608d747cad66c4749f1afb83e46b6e02cf4e397baa2b81e
Download:
time safe files download topibott.jpg f178aeac1905a08df608d747cad66c4749f1afb83e46b6e02cf4e397baa2b81e
(...)
Connected to the Network
Downloading topibott.jpg from f178aeac1905a08df608d747cad66c4749f1afb83e46b6e02cf4e397baa2b81e
Error downloading "topibott.jpg": Chunks error Chunk could not be retrieved from the network: 31a65a(00110001)...
I think that's expected. I remember @dirvine mentioning some minimum number of nodes needed for stability; it was about 2000 if I remember right. I/we have maybe 14 now.
If anyone wants to connect I can DM my node connection info.
If we didnāt try we wouldnāt know, thanks for the effort Josh et al.
It may not affect many, but perhaps mention when you expect something may be off compared to other tests.
For instance, I (and I am sure others) calculate what we can reasonably run based on previous tests.
So I spun up 300 nodes, which went very poorly for my machines.
Had there been a note warning that memory usage might be higher, we could have taken that into account, been more prudent with the numbers, and hopefully avoided large numbers of nodes going offline rapidly.
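For anyone wanting to be more prudent with node counts, here is a minimal sketch of starting nodes in small batches and pausing when free memory runs low. The thresholds, batch sizes, and the commented-out launch command are all assumptions, not project defaults; substitute whatever launch command your testnet release actually uses.

```shell
#!/bin/sh
# Sketch: start nodes in batches, stopping when free memory drops too low.
# Values below are illustrative assumptions, not recommendations.

TARGET=20          # total nodes we would like to run
BATCH=5            # nodes started per round
MIN_FREE_MB=2048   # stop launching below this much free memory

mem_free_mb() {
    # MemAvailable is reported in kB in /proc/meminfo (Linux only)
    awk '/MemAvailable/ {print int($2 / 1024)}' /proc/meminfo
}

started=0
while [ "$started" -lt "$TARGET" ]; do
    free_mb=$(mem_free_mb)
    free_mb=${free_mb:-0}
    if [ "$free_mb" -lt "$MIN_FREE_MB" ]; then
        echo "Only ${free_mb} MB free; holding at ${started} nodes."
        break
    fi
    i=0
    while [ "$i" -lt "$BATCH" ] && [ "$started" -lt "$TARGET" ]; do
        # safenode &   # placeholder: replace with the real node launch command
        started=$((started + 1))
        i=$((i + 1))
    done
    echo "Started ${started}/${TARGET} nodes; ${free_mb} MB free."
    sleep 1   # give new nodes a moment to settle before the next check
done
```

The point is only the shape of the loop: checking available memory between batches would have caught the higher usage before all 300 nodes were up.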
Good point. Last time my 20 nodes went the distance, so I started with 30 and some were killed right away. So I killed them all to start again with 20.
I've definitely been thinking about doing this, but since this one didn't run very well even with the original nodes, I think we should try it with the next one, performance depending.
It was mentioned in the context of having a more equal distribution of chunks.
Which is probably wrong.
Most likely the network can work with a small number of nodes, but it's the shrinkage that is not implemented.
Also, there may be some sort of centralization which has been left in semi-intentionally.
Why were there no tests with slowly turning off the initial nodes? It was proposed many times, if I remember correctly.
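Such a test could be scripted very simply: retire the initial nodes one at a time with a pause between each, and watch how the rest of the network copes. The PID-file path, the interval, and the stop method below are all assumptions for illustration; the destructive lines are commented out so the sketch is safe to dry-run.

```shell
#!/bin/sh
# Sketch: gradually retire the initial (seed) nodes to observe network behaviour.
# The pidfile path below is a guess, not a documented project location.

INTERVAL=600   # seconds to wait between shutdowns (10 minutes, arbitrary)
found=0

for pidfile in "$HOME"/.local/share/safe/node/*/safenode.pid; do
    [ -f "$pidfile" ] || continue
    found=$((found + 1))
    pid=$(cat "$pidfile")
    echo "Would stop node with PID ${pid}, then wait ${INTERVAL}s."
    # kill "$pid"        # placeholder: prefer the project's own stop command if one exists
    # sleep "$INTERVAL"  # commented out so the dry run finishes immediately
done

echo "Matched ${found} node pidfile(s)."
```

Even a crude version of this, run against a healthy testnet, would answer whether the network can survive the original nodes slowly disappearing.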