Sounds like the end of the road. Should I hold off on getting nodes up?
The other thing I did was start a load of nodes, then quickly shut them down to restart with fewer nodes per VPS, to allow for 300MB per node; my script originally allowed for 100MB per node.
Just in case that could have upset a young network.
What does "holes" mean here?
Dead nodes? Empty nodes?
I’m asking because my node is chunkless again (empty record_store).
Nodes that do not yet have records.
Updated the title; marking this as paused for now while we poke at the records stored and confirm whether it’s a bug in prune (or indeed, whether we are actually full of relevant data).
I did some testing with the client: no problems with any downloads or uploads with royalty payments (including a 1.4GB upload). So that’s a positive 🙂
Although uploads are still painfully slow (especially upload repayments). Hopefully the upload/download speeds will continue to get faster as we march towards the beta/launch!
It does look like we’re rejecting valid data at the moment when full (i.e., not pruning irrelevant data).
I thiiink we’ve not seen this because, prior to the recent pay-one-node change, replication was less frequent, and so we never hit the roof. (And on the more recent testnets we just died really fast, so we didn’t get to see this at all.)
We’re working up some changes/fixes and tests to further verify this atm.
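For context on what “pruning irrelevant data” could look like, here’s a rough hypothetical sketch (not the actual safe_network code; the function names, 4-byte keys, and `k` parameter are all illustrative). Relevance is judged by Kademlia-style XOR closeness: a full node should evict a record it is no longer among the closest peers to, rather than reject closer incoming data.

```rust
// Illustrative only: toy 4-byte keys instead of real 256-bit addresses.

/// XOR distance between two keys, as a comparable integer.
fn xor_distance(a: [u8; 4], b: [u8; 4]) -> u32 {
    u32::from_be_bytes([a[0] ^ b[0], a[1] ^ b[1], a[2] ^ b[2], a[3] ^ b[3]])
}

/// A record is "relevant" to us if we are among the `k` closest known
/// peers to its key. A full node could evict its least relevant record
/// to admit a more relevant incoming one, instead of rejecting it.
fn is_relevant(record: [u8; 4], us: [u8; 4], peers: &[[u8; 4]], k: usize) -> bool {
    let ours = xor_distance(record, us);
    let closer = peers.iter().filter(|&&p| xor_distance(record, p) < ours).count();
    closer < k
}
```

The key point matching the bug description above: with logic like this, “full” is never a reason to reject valid data that is closer to us than something we already hold.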
Can’t get tokens:
Failed to get tokens from faucet, server responded with: Failed to send tokens: Transfer Error Failed to send tokens due to The transfer was not successfully registered in the network: CouldNotSendMoney(“Network Error Could not retrieve the record after storing it: 5a72b6f2a80d774e061b22e99f4db7aa4a20cbaef30bf842110ac6466ac2a314(f086b83202bddfb6fb208dbd67b057e03a9f64fdf35cda1ccf42b52d998edfa5).”).
Weird error message, too.
Okay, I’ll likely bring this one down soon; it’s a bit of a dead duck now. We’ll be gathering a few more logs on a smaller testnet to verify more before we re-up.
Wrong place for this, I know, but @moderators, threads are not updating elsewhere too; dunno if this is within your powers to correct.
Works for me.
Can you try a different browser?
Same on my phone, albeit in the same browser, but I will give a different browser a bash.
OMG @Vort where’s your dark theme … my eyes are burned out!
Seriously though, I’ve had to reload the main page to get it to update a couple of times this week. New bug maybe? I’m on Brave browser, but was working fine in the past and I haven’t updated it.
I’m getting notifications by e-mail, by the way.
Mail arrives approximately 10 minutes after the testnet topic is created.
This one maybe?
Need to do a Ctrl+F5.
I had this yesterday, and only Ctrl+F5 fixed it.
Wee update here.
We’ve exonerated the record store from this. Our nodes really were full. But they should not have been! Their local knowledge is out of whack with the real network topology, leading to an inability to store data.
I can only speculate as to why we’re seeing this now (the fact we can put more data?), but essentially it looks like our bootstrap process (via libp2p) isn’t as effective as we’d thought, tending towards further nodes rather than closer ones…
So we’re looking at some tweaks to this to help discover the network more reliably and consistently across the space.
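To illustrate the “further rather than closer” point: a toy sketch (purely illustrative, not the libp2p bootstrap code; 4-byte keys for brevity) of ordering discovered peers by XOR distance to our own key, so that routing knowledge fills in our close neighbourhood first instead of skewing towards distant nodes.

```rust
// Illustrative only: sort candidate peers closest-first by XOR distance
// to our own key, Kademlia-style. Real keys would be 256-bit.
fn closest_first(our_key: [u8; 4], mut candidates: Vec<[u8; 4]>) -> Vec<[u8; 4]> {
    candidates.sort_by_key(|p| {
        // Byte-wise XOR distance; [u8; 4] compares lexicographically,
        // which matches big-endian numeric ordering here.
        [p[0] ^ our_key[0], p[1] ^ our_key[1], p[2] ^ our_key[2], p[3] ^ our_key[3]]
    });
    candidates
}
```

If bootstrap instead keeps whatever peers it happens to dial first, a node can end up “knowing” mostly far-away peers, mis-judging where it sits in the address space, exactly the kind of skewed local knowledge described above.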
Hey Josh, I am working on log management for ntracking and have a network running to test against. I have uploaded nothing to it, but many nodes have a single chunk in the record store. Any idea why? I only ask here as I wonder if it has anything to do with the issue at hand.
The current main there is actually getting the parent folder content count, not the record store. This PR changes that: more logs no cns by joshuef · Pull Request #40 · maidsafe/sn-testnet-deploy · GitHub
So likely just that, I think? (ie, not a big issue)