[Offline] Another day another testnet


5 Likes

Another possibility is that you were able to join because some of the initial nodes failed.

3 Likes

I’m still getting chunks!

Hmm… how do I stop the ongoing joining process? I tried CTRL+C, and after running the safe networks sections command it just keeps trying to join again, giving:

The network is not accepting nodes right now.
2 Likes

#methree

ubuntu@safe:~/safe/Mp3's/10mb/1$ safe files ls safe://hyryyryip96f9ojodzifniqn1zcu3wbqnhsiordj7o51jh5do86c596yhqhnra
Error:
   0: NetDataError: Failed to GET file: NotEnoughChunksRetrieved { expected: 3, retrieved: 0 }

Location:
   sn_cli/src/subcommands/files.rs:355

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

try pkill -e sn_node

1 Like

I’m getting serious pining-for-the-fjords vibes here, lads :frowning:

No response from safe wallet create, either from home or from the Hetzner nodes. No response on putting tiny files like ~/.bash_logout either.

It took us a while, but we managed to bork it. Hopefully there will be some useful info in the logs for the team.

7 Likes

Yep, this is an ex-testnet. It has ceased to be, shuffled off to join the other ex-testnets.

willie@node02-public-testnet:~/to-upload$ safe files put ~/.bash_history 
Error: 
   0: Failed to connect with read-only access
   1: ConnectionError: Failed to connect to the SAFE Network: NetworkContacts("failed to make initial contact with network to bootstrap to after 3 attempts, result in last attempt: Err(NoResponse { msg_id: MsgId(1163..ba9e), peers: [Peer { name: b62694(10110110).., addr: 161.35.42.143:44107 }, Peer { name: 38608b(00111000).., addr: 142.93.38.111:35572 }, Peer { name: a1dcf8(10100001).., addr: 104.248.167.4:43678 }] })")
4 Likes

Eventually my node joined as well, at 2022-12-17T07:26:08 GMT

Very plausible.

My node stats:

95M chunks/
992K register/

(only)

Puts or gets no more

RIP

3 Likes

They are, but DO bandwidth is getting pulverised and we are getting loads of alerts.

I think setting higher limits with no payments means folk are just gonna hammer the hell out of them till they fill.

I wonder if it’s better to test them with real data: sharing links etc., working with DBCs and so on, to see it all works properly. Then as it fills up we get to add nodes. Mind you, there is a fix we need to put in place to stop the bugs there, and that is the NAT detection thing and likely the joining-loop mem hog.

9 Likes

I think we may have DO killing nodes as well now.

Hard to work out how best to test this. I think we need a better testing overview for each testnet, to have everyone test particular parts. Not easy though, as folk will do what they want, and we kinda want that too.

Need to think on this a wee bit more

11 Likes

Definitely agree with testing with real data. How about dropping down to 20GB per node, to try and avoid the draw to upload random data to beat the Join Wall?

7 Likes

How far are we away from paying for uploads?

A testnet is prepared and its DBC instantiated.
The devs decide in advance the max total data the testnet will accept.
Us punters apply via a dedicated thread on here for permission to upload, giving our wallet addresses.
Testcoins are sent to those granted permission to upload.
Useful data, i.e. non-random verifiable content (links, articles, mp3s, videos once we get past the 10MB limit), gets added at a (somewhat) controlled rate.

maybe grant testcoins on showing evidence of having verified some earlier data?
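
That verification evidence could be as simple as fetching a file and comparing checksums. A rough sketch, assuming safe cat streams a file’s contents to stdout; the URL and checksum here are placeholders, standing in for ones published in the thread:

#!/bin/sh
# Sketch: verify an earlier upload by fetching it and comparing checksums.
# The safe:// URL and expected sha256 are placeholders; in the scheme
# above they would come from the dedicated forum thread.
url="safe://hyryyry..."                  # placeholder, not a real upload
expected="put-the-published-sha256-here" # placeholder checksum

# Assumes `safe cat` writes the file contents to stdout
actual="$(safe cat "${url}" | sha256sum | cut -d' ' -f1)"

if [ "${actual}" = "${expected}" ]
then
    echo "Verified OK: ${url}"
else
    echo "MISMATCH for ${url}: got ${actual}"
fi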

It’s a good bit more work for the devs of course… #JustAThott

8 Likes

How do I know this is not just you trying to steal my node’s place in the network? :wink:

Seriously though, what is that command supposed to do? Can you explain a bit? I’m still getting chunks, and I would not like to do any premature damage to my node or the network. My plan is to let the node live as long as anything moves.

At least if there is any chance to get your node in, it seems to light that spark to move towards that goal, no matter what the other goals of the testnet might be. I mean, if this testnet had been set up with absolutely no chance of running a node from home, maybe the hammering would have been a bit more on the easy side.

3 Likes

Indubitably.

I tend to use the -e switch out of habit now; it shows the PID of each process being killed.

2 Likes

Did it. The result:

sn_node killed (pid 1310)
sn_node killed (pid 18638)

The latter was the pid I got for the node when it joined.

And it seems I’m not getting chunks anymore. Thanks anyway @Southside, I highly appreciate you always helping me and others. I needed to do something, because it seemed to me that memory usage was creeping up again, and I didn’t want to leave the situation as it was, because that way my whole machine could have been frozen by the morning. Your advice was the best I got, once again.

Good night everyone!

6 Likes

I think that just makes it an easier join wall. It may be best just to disallow joins outright until we have data security in and working 100% with no mem leaks. I don’t think that is far away, but we will also need to test for churn, which means allowing joins. It’s a good problem to have at this stage though; all very encouraging.

I suspect coding in pay-for-data and having data signed is maybe a step we need to take soon.

9 Likes

One option, I wonder, would be to automate the data but still allow nodes to join… perhaps a temporary add-on in the node install that pushes and pulls data randomly but consistently over the whole network, which is less user-volatile. PUT and GET proofs at a steady pace up to some cap, then see that the network can handle that volume in a stable way while a few other new nodes join. Still liable to on/off, but perhaps tracking how many nodes are up will flag if that is occurring. 2¢

2 Likes

Using real data would be a bit of a pain, I think. I modified my script to slow things down and use variable-sized random files. Also, it should probably only be run as one or two instances, not as many as you can.

#!/bin/sh
# Upload variable-sized random files in a loop, logging each PUT
while true
do
    # Random size between ~10 KB and ~900 MB
    filesize="$(shuf -i 10000-900000000 -n 1)"
    # Random 16-character hex filename
    filename="$(tr -cd 'a-f0-9' </dev/urandom | head -c 16)"
    head -c "${filesize}" </dev/urandom >"${filename}"
    safe files put "${filename}" | tee -a putlog.txt
    rm "${filename}"
    # Pause 1-10 seconds between uploads
    sleep "$(shuf -i 1-10 -n 1)"
done

haven’t tested it yet.

Yes, but there is no point in testing PUTs without GETs.
Real data can be verified. Unless you want to send checksums with each random file and keep track of them.
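
Keeping track of them needn’t be much extra work, mind. A small tweak to the loop above, assuming the output of safe files put contains the file’s safe:// URL (that output format is an assumption, as is the checksums.txt log):

#!/bin/sh
# Sketch: same random-file PUT loop, but log a sha256 checksum alongside
# the returned safe:// URL so the uploads can be verified with GETs later.
while true
do
    filesize="$(shuf -i 10000-900000000 -n 1)"
    filename="$(tr -cd 'a-f0-9' </dev/urandom | head -c 16)"
    head -c "${filesize}" </dev/urandom >"${filename}"
    checksum="$(sha256sum "${filename}" | cut -d' ' -f1)"
    # Assumes the PUT output contains the file's safe:// URL somewhere
    url="$(safe files put "${filename}" | grep -o 'safe://[a-z0-9]*' | head -n 1)"
    echo "${url} ${checksum}" >>checksums.txt
    rm "${filename}"
    sleep "$(shuf -i 1-10 -n 1)"
done

Then anyone with the list can GET each URL and compare checksums.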

There is a part to play in the coming testnets for those who cannot run a node for whatever reason but can verify our uploads.
We can stress test it later :slight_smile:

2 Likes