I’ve been doing a bit too much at random to be comfortable…
So: stopping and restarting safe auth, coupled with switching to new terminals; creating new accounts; losing track of what has reliably uploaded; and too many hanging terminal queries to be tempted to do more. I’m holding off on uploading large files etc. for now, since I also don’t know where the limits lie - what balances and file sizes are practical - and I haven’t yet tested large files versus RAM usage. All good, but evidently a few glitches still need hammering out.
I haven’t had a chance to try it yet - hopefully at the weekend - but on Windows with Baby-fleming that step sometimes took a couple of goes, requiring restarting authd and killing the nodes. Also, at one stage on Windows the terminal didn’t show any output, so it appeared to have hung, but you just need to open a new terminal and carry on from there (from memory I think that was the authorisation stage, but I could be wrong).
Yes we do have a resource proof thing here as well. So joining nodes need to do some work, not a huge amount, but some.
I think this is where we must live with it; in the worst case we can cap the queue at a certain length, say 100 joining nodes, but beyond that it’s back to the start. I am sure we will play around more with this part though.
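Just to illustrate the bounded-queue idea - this is purely a sketch with hypothetical types, not the actual node code:

```rust
use std::collections::VecDeque;

/// Hypothetical identifier for a node waiting to join.
type CandidateId = [u8; 32];

/// A fixed-capacity queue of joining candidates: once it fills up,
/// new applicants are turned away and have to start over later.
struct JoinQueue {
    capacity: usize,
    waiting: VecDeque<CandidateId>,
}

impl JoinQueue {
    fn new(capacity: usize) -> Self {
        Self { capacity, waiting: VecDeque::with_capacity(capacity) }
    }

    /// Returns true if the candidate was queued, false if it must retry later.
    fn try_enqueue(&mut self, candidate: CandidateId) -> bool {
        if self.waiting.len() >= self.capacity {
            false // queue full: "back to the start" for this candidate
        } else {
            self.waiting.push_back(candidate);
            true
        }
    }

    /// The next candidate the section will attempt to take in.
    fn next_candidate(&mut self) -> Option<CandidateId> {
        self.waiting.pop_front()
    }
}
```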
Isn’t it nice to be able to discuss this stuff now? That’s a great thing about a feature-complete testnet - there’s nothing else to consider.
IMO, as a natural defense, clients should be expected to perform more work than whatever the network would perform. Ideally the work is useful to the network, but that’s not necessary to dissuade spam/flood attacks. To enqueue, perhaps it’s a calculation; to write data, it’s an SNT fee.
Short of that, what’s to prevent an attacker from spinning up a bot farm to spam joins, either blowing out the queue’s storage or starving the network of capacity growth by leaving it unable to find legitimate nodes to enlist?
Obviously, security vulnerabilities in the design are more appropriately discussed later, but it’s been raised, so I can’t resist.
I removed those options several hours ago and am now successfully watching TryJoinLater flood the logs.
But I still think it is a bug in Safe Network.
Making the configuration change was OK for me, but it was hard to understand what was going on.
So since it is a testnet, such a problem doesn’t need to be fixed ASAP, but I think people should know it exists.
Yes, this is the case for the resource_proof use. A new node needs to do a bunch of work in excess of that done by the network to hold him. It’s all quite subtle in many ways, though (how much work to hold him in a queue? It could be a lot or almost none).
Could client-initiated network workload, from first hello through to writes, be classified into budget tiers based upon the scale of the work, and those budget tiers assigned resource_proof tasks commensurate with the cost so as to not leave the network in a resource deficit?
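To make the question concrete, a tier scheme could look roughly like the sketch below; the tier names, difficulty numbers, and the idea of scaling writes by size are my own illustrative assumptions, not anything the network defines:

```rust
/// Hypothetical budget tiers for client-initiated work, from the first
/// hello through to data writes. Purely illustrative.
enum WorkTier {
    /// Initial handshake / join request: cheap for the network, so a
    /// small client-side proof would suffice.
    Hello,
    /// Queries that make the network do some lookup work.
    Read,
    /// Writes consume storage and bandwidth long-term.
    Write { bytes: u64 },
}

impl WorkTier {
    /// Required proof-of-work difficulty (leading zero bits) per tier,
    /// intended to be commensurate with the cost the network bears.
    fn required_difficulty(&self) -> u32 {
        match self {
            WorkTier::Hello => 8,
            WorkTier::Read => 12,
            // Scale with the size of the write; a real scheme might
            // charge an SNT fee here instead of (or as well as) work.
            WorkTier::Write { bytes } => 16 + (*bytes / (1024 * 1024)) as u32,
        }
    }
}
```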
We don’t have any mechanism to allocate work to Infant nodes (nodes waiting to join), as the network needs to “take them in” and that means work. What we really want is for them to be willing to hang around (not just join then leave) and do some work. The work right now is just random stuff (hashcash-style leading-zeros matching).
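For anyone curious what that kind of work looks like, here is a minimal hashcash-style sketch in Rust, assuming the `sha2` crate; the actual resource_proof code may differ in hash function, encoding and difficulty, so treat the names and numbers as illustrative only:

```rust
use sha2::{Digest, Sha256};

/// Count the leading zero bits of a hash digest.
fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for byte in hash {
        if *byte == 0 {
            bits += 8;
        } else {
            bits += byte.leading_zeros();
            break;
        }
    }
    bits
}

/// Joining node's side: find a nonce such that SHA-256(challenge || nonce)
/// has at least `difficulty` leading zero bits. Expensive by design.
fn solve(challenge: &[u8], difficulty: u32) -> u64 {
    let mut nonce: u64 = 0;
    loop {
        let mut hasher = Sha256::new();
        hasher.update(challenge);
        hasher.update(nonce.to_le_bytes());
        if leading_zero_bits(&hasher.finalize()) >= difficulty {
            return nonce;
        }
        nonce += 1;
    }
}

/// Section's side: one hash is enough to check the candidate's answer.
fn verify(challenge: &[u8], nonce: u64, difficulty: u32) -> bool {
    let mut hasher = Sha256::new();
    hasher.update(challenge);
    hasher.update(nonce.to_le_bytes());
    leading_zero_bits(&hasher.finalize()) >= difficulty
}
```

The point is simply that solving is expensive relative to verifying, so the section holding the queue spends almost nothing to check each candidate.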
I agree that Sybil attack prevention is needed for the network now.
But I think it is preferable to use as many user-provided resources as possible.
If there are lots of people who want to contribute and the network does not need all their resources, it can then focus on resource quality - selecting the nodes with the best ping / largest storage / most RAM / most CPU power.
That would be better than selecting some random guy with a poor network connection and a small HDD just because he is lucky.
However, it is good to have “bad” resources available too - just in case the people with better resources go offline for some reason.
This is an interesting point… if random selection is the algorithm, what’s to stop a 3-letter agency from continuously signing up a bargeload of nodes all over the world and then degrading/corrupting output if/when they are selected? Conversely, if only the highest-quality nodes can be selected, what would prevent Amazon/Google from taking over?
What if every queue selection for node hosting required burning e.g. 10 safe to the network as an anti-spam measure?
As for useful resource_proof ideas, would checksumming a random data block qualify? I’m thinking of it like a ‘free,’ externally performed and validated, continuous ZFS scrub. Take the scenario of a node joining the queue: the node is given some URL and a hash, which can either be the real one or a rand() fake; the node reads the URL, calculates the hash of the data, and returns whether the hashes match; if the answer is correct, the node is entered into the queue.
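A rough sketch of both sides of that exchange, assuming the `sha2` crate, with the actual network fetch elided and all names hypothetical:

```rust
use sha2::{Digest, Sha256};

/// What the section hands the joining node: where to read the data and a
/// claimed hash, which may be the real digest or a deliberately fake one.
struct ScrubChallenge {
    url: String,
    claimed_hash: [u8; 32],
}

/// The joining node's side: fetch the data from `url` (the fetch itself is
/// elided here), hash it, and report whether it matches the claim.
fn answer_challenge(challenge: &ScrubChallenge, fetched_data: &[u8]) -> bool {
    let digest = Sha256::digest(fetched_data);
    digest.as_slice() == challenge.claimed_hash.as_slice()
}

/// The section's side: the node is admitted to the queue only if its answer
/// agrees with whether the claimed hash was genuine.
fn admit_to_queue(node_answer: bool, claim_was_genuine: bool) -> bool {
    node_answer == claim_was_genuine
}
```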
FWIW, in case the network read seems a curious addition, my rationale is that there’s value in beginning to establish basic functional compatibility.
Getting agreement (consensus) on that, and at what level, is quite a bit of work: nodes will come and go, they may lie, so we need provable tests seen by a supermajority, and so on. So this kind of thing is not insignificant (and I mean it’s extremely hard). Then nodes can proxy for others - pass the tests and then swap - so you have to protect against that, and then it’s way down the rabbit hole.
I’m not saying this is wrong, but it simplifies a significant process that does not really exist yet. I know Filecoin played with vector attributions and RSA commitments to try some of that, but I have not seen the results. I have seen their hardware requirements, and that might be related? That was only for available space, though, not CPU etc.