How would we enforce that?
I'm not saying it's not desirable, but I suspect it's unenforceable.
35GB came about cos it was 32GB for storage, 3GB for logs. I'm seeing many times more disk space used for logs than for chunks right now.
Your ISP connection at home usually restricts you to 3 or 4 separate IP addresses. To get more IP addresses it means signing up for a business account and that costs more money.
Alternatively, one could run their nodes from a few IP addresses at a colocation facility hosting their equipment, where bandwidth is big and relatively cheap, if you have the node count.
I think @rreive is posting AI-generated text that may or may not be true… and in the case of the IP-dependent rewards it is certainly wrong…
lol, no, it's me. I just thikn long and hard before posting, spelling mistakes included (the AI test)
the IP dependency is still wrong
Then enlighten me, will ya?
I dunno, write some code to check a node's capacity is >35GB before allowing it to connect to the network? Some wizardry?
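For what it's worth, a naive version of such a check is trivial to write but just as trivial to game, which is part of the problem being discussed. A minimal sketch (in Python for brevity, with a made-up `has_min_capacity` helper and the 35GB figure from this thread) might look like this:

```python
import shutil

MIN_CAPACITY = 35 * 1024**3  # the 35GB figure from this thread


def has_min_capacity(path: str, minimum: int = MIN_CAPACITY) -> bool:
    """Check that the disk backing `path` has at least `minimum` bytes free.

    Note the weakness: this only proves free space exists at join time.
    The disk can fill up later, and many nodes on one machine can all
    count the same free space, which is why it is easy to subvert.
    """
    usage = shutil.disk_usage(path)
    return usage.free >= minimum
```

A modified node binary could simply skip the check or lie about the result, which is the enforcement gap raised further down the thread.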
well - for all I know rewards are strictly proportional to network share …
(and that was in line with what my nodes did earn as long as I was running them)
pretty costly stuff for 20m nodes! Is it worth it?
Well, the imperative always was (still is?) to make the node code as lean and mean as possible and put as much of the load onto the client as possible.
I'm not sure how you could write that free-HDD-space check code such that it could not easily be subverted, while not increasing node complexity significantly.
I understood it to be an SC running on Arbitrum spraying rewards at wallet addresses listed on Arbitrum. That said, those wallet addresses map to unique IP addresses, none of which are associated with a group of nodes?
Could you link the SC where this happens?
I highly doubt the part with IP addresses is the case
If someone starts to upload 700 petabytes, I’m quite sure that capacity is there before the upload ends…
Yeah, I'm not sure either, but I doubt it's impossible; undoubtedly it adds a lot of complexity though. Having >35GB per node does enforce a kind of… dare I say it… proof-of-stake minimum investment in the network. It's a fair chunk of resources, but not excessive for most people. For a whale with 20m nodes it would be quite costly.
Yes, I agree that it would be good to assure real capacity, but I’m not sure if it can be done in a way it could not be faked.
As I understood it, there's an "active node sampling service" that queries quotes for random data pieces every now and then across the day (also getting the info about the wallet to pay to), and when it hits e.g. 3x your wallet and 1x my wallet, you get 3x my rewards when the rewards get distributed.
…but as this is just something I think I heard somewhere I have no link or source I could point to …
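If that rumoured sampling scheme is roughly right, the payout maths would reduce to a proportional split of the reward pool by hit count per wallet. A sketch (hypothetical names, Python for illustration):

```python
from collections import Counter


def split_rewards(sample_hits, pool):
    """Split a reward pool proportionally to sampling hits per wallet.

    `sample_hits` is one wallet address per time the sampling service
    happened to hit a node paying out to that wallet.
    """
    counts = Counter(sample_hits)
    total = sum(counts.values())
    return {wallet: pool * n / total for wallet, n in counts.items()}


# 3 hits for wallet A, 1 for wallet B: A gets 3x B's share.
shares = split_rewards(["A", "A", "A", "B"], pool=100.0)  # A: 75.0, B: 25.0
```

This would also match the earlier observation that rewards come out strictly proportional to network share, without any IP-address dependency.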
AIUI that was (is) a nice-to-have feature that would go in as and when time permits.
Fairly certain it's not in the code ATM, but as ever I could be wrong.
and I'm pretty positive someone capable of running 100s of thousands of nodes is not able to modify the source code a bit and run a modified node binary
Then it should be (I assume it has been) evaluated if it is worth it…
btw, just a quick peek, but would the tester also need the full data stored? I.e. does the tester need to store 35GB to test whether a node has 35GB of space? Well, maybe it's off topic for this thread and maybe it has already been addressed…
As I understood it, the tester can derive the proof deterministically (compute it). The node cannot, without the user key used to create the hash of the test data. But how long would it take the tester to compute it? Probably having all the stored data would be quicker.