I think this is only true for a constant ‘store rate’ combined with a data query rate proportional to stored chunk count.
Not sure this is a fair comparison…
Imho we’re talking about 100 chunks delivered for 1 payment vs 200 chunks for 2 payments
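To make that comparison concrete, here is a toy model of the proportional case. The proportionality constant below is purely an assumption for illustration, not a protocol value:

```python
# Toy model (hypothetical numbers, not protocol constants): if the query
# rate a node serves is proportional to the number of chunks it stores,
# then deliveries-per-payment stays constant as the node grows, which is
# the "100 chunks for 1 payment vs 200 chunks for 2 payments" point above.

QUERIES_PER_CHUNK_PER_EPOCH = 100  # assumed proportionality constant

def deliveries_per_payment(stored_chunks: int, payments: int) -> float:
    """Chunks delivered per payment under the proportional-query model."""
    return stored_chunks * QUERIES_PER_CHUNK_PER_EPOCH / payments

print(deliveries_per_payment(stored_chunks=1, payments=1))  # 100.0
print(deliveries_per_payment(stored_chunks=2, payments=2))  # 100.0
```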
That’s not too bad an idea. That wouldn’t need lots of data to be transferred because it could be written locally. It doesn’t do much for using the unused capacity to increase network resilience though.
If nodes are 35 GB and the network keeps them approx. 50% full, then nodes have to get ~17.5 GB of data and start supplying it straight away. After a while they will store a chunk and get paid. Then they supply again for a while, then get paid, and so on. This wheel keeps turning.
So the point I am making is that nodes are not able to just jump on, store stuff, get paid and leave. They are in fact required to work, and the payments should be infrequent enough to ensure payment is in arrears for good work done. I hope that makes more sense and helps frame the issue?
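Here is a rough simulation of that wheel. All the numbers below (chunk size, store probability, tick granularity) are made-up assumptions for illustration, not protocol values:

```python
import random

# Toy simulation of the supply-then-pay cycle described above: a joining
# node first replicates existing data (~17.5 GB at 50% of a 35 GB node),
# then spends most of its time serving queries, with only occasional paid
# stores, so payment is always in arrears of work already done.

NODE_CAPACITY_GB = 35
INITIAL_FILL = 0.5           # network keeps nodes roughly 50% full
CHUNK_MB = 1                 # assumed chunk size
STORE_PROBABILITY = 0.001    # assumed chance per tick of winning a paid store

def run(ticks: int = 10_000) -> None:
    stored_gb = NODE_CAPACITY_GB * INITIAL_FILL  # data fetched on joining
    payments = 0
    queries_served = 0
    for _ in range(ticks):
        queries_served += 1                       # supplying data every tick
        if random.random() < STORE_PROBABILITY:   # rare paid store event
            stored_gb += CHUNK_MB / 1024
            payments += 1
    print(f"served {queries_served} queries for {payments} payments, "
          f"{stored_gb:.2f} GB stored")

run()
```

With these numbers the node serves thousands of queries per payment, which is the point: the work comes first and the payment trails it.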
This sounds like an elegant solution. It’s a bit of a change of approach though. Is there time to assess, program, test and implement it before launch? Or could it be done as an upgrade later?
It’s how it currently works. These tests have no apps running and no real clients, but normally they would be there. If nodes do not deliver data they are shunned. So right now this process is in place.
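For anyone curious what that shunning amounts to, here is a minimal sketch assuming a simple per-peer failure-count threshold. The threshold and the bookkeeping are hypothetical; the real node implementation is more involved:

```python
from collections import defaultdict

SHUN_THRESHOLD = 3  # assumed number of failed deliveries before shunning

failed_deliveries: dict[str, int] = defaultdict(int)
shunned: set[str] = set()

def record_delivery(peer_id: str, delivered: bool) -> None:
    """Track delivery outcomes; shun peers that repeatedly fail to serve data."""
    if delivered:
        failed_deliveries[peer_id] = 0  # hypothetical: reset count on success
        return
    failed_deliveries[peer_id] += 1
    if failed_deliveries[peer_id] >= SHUN_THRESHOLD:
        shunned.add(peer_id)  # stop routing to, and paying, this peer

# Example: a peer that fails three deliveries in a row gets shunned.
for _ in range(3):
    record_delivery("peer-a", delivered=False)
print("peer-a" in shunned)  # True
```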