ANT Token - Price & Trading topic

If someone starts uploading 700 petabytes, I'm quite sure the capacity will be there before the upload ends…

4 Likes

Yeah, I'm not sure either, but I doubt it's impossible; undoubtedly it adds a lot of complexity though. Having >35 GB per node does enforce a kind of… dare I say it… proof-of-stake minimum investment in the network. It's a fair chunk of resources, but not excessive for most people. For a whale with 20m nodes it would be quite costly.

1 Like

Yes, I agree that it would be good to assure real capacity, but I'm not sure it can be done in a way that can't be faked.

1 Like

As I understood it, there's an "active node sampling service" that queries quotes for random data pieces every now and then throughout the day (also getting the info about the wallet to pay). If it hits e.g. your wallet 3x and my wallet 1x, you get 3x my rewards when the rewards get distributed.

…but as this is just something I think I heard somewhere, I have no link or source I could point to…
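
To make that concrete, here's a rough sketch of how such a sampling-based payout could work. Everything in it (the names, the simple proportional split) is my own guess, not anything from the actual code:

```rust
use std::collections::HashMap;

/// Hypothetical sketch only: split an emissions pool in proportion to how
/// often each reward wallet was hit by random quote sampling.
fn distribute_emissions(sample_hits: &HashMap<String, u64>, pool: u64) -> HashMap<String, u64> {
    let total: u64 = sample_hits.values().sum();
    sample_hits
        .iter()
        .map(|(wallet, hits)| (wallet.clone(), pool * *hits / total.max(1)))
        .collect()
}

fn main() {
    let mut hits = HashMap::new();
    hits.insert("your_wallet".to_string(), 3); // hit 3 times by the sampler
    hits.insert("my_wallet".to_string(), 1);   // hit once
    // With a pool of 1000 tokens, "your_wallet" receives 3x what "my_wallet" gets.
    println!("{:?}", distribute_emissions(&hits, 1000));
}
```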

1 Like

AIUI that was (is) a nice-to-have feature that would go in as and when time permits.
Fairly certain it's not in the code ATM, but as ever I could be wrong.

1 Like

And I'm pretty positive someone capable of running hundreds of thousands of nodes is not able to modify the source code a bit and use a modified binary version of the node :slight_smile:

3 Likes

That would be a really nice improvement, I think. At least then emissions would be useful in encouraging IP address diversity: rewards would be based not just on network proportion, but also on IP address diversity. Is that an option @dirvine?
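
Just to make the idea concrete, here's a toy example of one possible weighting (entirely my own assumption, not something the team has proposed): give each node a weight of 1/n, where n is the number of nodes behind its public IP, so stacking nodes on a single IP doesn't increase that IP's share of emissions.

```rust
use std::collections::HashMap;

/// Hypothetical illustration only: weight each node's emission share by 1/n,
/// where n is the number of nodes sharing its public IP, so one residential
/// node and a 20,000-node stack behind a single IP both contribute a total
/// weight of 1.
fn node_weight(nodes_on_same_ip: u64) -> f64 {
    1.0 / nodes_on_same_ip.max(1) as f64
}

fn main() {
    // Example addresses are documentation ranges, not real operators.
    let nodes_per_ip: HashMap<&str, u64> =
        HashMap::from([("203.0.113.7", 20_000), ("198.51.100.2", 1)]);
    for (ip, count) in &nodes_per_ip {
        let total_ip_weight = *count as f64 * node_weight(*count);
        println!("{ip}: {count} nodes, total IP weight {total_ip_weight}");
    }
}
```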

That also matches my observation with data from about a dozen locations, though I could be incorrect.

I think it’s possible. Proof Of Storage prototype

1 Like

:thinking: Then it should be evaluated (I assume it has been) whether it is worth it…

btw, just a quick peek, but does the tester also need to have the full data stored, i.e. does the tester need to store 35 GB to test whether a node has 35 GB of space? Well, maybe that's off topic for this thread and maybe it has already been addressed…

1 Like

As I understood it, the tester can get the proof deterministically (compute it). The node cannot without the user key that was used to create the hash of the test data. But how long would it take the tester to compute it? Probably having all the stored data would be quicker.
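
A rough sketch of how I read that (all the names and details below are my own assumption, not the actual Proof Of Storage prototype): the tester derives each test chunk from a secret key plus an index, so it can recompute any challenged chunk on the fly, while the node, which never sees the key, has no option but to actually store what it was given.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive the i-th test chunk from a secret key. The tester can recompute any
/// chunk on demand; a node without the key cannot, so it has to keep the data.
/// (DefaultHasher stands in for a proper cryptographic hash in this sketch.)
fn derive_chunk(secret_key: &str, index: u64) -> u64 {
    let mut h = DefaultHasher::new();
    secret_key.hash(&mut h);
    index.hash(&mut h);
    h.finish()
}

fn main() {
    let secret_key = "tester-only-key"; // never shared with the node

    // Setup: the tester streams derived chunks to the node, which stores them.
    let node_storage: Vec<u64> = (0..1_000).map(|i| derive_chunk(secret_key, i)).collect();

    // Challenge: pick an index and compare the node's answer with a freshly
    // recomputed value. The tester never keeps the full data set around.
    let challenge = 742;
    let expected = derive_chunk(secret_key, challenge);
    assert_eq!(node_storage[challenge as usize], expected);
    println!("challenge {challenge} passed");
}
```

In a sketch like this the tester only regenerates the one chunk it challenges, so it doesn't need anywhere near the node's 35 GB; whether the real prototype works that way I honestly can't say.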

Maybe we should first make the network behave well and work as it's supposed to (fast uploads/downloads) before we start cracking down on those who 'don't play by the rules' and do things we consider unfair.

… priorities …

1 Like

Can someone rename this thread to whatever and open a new price & trading topic? That would be great.

6 Likes

This forum seems to be going over the same debates again and again, with various levels of panic.

Folks are responding to the incentive to run more nodes due to emissions. Once more data is on the network, it won’t be possible to run so many nodes.

Is there really much more to say, or are we going to keep iterating on this loop every few days? :sweat_smile:

5 Likes

Are you sure about that? I would claim it changes nothing - just different people/kinds of servers running whale-level node counts…

2 Likes

The problem with that is it would place a very heavy bandwidth load on people running nodes, rather than using the proof mechanism.

Definitely this one. :laughing:

I think uploads will help, but oversubscription (underprovisioning) will remain a gaming vector until proof mechanisms eliminate it.

Maybe we rename this one to “Post-launch general” and create a new topic for price and trading? @Dimitar?

1 Like

I don't see how a proof mechanism could ever work. You could have a bunch of chroot'd environments, unaware of each other, sharing the same underlying 35 GB+ of storage, or you could containerize the application, serve the same disk as persistent storage to every container, and they would never know about each other.

1 Like

Have you read Proof Of Storage prototype?

1 Like

Excellent post, cheers

1 Like

I could see that working, but it isn't a short-term fix for our current problem, as I doubt the team has the time or wants to focus on implementing the proof-of-storage and network tests. Long term, I'm not sure it's needed at all: if the network is getting sufficient usage, regular network behavior will expose cheating nodes anyway.

Still off topic, but to me it looks like the tester needs resources equal to the node being tested (either storage or computing capacity). Also, who would be the tester? Another node?

What network size do you estimate we'll end up with when enforcing 32 GB of storage per node…?

You see the trajectory?..

10 million? 5 million? 1 million?

What would be the hoped for benefit?

1 Like