When your node is full, so is the network. That means supply/demand has failed, and likely the whole network has too.
Node Age is gone, so at this point deleting and starting again is superficially appealing, but…
…starting again doesn't help, because you don't just fill up with new chunks. You have to store all the chunks you are responsible for that have already been paid for, and you won't get payment for those, only for new chunks. So you've just wasted bandwidth and time refilling to the same level with chunks that don't earn.
A-ha, they are going to be balanced… if that's the case, I have to think again. I thought nodes could get variable amounts of chunks, but I guess the XOR thing and churning etc. mean they are going to be roughly the same?
Anyway, we clearly need a better picture of how this all is going to work.
The ant colony analogy is in fact the original paradigm. For whatever reason the project strayed from that and has now returned to something much more like David's original vision.
People are trying to answer you, so try not to take it personally or let your frustration derail a helpful discussion. @joshuef is juggling a lot right now, as is everyone on the team who knows all the details, so it may be necessary to wait or figure it out with members of the community.
David has always looked to natural systems for inspiration and ants in particular, whereas humans are used to working differently. Especially after all the focus on blockchain and smart contracts, it can seem a bit strange and difficult to see how an ant colony and other natural phenomena are helpful here.
But I agree they are, and while I too was carried away with the ideas of full consensus and so on, I'm so glad we're back to very simple nodes and have shed all the complexity that made it hard to see how the original vision could ever be realised.
By which I mean nodes being able to run on almost any device making the network as inclusive as possible.
By designing away the need for binding quotes and consensus, the team have made the network far more efficient and robust to attacks, and restored the idea that anyone with a mobile phone will be able to earn SNT, pay for storage, and participate.
That's fantastic IMO!
Thanks David. I don't know enough about the system to understand why that is true, but I take your word for it. That property provides a HUGE benefit.
Also, thank you for the civilized response. Your respect for others is not as common as it should be.
I will try to keep the ant analogy in mind and minimize accountability requirements (in my mind) except for cases requiring actual performance. The term "price check" would be much better than "quote."
The issue that remains is that a client may experience a save (or partial save) failure based on something that is unpredictable and non-deterministic. I don't know many people who like that kind of behavior when it comes to trying to store data, especially when having to pay for it. If it is a rare event, then it will at least be similar to a Microsoft moment, when Windows fails to do something it should have done for an unknown reason; though in this case the reason will at least be known.
That's coming for sure, but keep in mind that this new simpler paradigm was started earlier this year and they are still developing and optimizing it, so give them some time. All will be spelled out in due course.
Sure, first things first!
Definitely need this. Bring on the next version of the primer!
I guess that rather than adding more capacity to a node, you decide to run more nodes (as we know from testnets, many nodes per machine is possible).
Looking forward to getting a better understanding of this in time, and how it will work in terms of target node sizes / address ranges changing with churn / how this all balances over the network etc.
That makes sense. No benefit, and only a wasted opportunity to earn more in the meantime.
I knew that was coming!
I think the costs for chunks will vary slightly, but they should still be so small as not to matter. That sounds weird when the cost could be double from one chunk to another, but, and here is the bigger picture I hope, it's a level playing field. Say you have a file of 100 or 1000 chunks: some will cost 1 nano, some 2, and maybe some 4, but it will amortise across the network. So on some Tuesday in March 2024 your average per chunk was 3 nanos, for instance. It's likely that was the same for everyone.
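As a toy illustration of that amortisation (all numbers and the price distribution here are made up, not taken from the actual network), variable per-chunk prices still settle into a stable per-file average:

```python
# Hypothetical sketch: per-chunk prices vary (1, 2, or 4 "nanos"),
# but averaged over a whole file the cost smooths out, and every
# uploader paying the same going rates sees a similar average.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# A file of 1000 chunks, each priced independently (assumed prices).
chunk_prices = [random.choice([1, 2, 4]) for _ in range(1000)]

total = sum(chunk_prices)
average = total / len(chunk_prices)

print(f"total: {total} nanos, average per chunk: {average:.2f} nanos")
```

The point of the sketch is only that the variance per chunk (up to 4x here) barely shows up in the per-file average, which is what the uploader actually pays.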
Where time and signed things fail, and what @joshuef was saying, is that there is nobody to report to here, so a cast-iron guarantee really means little. Now there is an argument for a signed outlier quote that you could give to all nodes close to that possible thief, so they can also ignore him, but that needs some kind of time-based value to show it was at time X when he made that quote, and then the other nodes would need to check what their own quote was at time X.
So we introduce several issues here:
- Nodes need to agree on time (practically impossible in decentralised networks, though some try, e.g. https://www.youtube.com/watch?v=BRvj8PykSc4) [note: agreeing on time is not the same as using a duration locally]
- Nodes need to record what they did at each time slot
- Nodes need to sign each "quote"
From this seemingly obvious list, when you dive into each one it's a deep, deep chasm.
BTW the inability to agree on exact time in decentralised networks is a brilliant example of consensus not being the answer; otherwise we could "consensus time", and we cannot even do consensus on something as simple as a ticking timer.
Definitely it will. It's a much, much simpler paradigm, natural and simple, much like the ant colony. However, I have been unable to explain that well, and that's more than a decade I have tried and failed. I have been overwhelmed by the "it's not exact engineering, it's not provable" crowd and caved for so many years to those who knew better (and were extremely vigorous in fighting this natural paradigm).
So it may be we get it running and show it works, even though we cannot prove why. Then document the simple rules in play to make that network work.
Much as scientists still cannot fully understand how an ant colony or weather gradients work, we can understand what the parts of the system do, yet chaos theory perhaps means we can never understand the whole. It's much like a neural network: simple to code, but almost impossible to understand why the connections it makes perform some task, even though we can see that they do.
Don't know how I missed that! I guess I may not have got involved early enough. It has been years now, and I can only keep so much in my head when I am not directly involved.
I very much agree with your statements. I was very concerned by the complexity that was rapidly piling up. Simplicity provides fewer opportunities to screw up, which seems to be inevitable for a human being, even more so for a group of us.
Thank you also for a respectful response.
One thing I still do not like is the inability to provide storage to get storage (i.e. storage credits). I, maybe incorrectly, seem to recall that this was part of the original paradigm also. This introduces problems of its own (there would need to be a payment and also a credit system), but it provides a way to avoid this whole payment scheme for storage (a possible barrier to entry), provides serious control over storage opportunists, and seems more consistent with the distributed approach. There already is a strategy for performance accountability for storage: bad actors are ignored. Why should it matter if they are farmers for coin or just for their own needs (barter in kind)?
You're welcome.
The storage for storage approach sounds great indeed, but it isn't simple, and that's why we are where we are. It also makes it less inclusive by reducing the ways to acquire the ability to use the network, so a token has benefits.
The worst thing about having a token now is that it turns a lot of people off because of the bad behaviour and bad press surrounding cryptocurrencies, but I think it is necessary and useful.
And @JPL, we're all smiling in your direction.
Is that launch day ???
That was design 2
Design 1 - was a currency and back in the day (2006) I did patents for all parts of the system (UK, we needed some way of validating designs for investments, so ignore patent madness). The patent office did not allow currency designs. (BTW We have patented almost everything and allowed the patents to lapse, so that means nobody can now patent this and nobody is prevented from copying or using this tech)
Design 2 - 2007 - switched to storage credits: a certificate for storing. You could use these certificates as proof you should get some storage given to you. It does give us problems in proving etc., back to time and so on, unless …
Design 3 - 2013 - back to currency; bitcoin proved cryptocurrency works. We can reintroduce a currency, and now we can get storage for storage, i.e. you get a token for storing data, but it's not linked to any particular piece of data; it's able to be disconnected from a particular event. This is possible due to double-spend prevention (i.e. making a token a single-use thing).
So we actually do have storage for storage, and with a currency it's actually simpler.
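The "single-use token" idea from Design 3 can be sketched in a few lines. This is a deliberately naive, centralised toy (a real network would use a distributed mechanism); it only shows the core property double-spend prevention gives you, namely that a token can be spent exactly once and is not tied to any particular piece of data:

```python
# Toy ledger: a token is identified by an id and can be spent once.
# A second spend of the same token is rejected. All names here are
# hypothetical; this is not the network's actual design.
class Ledger:
    def __init__(self) -> None:
        self.spent: set[str] = set()  # ids of tokens already used

    def spend(self, token_id: str) -> bool:
        """Spend a token; returns False on a double-spend attempt."""
        if token_id in self.spent:
            return False  # already used: double-spend rejected
        self.spent.add(token_id)
        return True

ledger = Ledger()
print(ledger.spend("token-42"))  # True: first use succeeds
print(ledger.spend("token-42"))  # False: second use is rejected
```

Because the token itself carries no link to the data that earned it, "storage for storage" falls out naturally: earn a token by storing, spend it to store.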
Just to clarify, what I stated was not to the exclusion of using tokens; it was an alternative: two choices, payment tokens and credit tokens (which farmers for profit would not be happy about, since they are not convertible to payment tokens, another issue). The additional complexity remains an architectural concern. Understood.
Been debating this sort of thing for twenty years now -- so I get it!
It's the same problem and debate that free-market "Austrian school" econ people have been having with the "Keynesian school" econ people. The former want to let natural forces work, and the latter want to manage and control more and more of all markets.
Free the ants - free the nodes - free the people!
A proof of any system requires full understanding of all inputs and outputs. When we rely on ants/markets/people to take core authority in a system then the inputs are decentralized, chaotic, and a mystery.
What you are ultimately building is a free market trading platform - for data storage.
It's really not. It's highlighting the consequences of your suggestion, and the considerations around it. Why choose anything? Magic numbers don't help us. So now we have another decision to make, something else to align with various facets of the system.
It is if they do, sure. But it's no longer the going rate, is it? So we're adding a lot of complexity (as outlined by my other points and questions).
If we agreed it was important, then it would be important to me. But it's unnecessary overhead, isn't it?
Something else to deal with when we actually probably don't need to bother? (Plus it raises all the other questions I outlined.)
I don't reply if I haven't been reading the posts, so sorry, you're off there.
It's one problem. But there are more in the list of questions and concerns I outlined, which you deem childish but which are actual design considerations for what you're suggesting.
I'm not sure anyone was hung… I outlined some issues I saw with your proposal. It's really easy to suggest XYZ as solutions. It's not always easy to see the knock-on effects from those ideas, though. So I hoped to provide some insight.
Sorry, I missed some of this earlier.
It will tend towards this at the very least (if not outright be this). There are still some untested ideas here!
But basically, nodes should always be responsible for some data. They may get that replicated to them, without a reward.
When you pay, if you do not pay everyone, say you pay 5 of 8, there's a decent chance that your data will be replicated to the remainder without paying them. This keeps the price down.
If you only pay 1, then your data may exist, but it may not persist if that one node were to go offline before a churn event, for example.
So in that sense we could have this flexibility. It could well be that it's not needed. It may be we have to pay ALL nodes. We'll be testing this out and trying to assess the reliability and desirability of this.
Atm, there's no concept of GBs in the nodes, just a record count. This makes all nodes equal. If you want to use more space, start more nodes. It's another dial we can play with to allow folks to increase the space available to them (and the knock-on with price).
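In other words, capacity scales with node count rather than node size. A minimal sketch of that arithmetic (the record cap and record size below are made-up placeholder values, not the real network constants):

```python
# "More nodes, not bigger nodes": every node holds at most the same
# fixed number of records, so a machine's offered capacity is just
# node_count * per-node cap. Both constants are hypothetical.
MAX_RECORDS = 2048      # per-node record cap (assumed)
RECORD_SIZE_MB = 0.5    # max record size in MB (assumed)

def capacity_mb(node_count: int) -> float:
    """Total storage a machine offers by running `node_count` nodes."""
    return node_count * MAX_RECORDS * RECORD_SIZE_MB

print(capacity_mb(1))   # capacity of a single node, in MB
print(capacity_mb(20))  # scale up by running 20 nodes instead
```

The design upside is that every node looks identical to the network; the "dial" for operators is simply how many nodes they run.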
Yeh, this may be where we end up. I've wondered since we moved to libp2p if we really needed or wanted a separate client. If things proceed well, it may be that folk run a node and use that to put data. Startup/delays in contacting nodes etc. would be much less. It's another improvement we could look at, yep!
(And the added churn could well be beneficial in a lot of areas too!)
My comments were about an alternative; a possible quote based process, instead of the current process. Why would the going rate matter when a binding quote is offered in the presented alternative? That statement only applies in the current process, real time pricing, which definitely has its benefits. I was not questioning the benefits of the current process, only some consequences I believed to be potentially detrimental.
The number is not a magic number, it is offered by the node with the quote and is their choice, nothing is imposed on them. It is not a consequence of my suggestion.
A statement of one problem/issue is not, by definition, an exclusion of all other problems/issues. Many other problems may exist, and some expected problems may not be real. Signature does appear to be a significant issue (CPU time), and though there may be others, signature alone appears to be enough to exclude it.
So, you read my posts but find problems with what I did not state. You did more than address statements I made. I found some of your statements to be insulting and confrontational; effectively putting words in my mouth.
Very true. Guilty as stated. I did believe that this is one of the purposes of this blog. Good suggestions can come from anyone (though not everyone).
I guess we are just failing to communicate here. David and HappyBeing provided more direct and much more informative critiques, IMO. They addressed my concerns.