Proposal for network growth based on node size rather than node fullness

Not sure how you are defining “resources we require” here, but having more nodes for a given set of data seems like it could be beneficial both in terms of security/redundancy of the data and in the performance of delivering data on gets. Very much like having more seeds for a torrent is my thinking. Is that an oversimplification?

3 Likes

I wondered if the thought there was that more nodes each storing less might be less profitable/less reliable. It does sound like David is saying that more nodes coming means more consensus/load on Elders, especially if they join too quickly, but also suggesting that maybe that isn’t entirely necessary?

In terms of opportunistic caching or concurrent delivery of chunks I think that makes sense too. Just has to be considered in finding the balance I suppose.

4 Likes

It might be. However, the difference between torrents and us is that torrents make no effort to relocate seeds, and we do. So we are more like always-available seeds, if that makes sense. Once we have enough copies that the seeds are considered always available, we cannot really add more to make them “more always” available, so extra copies can be overkill.

Then you go down the path of: what if there is a massive issue and we lose most of the Internet (or a huge chunk of it, so ignoring network splits and recombinations)? Would more seeds not have been better? The answer is yes.

So, as @Nigel says, a balance. A route that I feel is worth considering is long-term, very small maintenance nodes. These are archive nodes that don’t keep the most up-to-date info per se, but do hold massive amounts of data, just in case.

I feel the correct balance will be this route: have the network solid, with seeds available with a probability close to 1. For the extreme cases, we then have archive nodes.

A huge issue we have solved here is data republish and authority (all data has network authority, so is provably valid). This allows data to be presented from any source and checked as valid. This means “trust” is a pure crypto signature that stays with the data and can be checked offline.
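As a rough sketch of what that means in practice (illustrative only, these are not the actual sn_node types; I’m assuming plain BLS via the `threshold_crypto` crate, where the real network would use a threshold signature produced by the Elders):

```rust
// Illustrative sketch: data carrying network authority as a BLS signature
// that anyone can check offline. Not the actual sn_node types.
use threshold_crypto::{PublicKey, SecretKey, Signature};

/// A chunk bundled with the section's signature over its contents.
struct SignedChunk {
    data: Vec<u8>,
    section_sig: Signature,
}

/// Anyone holding the section's public key can validate the chunk
/// offline, no matter where (or from whom) the bytes arrived.
fn is_valid(chunk: &SignedChunk, section_pk: &PublicKey) -> bool {
    section_pk.verify(&chunk.section_sig, &chunk.data)
}

fn main() {
    // A single key stands in here for the section's threshold key.
    let section_sk = SecretKey::random();
    let chunk = SignedChunk {
        data: b"some immutable chunk".to_vec(),
        section_sig: section_sk.sign(b"some immutable chunk"),
    };
    assert!(is_valid(&chunk, &section_sk.public_key()));
}
```

The point being that the signature travels with the data, so validity doesn’t depend on who served it.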

The 100% certain answer is not really there yet, but I feel we can get there with a probability imperceptibly close to 1.

8 Likes

The feeling I have is that copies of the Safe Network will play this role of archive nodes, not in the literal sense, but in the sense that the valuable information will probably be available in more than one Safe Network. In the long run, the Safe networks with the best economies and communities of people will survive, and the weaker networks will die off…

1 Like

Without reading or thinking!.. I wonder about the extreme: a network that does not necessarily die, but allows bits to disconnect and reconnect, and is more robust because of that resilience and persistence. I don’t know how far Safe Network will be from that, or what is even possible or really wanted in that regard, as currently the notion is a network that is necessarily up, with nodes that are up to date and responsive. This is just my contrary instinct for testing whether the devil’s advocate “is the opposite also true?” test adds anything here.

2 Likes

This is our logical conclusion.

2 Likes

Imagine a network that made people rich: economic bliss. Imagine that one surviving while the network with humanity’s data died.

I think that’s possible if we focus purely on “when lambo”, and my fear is that targeting and depending on the crypto community would do exactly this!

Wider adoption is a better place for Safe to be: Safe, and not a money-printing story.

I say this while noting that the Internet Archive is currently so, so important and relies on charity and grants. Humanity has not seen fit to fund it properly, so it’s an example of how much we value data. That said, it is an archive, not current data; what Safe should cover is a full archive of the most up-to-date, dynamic information.

In any case, an early-days focus on the pure economy does come at a cost and a risk that cannot be ignored.

17 Likes

The main problem I see is that ordinary people who are not into crypto do not consider tokens to be real.

They ask questions like:

  • Where do the tokens come from?
  • The network creates them according to the set consensus rules.
  • So they are printed out of nowhere? I.e. you want me to sell you the resources of my computer for tokens out of nowhere? What kind of fraud is this? Why don’t you use dollars? If you are not using real money, then this is a scam.

I may be wrong, but my whole experience shows that people who are not into crypto will not want to deal with wallets and tokens, no matter how easy it is. The man on the street has been hearing for 11 years about how crypto is used for bad things.

The fact that the Safe Network will store bad things will not help us either. So my opinion is that our best chance is to bet on the crypto people, and, if the network does not grow fast enough, to divert some of the newly printed tokens to uploading data to the network so that it can grow quickly.

My main concern is that a copy will appear while our network is small and will seize the free computing resources because of its better economy. I hope that all those who claim that the Safe apps and data will protect us from this fate are right. Because I think we will grow slowly, much, much slower than everyone expects, as we can see happening with Storj, Sia and FileCoin. I just don’t think we’ll have time to accumulate data on the network in a natural way before we are attacked.

And they will attack. They will invent various liquidity programs that in practice lock the new tokens by removing them from the market and increasing their value. Here is a very fresh example of such an attack from an L2 solution against UniSwap:

I repeat my point from the marketing thread. It matters little who is right in predicting which groups will adopt and when. I think it will be more important that we measure what we’re doing and how people respond (developers/farmers/users, innovators vs the general public, geographically, etc.) and adjust what we’re doing to see what works best with whom.

So rather than arguing something abstract we could be putting together the framework of what we want to measure and how we can do that.

6 Likes

Do you think that, if we measure the growth of the network and it turns out to be slow, there is a chance we will adopt some of the tactics the vampires will use against us: a DeFi pool for new tokens, to lock some tokens away and increase the value of the rest on the market, and artificially uploading data to attract new farmers to the network and grow faster than the competition?

I don’t spend time guessing what will happen for the reasons given. I think it’s too complex a thing and amounts to guesswork, so I’ve no idea.

I don’t worry too much about the rate of growth because if it is slow the network is less attractive as a target, and if it is fast it will be harder to attack. While we want it to grow fast, I think we tend to over-worry about things like this. There’s a point where thinking and talking are not adding anything and are wasting effort that could be used constructively. The better the solution for the widest range of people, the better its chance of growing. I’m more a problem solver than anything, so I look for a problem that interests me and have a go, rather than try to imagine how things will play out.

2 Likes

Okay, let me put it this way.

There is a virus and the virus kills people. You can put on a mask, you can get vaccinated. You may not care and rely on the immune system.

There are vampires who attack crypto projects all the time. You can act preventively because you know how they attack their victims, or you can rely on your immune system. You tell me that our immune system is good enough, which is OK; everyone has an opinion. I want a vaccine and a mask. :dragon:

It may be like this for you but it isn’t for me. My understanding of science, anatomy, biology, medicine, human behaviour etc. is based on hundreds of years of research and study, and, built upon that, my lifetime of education, study and life experience.

Everything in crypto is fluff from where I stand, full of randomness. To me it’s more like walking into a casino thinking you have figured out how it works, without ever having worked for a casino, built gaming machines, or studied probability theory, game theory, human behaviour etc., and believing you have worked out how to beat the system.

I prefer to build things I understand, then test, break and fix. And trust that others are building bits they understand, and that together we’re building something new that will stand on its merits, not because of some special knowledge.

I’ve also watched people pitch the idea that they have a system, or inside knowledge, and use that to extract money from people they’ve never met. There are so many of these in finance, not just crypto. The first time I saw this, though, was on a racecourse: a strikingly tall Irishman standing in a field, putting down a hat and beginning to gather a crowd around him.

Even as a teenager (having bunked off school) I was amazed to see people walking up to this guy, putting a fiver in his pocket and walking away with a piece of paper with a racing tip written on it. And a fiver was a lot then! I think crypto is this in spades, with hundreds of casinos full of games you’ve never seen before. So I don’t even think about going in, or waste time trying to figure any of it out.

9 Likes

It is very interesting to me how everyone accepts that the network will be attacked from within, and we are working hard to protect it against all possible scenarios involving bad nodes. At the same time, the idea that the network will be attacked from the outside is a kind of taboo, and it is inadmissible to talk about how we can protect ourselves from this. Very interesting.

Do you see that in my response? I thought that I answered your question by explaining why I don’t have an answer to it, and don’t try to answer such questions.

I do suggest that what I see as speculative effort could be better deployed, but I’ve said that’s my perspective, and I’m not trying to, nor able to, stop you. You aren’t persuading me to put time into that myself, but I’m glad you are thinking about this and arguing the case.

3 Likes

No worries, I just shared an observation; it was not a response to your opinion, although I clicked the reply button, so I understand why you might think so :lol:

1 Like

I define the community as: people who want to find fun and profit without any responsibility.

The project should focus on committers and entities doing business in the ecosystem.

The big difference between these two groups is that the latter group makes a real contribution and does not leave the ecosystem easily.

The community is a consumer. They will come and go easily based on the content and spot profit. The core team doesn’t have to work hard to grab these people.

If MaidSafe provides an “autonomous and anonymous decentralized network” to the world, numerous business parties will build on this protocol. And they will grab and bring ordinary people from the traditional internet to the SAFE Network. That is their business.

So, I am really afraid that these days the “community” requests the MaidSafe team to do marketing or listings to enlarge the “community”. It is good to listen to these demands and accept them, but honestly, I hope that time is not wasted on unnecessary things.

P.S. Having watched the blockchain field for more than 6 years, I’ve seen the “community” cause accidents later on. They misunderstand themselves to be some kind of core team members. It is a real shame.

2 Likes

This would make a great early partner once the technology is proven. I’d donate to see the Internet Archive backed by Safe Network.

I think you are correct. We need to identify key data that would make Safe Network the first place to look for data. Maybe we could create a community-funded SNT pool for focused acquisition of key data to keep Safe Network relevant. There will be a point where the momentum of data being stored on the network will draw in unsubsidized data uploads.
As soon as we know the concept is valid, we will, as a community, need to build the public image of Safe Network ahead of its arrival and official beta. If Fleming is, say, 6 months away, I’d argue we don’t have much time to build a public image of the network. If we don’t, we risk others with much better funding creating a game-theorized version of the rewards system and leading the acquisition of data. Why wouldn’t a well-funded operation try to take the code and create a gamified version of Safe Network?

If we had the technology to save the Internet Archive, we’d need to start on that now if we wanted to see it coincide with the launch of the network. How cool would it be to see the Internet Archive backed by Safe Network? Both are noble projects that the world needs.

5 Likes

Allowing full nodes seems to imply we will allow nodes of varying size to join the network.

Is there any rule about the smallest allowable size?


I played around with the tiniest possible node size today to see what happens.

First experiment: the first node had the default max-capacity (2 GiB), but the next eight nodes all had max-capacity set to 1 (i.e. one byte).

The network started up fine. Auth started, an account was created, login worked, and uploading a 5K file returned an xorurl. But `safe cat` for that URL failed with `Error: NetDataError: Failed to GET Public Blob: ErrorMessage(NoSuchData)`.

Second experiment: all nodes (including the first) had a max-capacity of 1 byte.

An account could be created (response: `Safe was created successfully!`), but login failed with `Error: AuthdError: ClientError: Requested data not found`.

It seems to me that it’s technically possible for nodes to join the network with max-capacity set to 1. Whether this has any advantage, I’m not sure yet, but since it could happen in real life, maybe we should try to decide what effect this behaviour might have on the network.
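My rough mental model of what’s happening (a hypothetical Rust sketch, not the actual sn_node code; all names are invented) is that joining ignores capacity, but every store fails the capacity check, so the chunk copies silently never land:

```rust
// Hypothetical sketch of a per-node capacity check, to illustrate why a
// 1-byte node can join but never hold a chunk. Not the actual sn_node code.
struct ChunkStore {
    used: u64,
    max_capacity: u64, // e.g. 1 byte in the experiments above
}

enum StoreError {
    NotEnoughSpace,
}

impl ChunkStore {
    fn try_store(&mut self, chunk: &[u8]) -> Result<(), StoreError> {
        // With max_capacity = 1, any real chunk fails this check, so the
        // node takes on membership duties but drops every PUT it is given.
        if self.used + chunk.len() as u64 > self.max_capacity {
            return Err(StoreError::NotEnoughSpace);
        }
        self.used += chunk.len() as u64;
        Ok(())
    }
}

fn main() {
    let mut store = ChunkStore { used: 0, max_capacity: 1 };
    // The 5K upload returns an xorurl from the client's point of view,
    // but the copies never land, hence NoSuchData on the later GET.
    assert!(store.try_store(&[0u8; 5 * 1024]).is_err());
}
```

Which would explain why the upload appears to succeed while the later `safe cat` finds nothing.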

The connection of this to the original topic is: what happens when we land on edge cases for node fullness? What sort of edge cases would we find if we used a target node size instead?

9 Likes

There are going to be some natural optima that arise for the best mix of bandwidth, storage, and compute capability. These will change over time due to technology improvements. You are currently experimenting with what the minimum joining requirements might be. This is very important imo. Nodes that won’t improve the “health” of a section should not be allowed to join. Part of section “health” may be to demand a diverse mix of node sizes at all times.

Imo, a method for culling old technology will help the network evolve. This would be analogous to the BTC difficulty setting, but would be governed by joining rules based on current section statistics. For example, a section could target a normal distribution of resource capabilities with a specific mean and standard deviation. Based on the capabilities of the nodes asking to join, the mean could be shifted over time.
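To make that concrete, here is a toy Rust sketch of such a joining rule; every name is invented for illustration, and a real policy would need far more care:

```rust
// Toy sketch of a statistics-based joining rule, analogous to a difficulty
// setting. All names are invented for illustration, not from the codebase.
struct JoinPolicy {
    target_mean: f64,    // e.g. GiB; drifts upward as technology improves
    target_std_dev: f64, // how much diversity of node sizes to allow
}

impl JoinPolicy {
    /// Admit a candidate only if its capacity falls within two standard
    /// deviations of the target mean: big enough to pull its weight,
    /// while still keeping a spread of node sizes in the section.
    fn admits(&self, candidate_capacity: f64) -> bool {
        (candidate_capacity - self.target_mean).abs() <= 2.0 * self.target_std_dev
    }

    /// Nudge the target toward what joiners actually offer, which slowly
    /// culls old hardware as the overall population improves.
    fn update_target(&mut self, offered_capacity: f64) {
        self.target_mean = 0.99 * self.target_mean + 0.01 * offered_capacity;
    }
}

fn main() {
    let mut policy = JoinPolicy { target_mean: 60.0, target_std_dev: 15.0 };
    assert!(policy.admits(70.0));   // within the band, accepted
    assert!(!policy.admits(0.001)); // a near-zero-capacity node is rejected
    policy.update_target(70.0);     // the mean shifts over time
}
```

The moving average is just one way to shift the mean; the real mechanism could weight whatever section health metrics the Elders agree on.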

8 Likes