Proposal for network growth based on node size rather than node fullness

Yeah, this is quite a good focal point, really the crux of it, and I’ve not been able to answer it clearly or put it aside.

I’ve been trying to find similar pithy statements that really hit the core issues like the quoted sentence does, without reducing a complex topic to soundbites.

One thing I believe in general about the relation between node size and network growth: as data is uploaded to the network, at some point there must be a decision to either a) add another hop, i.e. split the section, or b) accept a larger average node size.

How is this decided?

Both actions have pros and cons. I wonder if there’s any clear balance / optimum? It feels a lot like an economic curve, with performance vs security as the axes.
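
Just to make the decision point concrete, here’s a minimal sketch (in Rust, since that’s the network’s language) of what choosing between (a) and (b) could look like if a target node size were the deciding signal. All names and thresholds are hypothetical, not anything from the actual codebase.

```rust
// Hypothetical sketch of the split-or-grow decision point. All names and
// thresholds here are illustrative, not taken from the real Safe Network code.

/// Target average node size the section aims for, in bytes (e.g. 1 TB).
const TARGET_NODE_SIZE_BYTES: u64 = 1_000_000_000_000;

enum GrowthAction {
    SplitSection,      // option (a): add another hop
    AcceptLargerNodes, // option (b): let the average node size drift upward
}

/// Decide what to do when uploads push the section past its target.
fn on_capacity_pressure(total_stored: u64, node_count: u64, min_nodes_to_split: u64) -> GrowthAction {
    let avg_node_size = total_stored / node_count;
    if avg_node_size > TARGET_NODE_SIZE_BYTES && node_count >= min_nodes_to_split {
        GrowthAction::SplitSection
    } else {
        GrowthAction::AcceptLargerNodes
    }
}

fn main() {
    // e.g. 40 TB stored across 20 nodes -> 2 TB average, above the 1 TB target
    match on_capacity_pressure(40_000_000_000_000, 20, 14) {
        GrowthAction::SplitSection => println!("split the section"),
        GrowthAction::AcceptLargerNodes => println!("accept larger nodes"),
    }
}
```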

Just brainstorming and keeping a mental diary…

5 Likes

I’m just revisiting this thread and these thoughts occurred:

  1. We have a set chunk size, so it would be nice if a node had a capacity of X chunks, rather than a variable size. This would add consistency across the section (all section nodes are more equal), aiding data management. The balance across the network would also be improved for the same reason.
  2. The capacity of a section is then a known quantity, and it becomes easier to estimate the size of the network. Maybe this can be used to optimise further down the line?
  3. Churn could be easier to handle, as each node (when empty) has the capacity to store the entire content of a vacated node. A swap-out, swap-in approach could be used, whereby a promoted adult just takes on all duties of the old one.
  4. Users can spin up multiple nodes when they have a lot of storage. I understand the footprint is small and this is feasible.
  5. The chunk capacity could be re-evaluated at split time. Perhaps a certain network age could trigger a change, should it need adjusting. As capabilities improve, the efficiency gains of doing so may become apparent. It could be assumed that as the network ages, hardware capacities will increase, and chunk capacity should reflect this. If it were algorithmic, the magic numbers could be just the original capacity and a multiplier (see the sketch below).
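
A minimal sketch of point 5, assuming the “magic numbers” really are just an original capacity and a multiplier that gets re-evaluated at split time. All names and values are made up for illustration.

```rust
// Illustrative only: a node's chunk capacity derived from an original base
// capacity and a multiplier that could be re-evaluated at split time.

const CHUNK_SIZE_BYTES: u64 = 1024 * 1024; // the set chunk size (1 MiB)
const ORIGINAL_CAPACITY_CHUNKS: u64 = 1_000_000; // base capacity at network start

/// Capacity in chunks for the current epoch of the network.
fn node_capacity_chunks(capacity_multiplier: u64) -> u64 {
    ORIGINAL_CAPACITY_CHUNKS * capacity_multiplier
}

/// The same capacity expressed in bytes, for sizing hardware.
fn node_capacity_bytes(capacity_multiplier: u64) -> u64 {
    node_capacity_chunks(capacity_multiplier) * CHUNK_SIZE_BYTES
}

fn main() {
    // Multiplier 1 at launch (~1 TB per node), doubled at some later milestone.
    println!("{} bytes", node_capacity_bytes(1));
    println!("{} bytes", node_capacity_bytes(2));
}
```
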
6 Likes

The aim in some ways is to decide how the decision process will work. What is to be decided (are we deciding average node size, or whether to split, join, depart, evict, reward, increase security, etc.)? Who is to decide it (elders, adults, developers, clients, apps, a combination of those, or some voting concept)? When is it to be decided (when we design it, when we start the network, when a node starts, on the fly, at some epoch, at some technological milestone, etc.)?

My hope was that target node size improves (not solves) the situation for decision-making. I believe it does when compared to using node fullness. It’s not the kind of transcendental algorithm that comes with a proof of correctness, but I feel it iterates toward a better decision situation.

On a related note, I’ve recently been reading about justice and democracy and social choice theory and welfare economics and that sort of thing, and at one point it crossed into the field of Algorithmic Game Theory. Looking at the range of papers by the students of Noam Nisan, it seems these people would have a lot of opinions about the Safe Network economic model. Hard to know whether academic input will speed up or slow down progress, but I’m sure the safenetwork community will find the papers authored by these students of interest. Inbal Talgam Cohen in particular seems to do relevant work.

4 Likes

I was just pondering this some more. Just some brainstorming.

Does a defined node size also help with restoring a previously disconnected adult? If a node rejoins, the process could be similar: get, then prove, that you have all the chunks allocated to you.

If it is a new (replacement) adult, it would have to download all the chunks and return their hashes as proof. If it is the previous adult reconnecting, it should already have the data and could return the hashes as proof. Either way it is very much the same process.
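
As a rough sketch of that “get, then prove” step (purely illustrative; a real design would use a cryptographic hash and whatever message types routing actually provides):

```rust
// Rough sketch of a "prove you hold these chunks" challenge. A rejoining adult
// already holds the data and can answer straight away; a brand-new replacement
// must fetch the chunks first. Illustrative only: a real design would use a
// cryptographic hash (e.g. SHA3), not std's DefaultHasher.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Elders would send a random nonce so stored answers can't simply be replayed.
fn storage_proof(chunk_bytes: &[u8], nonce: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    chunk_bytes.hash(&mut hasher);
    nonce.hash(&mut hasher);
    hasher.finish()
}

/// The adult answers the challenge for every chunk it is responsible for.
fn answer_challenge(chunks: &[Vec<u8>], nonce: u64) -> Vec<u64> {
    chunks.iter().map(|c| storage_proof(c.as_slice(), nonce)).collect()
}

fn main() {
    let chunks = vec![vec![1u8, 2, 3], vec![4, 5, 6]];
    println!("{:?}", answer_challenge(&chunks, 42));
}
```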

1 Like

I’ve always liked this concept. A standard chunk is 1 MB, so it becomes straightforward for the standard vault size to be 1M chunks (about 1 TB). This means that a standard section holds about 100 TB. Now, network growth becomes very easy to discuss, as it all comes down to section splits.

The issue is that of message hop latency. For each order of magnitude the standard vault size is too small (1 TB vs 10 TB), we typically need about 3 more hops to find a chunk in a standard octree/quadtree data structure (like xorspace). However, it was mentioned in the latest dev update that section routing has been flattened, so this may no longer apply. IIUC this makes standard 1 TB vault sizes make even more sense.
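
A back-of-envelope of that size-vs-depth relationship, under the simple quadtree-over-xorspace model described above. The exabyte total, 100 vaults per section, and the pure log-depth model are all assumptions for illustration; actual hop counts depend on routing details.

```rust
// Back-of-envelope: smaller vaults -> more sections -> deeper quadtree.
// The exabyte total, 100 vaults/section and pure log-depth model are all
// illustrative assumptions, not measurements.

/// Sections needed to hold `total_bytes`, ignoring redundancy copies.
fn sections_needed(total_bytes: f64, vault_bytes: f64, vaults_per_section: f64) -> f64 {
    (total_bytes / (vault_bytes * vaults_per_section)).ceil()
}

/// Depth of a quadtree over that many sections (log base 4); deeper ~ more hops.
fn quadtree_depth(sections: f64) -> f64 {
    sections.log2() / 2.0
}

fn main() {
    let total = 1e18; // pretend the network stores an exabyte
    for vault_tb in [1.0_f64, 10.0] {
        let sections = sections_needed(total, vault_tb * 1e12, 100.0);
        println!(
            "{vault_tb} TB vaults -> {sections:.0} sections, tree depth ~{:.1}",
            quadtree_depth(sections)
        );
    }
}
```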

Lastly, to handle all the other variability of old hardware, mobile, and IoT devices, just let people create cache nodes of any size they want.

p.s. If you delve into the history of hard disk manufacturing, these issues have been dealt with before. That’s why we have fixed block/sector sizes. After 25 years they needed to boost their standard size because it leads to higher performance or better economics (512 B vs 4 KB sectors).

It would seem so. Imo everything becomes easier to discuss and rationalize and code once you choose a fixed standard vault size. Even the vault code can now be mmap optimized for higher performance and robustness.

3 Likes

Interesting points regarding size vs hops. This does seem to suggest a minimum size to maintain a reasonable performance expectation.

Arguably, this also suggests that as data storage demand increases (which it likely will over time), nodes should grow in size to reflect this. If not, the number of hops will increase disproportionately over time, slowing overall network performance.

Instinctively, it feels like node size should therefore be a function of network age (size). In practical terms, at each split, the node size could increase by a factor of section age.

It would make the network less balanced between the top (ancestors) and bottom (descendants), but the average number of hops would surely decrease.
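
Sketching the “node size as a function of splits” idea above. The growth rule below is just a placeholder (the suggestion is to tie it to section age); the base size and percentage are invented for illustration.

```rust
// Illustrative growth rule: node size increases by some factor at each split.
// The base size and percentage are placeholders, not proposed values.

const BASE_NODE_SIZE_BYTES: u64 = 1_000_000_000_000; // 1 TB at network start

/// Node size after `splits` splits, growing by `growth_percent` each time.
fn node_size_after_splits(splits: u32, growth_percent: u64) -> u64 {
    let mut size = BASE_NODE_SIZE_BYTES;
    for _ in 0..splits {
        size = size * (100 + growth_percent) / 100;
    }
    size
}

fn main() {
    // e.g. 25% growth per split: after 10 splits a node is ~9.3x the base size.
    println!("{} bytes", node_size_after_splits(10, 25));
}
```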

Given the limited usefulness of small devices when storing data, it would be good to find other uses for them.

As I think @mav mentioned before, their other characteristics could be their advantage. E.g. smart phones are nearly always connected/on, they roam between many masts, are often idle, don’t suffer from power cuts, etc. If there was a way to tap into these in some way, that would help the network health overall.

Maybe these can come later though, once the core is solid.

Just a small update on section size: Elder count is now 7, recommended section size is 14, and an average section size would then be around 20 nodes, i.e. about 20 TB (at 1 TB per node).

3 Likes

This may have been the case before, but one of the recent dev updates discussed a routing optimization that will flatten the hops significantly. It may yield a situation where the hops are essentially constant for any request (~5 hops). They’ve basically gone from one big quadtree to a quadtree forest. This makes the case for fixed-size vaults even stronger imo.

The elder count of 7 was understood. What happened to the typical section sizes of the past with between 64 and 128 vaults? A section of only 20 nodes total seems rather small.

2 Likes

It was chosen at some point based on conclusions about manageability.
There are also some nice benefits of super small sections. Comes with an overhead of course, with the large proportion of Elders. I assume we’ll tweak that number.

Messaging. I’ve been thinking about a messaging feature that could be suitable for these kinds of devices to handle. Might put it up soon.

5 Likes

Could you describe some of those? I only see downsides at the moment. Granted, there is a necessary balance between computation, communication, and storage, but I’m about one or two orders of magnitude away from you in my thinking. These are the assumptions I’ve made to reach that conclusion; they could be wrong, but imo they are reasonable for a “safe” network:

  1. The elders store only metadata, no real data chunks. (This is what I thought dirvine had described as necessary. I’m not saying this is ideal, although I think it may be.)

  2. Each chunk has 4 copies in a section, and 4 other copies in a backup section, and 4 more in a second backup section (12 copies total on the network, could also be 8/8/8 to yield 24).

  3. The network should be able to withstand 66% of all nodes going offline at the same instant in all sections; these nodes will never come back online. (A rough survival calculation combining this with point 2 follows after the list.)

  4. Cheap home routers can handle >16,000 concurrent NAT connections.

  5. Adults will eventually perform high performance computation.

  6. Any adult that can’t offer decent performance (min vault size, computation level) becomes either a cache node or a pure computation node.
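
The back-of-envelope referenced in point 3: if 66% of all nodes vanish at once and each copy of a chunk is (simplistically) assumed to be lost independently with probability 2/3, the chance of losing every copy comes out as below. Independence is a big assumption; real placement within sections is correlated, so treat this purely as intuition.

```rust
// Rough survival check for assumptions 2 and 3: probability that all copies of
// a chunk are lost when 66% of nodes vanish, treating each copy as lost
// independently with probability 2/3. Purely a simplification for intuition.

fn p_chunk_lost(copies: i32, p_copy_lost: f64) -> f64 {
    p_copy_lost.powi(copies)
}

fn main() {
    for copies in [12, 24] {
        println!(
            "{copies} copies -> P(all copies lost) ~ {:.5}",
            p_chunk_lost(copies, 2.0 / 3.0)
        );
    }
}
```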

If my memory serves me well, I think @dirvine talked about the security model being better with many nodes and few elders. Maybe something like: it is more difficult to become an elder from a larger pool of adults.

I may be wrong though, or maybe things have changed?

No argument there. @oetyng just described a change to super small sections with the number of adults being on par with elders. See above.

1 Like

It’s all a balance. If you model only on Sybil (which we have), then a lower Elder count and an increased Adult count (over 60 adults) gives great results. However, this is a single view, i.e. that number of nodes could be practically unrealisable (as Elders have much more work to do and many more clients to connect to etc.). This is where math proofs are tough; they ignore stuff like latency, cpu capacity etc. However it was a good exercise, as it allowed me to show the routing team that small numbers of Elders were good (this was not believed). In any case this exercise had negative effects as it was taken as the only view. Man, it’s hard to get this all into folks’ heads, especially stubborn Engineers.

So then real life creeps in. Sybil resistance and relocating nodes are vital. We know that, and the simulations proved this. So then splitting more often is more secure from (again) one angle: smaller sections split more, and more sections means less power per section. It also means less work per Elder.

So right now we have gone for small sections, but lots of them.

There is a lot more to balance, more sections == more infrastructure complexity. So we will test and tweak and debate these points as we go.

5 Likes

EDIT: @dirvine just covered this above.

Is my math correct in thinking that if there are 14 adults and 7 elders, and the promotion is done in order from oldest to youngest, then any new node is guaranteed to become an elder in two section splits? Sounds easy.

Of course, to get a supermajority of elders in one section would still be difficult, but my math skills and knowledge of the design stop here.

It would be 7 Elders + 7 Adults when at the recommended section size of 14.
Then the number of Adults increases until there would be at least 7 Elders in each sibling after a split. So a section would usually grow to ~30-something nodes and then split.
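
A toy version of that split rule, just to pin down the arithmetic; the eligibility test and numbers here are invented for illustration.

```rust
// Toy check of the split rule described above: only split when each sibling
// (decided by the next bit of a node's XOR name) would end up with at least
// ELDER_COUNT elder-eligible nodes. Eligibility details here are invented.

const ELDER_COUNT: usize = 7;

fn can_split(eligible_in_sibling_zero: usize, eligible_in_sibling_one: usize) -> bool {
    eligible_in_sibling_zero >= ELDER_COUNT && eligible_in_sibling_one >= ELDER_COUNT
}

fn main() {
    // With a perfectly even spread 14 nodes would already pass, but real
    // distributions are lumpy, hence sections growing to ~30-something first.
    assert!(can_split(7, 7));
    assert!(!can_split(6, 10));
    println!("split only when both siblings would have >= {ELDER_COUNT} elders");
}
```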

4 Likes

At network start-up then yes, but as we grow the number of sections, it’s not so simple. You could start with age == 5 while the rest of the nodes are min age 40; as groups split and nodes are relocated (including you), you are not as quick to become an Elder. Then when you do, you are an Elder in a section you could not choose. So it’s a wee bit more involved, but you are on the right track, and this is why a larger number of Adults is, from only that perspective, more secure; a number of 60 in our simulations made Sybil unfeasible.

The number of sections is similar in many ways to increasing Adult counts when you look at it from this angle too. So yip, lots to test, tweak and debate, but the tweaks are simple in code (a single const value change).

2 Likes

It is the maximum size, but it does not have to be the average size. It will depend on the types of applications most used on the network, but my impression is that the average size will be much, much lower.
It would be interesting to know the average size of an information transfer in today’s world. That would give us a clue as to what the average chunk size could be.

This is from 2019 but gives us an idea of the immense number of chunks that could be generated on a daily basis.

Yeah, I know that is what has been described. My point, as the proverbial broken record, is that it should be standardized to a 1 MiB chunk (or some other fixed size) for everything. The same goes for the nodes: they should all be a fixed size (like 1 TiB). The simplicity and optimizations that become available with this decision are worth it imo.

That doesn’t matter. The design choice between variable-size and fixed-size data “chunks” was figured out a long time ago. No need to reinvent the wheel here. Fixed sizes win for simplicity, economics, reliability and performance.

The hard drive in your computer likely has a sector size of 4 kB. No matter the size of the data (1 B, 1 kB, 2459 B), it consumes a full 4 kB sector. Some filesystems do optimizations to pack file fragments into a single sector, but not usually. Instead, it’s better for very small files to be stored directly in a metadata inode as a local payload.

Some of the sectors on the hard drive contain metadata (inodes), and some contain actual data. The Safe Network design has been slowly converging on an analogous method and should just fully embrace the tried-and-true methodology imo. As a big HDD in the sky, every “chunk” should be the same fixed size (1 MiB). The client apps can decide how to pack those chunks to save cost. Small immutable data can be packed or padded with zeros. A mutable chunk would be a single 1 MiB whose payload could change.

If more granularity is desired for immutable data, then a different fixed size like 16 KiB, 32 KiB, 64 KiB or 128 KiB could be selected. However, the current choice of 1 MiB is nice. It allows for high payload efficiency, is a good choice for max throughput, and is easy to rationalize.
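
A sketch of the fixed-size idea for immutable data: pad small payloads with zeros up to the standard chunk size and split larger data into full chunks. A real format would also record the payload length (and handle self-encryption etc.) so the padding can be stripped; this is just the shape of it.

```rust
// Illustrative only: split data into fixed-size chunks, zero-padding the last
// one. A real format would also store the payload length so padding can be
// stripped on read, plus encryption etc.

const CHUNK_SIZE: usize = 1024 * 1024; // 1 MiB

fn to_fixed_chunks(data: &[u8]) -> Vec<Vec<u8>> {
    data.chunks(CHUNK_SIZE)
        .map(|part| {
            let mut chunk = part.to_vec();
            chunk.resize(CHUNK_SIZE, 0); // pad to the full fixed size
            chunk
        })
        .collect()
}

fn main() {
    let chunks = to_fixed_chunks(&vec![7u8; 2_500_000]); // ~2.4 MiB of data
    println!("{} chunks of {} bytes each", chunks.len(), chunks[0].len());
}
```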

1 Like

How? If you use any app (an email or messenger app, for example), the size of the chunk will be defined by the size of each data exchange. I don’t see how we could set that to a fixed size.

The packaging would only work in a situation of continuous use, which is by no means assured.

Many ways to do it, just spitballing here:

All emails sent to a recipient could be stored in one mutable chunk, and all emails received from a sender could be stored in another. The app could archive and pack the communications into immutable data as it sees fit (interleaved conversations, archiving the entire compressed mail database, etc.).
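
For instance (names hypothetical, just to show the shape of the packing): the app appends messages into a single 1 MiB mutable chunk and, once a message no longer fits, archives the contents to immutable data and starts a fresh one.

```rust
// Spitball sketch of the mailbox idea: append messages into one 1 MiB mutable
// chunk; when a message no longer fits, the app would archive the chunk to
// immutable data and start over. All names here are hypothetical.

const CHUNK_SIZE: usize = 1024 * 1024;

struct Mailbox {
    chunk: Vec<u8>, // payload of the single mutable chunk
}

impl Mailbox {
    fn new() -> Self {
        Mailbox { chunk: Vec::with_capacity(CHUNK_SIZE) }
    }

    /// Returns false when the chunk is full and should be archived.
    fn append(&mut self, msg: &[u8]) -> bool {
        if self.chunk.len() + msg.len() > CHUNK_SIZE {
            return false;
        }
        self.chunk.extend_from_slice(msg);
        true
    }
}

fn main() {
    let mut outbox = Mailbox::new();
    assert!(outbox.append(b"hello from the safe network"));
    println!("{} bytes used of {}", outbox.chunk.len(), CHUNK_SIZE);
}
```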

Again, a smaller fixed chunk size makes things easier and more convenient at the cost of lower network payload efficiency. But we might see rather good network throughput with chunks as low as 4kiB or 16kiB. Perhaps MaidSafe already tested this and found 1MiB to work well?

What you pick as the fixed size, whether it’s 1MiB or 4kiB, is based on a variety of factors and could use some testing. There is no point to going smaller than 4kiB and there is no point going larger than 1MiB (unless testing shows some magic throughput happens at 2MiB, 4MiB, or 64MiB… which is unlikely). The main point I’m trying to make is that the size should be fixed.

IIRC, in some work I did storing chunks on AWS S3 in the past, performance plateaued around 64 KiB and higher (but that isn’t really an apples-to-apples comparison for how Safe might perform).

2 Likes