How many nodes are needed at a minimum?


I have been looking for a safe peer-to-peer technology for at least five years, and I appreciate the development here. My questions are about a specific scenario in Civil Protection:

My background is in disaster management applications used by GO/NGO catastrophe relief organisations. One of the smallest cells in this scenario is a “command post” (CP). You can think of it as a small working group on a local network only. We are typically talking about 3-10 computers. Especially at the beginning of such an event there is no internet connection. (Think of events like the earthquake in Haiti in 2010 or the earthquake/tsunami in Japan in 2011.)
That said, especially at the beginning of such a situation the CP has to function by itself without an internet connection. It makes sense not to rely on a local client/server architecture, because that just adds more single points of failure.

Since data in the SAFE network is shredded and distributed, the question is: how does this work if you only have 2 or 3 machines?

Maybe for bandwidth or “security” reasons a CP would not want to spread the data to machines outside a defined set (based on, say, a key or a MAC address). Is this something that could/will be supported?

Thinking about a local workgroup where files are not only shared with the group but also worked on by multiple members (like XLSX lists for staff and utility planning, or DOCX situation reports): how does this integrate into the SAFE network? Is there any kind of versioning embedded? Or is there a locking feature so that only one user can change a file while it is open?


Hi, welcome to the forum.

When the network starts off, a client will connect at the IP level using CRUST. It will then do as much as possible to connect to the network. If there is no network, it will bootstrap one as the first node. Then, when others connect (say 3 or 4), you don't need 28 out of 32 nodes to confirm an action (POST or PUT data); you need 3 out of 4 or so. But when different people start different networks around the planet, those networks can't be merged later. Every network is unique; nodes have to join it one by one.
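The 28-of-32 versus 3-of-4 figures suggest quorum scales as a fixed fraction of the group. A minimal sketch of that idea, assuming the 28/32 fraction from above (the function name and exact rounding are my own illustration, not SAFE's actual parameters):

```python
import math

def quorum_needed(group_size, fraction=28 / 32):
    """Illustrative quorum: confirmations needed as a fixed fraction
    of the close group. The 28/32 fraction comes from the figures in
    the post above; the real network's parameters may differ."""
    return math.floor(group_size * fraction)

print(quorum_needed(32))  # 28 confirmations in a full close group
print(quorum_needed(4))   # 3 confirmations in a tiny 4-node network
```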

The node closest to the data in XOR space will get the Chunk. So if a node has 345 as its name, a Chunk with id 344 will probably be stored at that node (in reality these addresses are extremely big numbers). From the outside it will look like random storage.
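A tiny sketch of XOR closeness with small illustrative numbers (real SAFE addresses are enormous, e.g. 256-bit; the function names here are my own):

```python
def xor_distance(a, b):
    """XOR distance between two network addresses."""
    return a ^ b

def closest_node(chunk_id, node_names):
    """Pick the node whose name is XOR-closest to the chunk's id."""
    return min(node_names, key=lambda name: xor_distance(chunk_id, name))

nodes = [12, 345, 1000]
# 344 XOR 345 == 1, much closer than to 12 or 1000,
# so node 345 gets the chunk, as in the example above.
print(closest_node(344, nodes))  # -> 345
```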

Then you would have to run a local SAFE network, I think. The “real” network doesn't make a distinction between one node and another.

Yes, there's work on this. A GET request will get you data, for free. A PUT request will put your data (the Chunks) on the network; this will cost some Safecoin. If you want to PUT structured data to the network (data that can be changed later), you'll pay more Safecoin for that.


Perhaps I need clarification, but if I understand you correctly you want to share private data with a specific group. If I recall correctly, that's completely within MaidSafe's capabilities. Or are you saying you're concerned your data would spread beyond your network? Keep in mind that when you upload data to the SAFE network it is encrypted, so even if it “spread” beyond your local network, no one could read the chunks if you kept the files private.

No, I'm not concerned about the security aspect. The environment I am talking about just doesn't have, or allow using, bandwidth to transfer data outside the LAN. Also, the number of machines that can participate is limited and may be only 2 for quite some time. However, spreading the data to other machines is very useful a) for sharing and b) for redundancy. Typically we are talking about battery-backed laptops: why not use what you have instead of UPS-buffered servers that are a hassle to maintain in a scenario like that?

So the question is: is there a way to limit the “spreading” of data to a certain network branch, a predefined group of machines, etc.?

So, if you joined the existing “global” network, shared some data, then switched off your machines, removed internet access for all of them, and switched them on again, they would “know” each other and the network. What happens if you only use 2 machines in such an environment? Would data from machine 1 then be shredded and stored only on machine 2? Or can you not be sure that all data will be “backed up” to machine 2 at all? It sounds like the SAFE network's algorithms would store some of the shredded data on machine 2 and try to reach other machines to store the other pieces. But since it can't reach them, not all data would be stored redundantly. Correct?

Ok, thanks, I did not fully get this before, but now “farmers” make sense. The question is whether you can switch that off if you run your own “local SAFEnet” as you described above (like an NGO safe net) for the given purpose.

If you have 4 machines, each Chunk you PUT will be stored (seemingly at random) across the 4 machines; maybe, because the network is so small, some Chunks will end up on 2 machines instead of 4. But this is a very extreme situation. The network is designed to scale to millions of nodes. When you “log on”, the software requests your personal datafile. When it arrives at your computer, you use your password to unlock it. Inside is the data atlas to all of your files.

For the very small local network you are talking about, I think SAFEnet won't be the best choice. If you are in a bunker somewhere after a nuclear explosion, with only 12 computers left, you should probably use something else.

IMO there's no mileage in using SAFE on a small network. It's completely outside the design parameters, and you're likely to hit fault conditions or unwanted behaviour unique to this situation that were never encountered during testing.

It might seem to work, but it's probably not worth the risk.

You could run it isolated in a small or medium enterprise, but the question would be why. Security increases significantly the more nodes you have. If you have a network in one building, or three, the chance of a disaster taking out a third or more of your network is real, and if your data is chunked and spread 3 ways across that network, the chance of data loss is significant. If you have files spread all around the world, your chances of data loss are nearly nil.

You will be much better off just running SAFE as SAFE.


Just a thought, but would it be possible to run SAFE in a VM? Also remember you can (or should be able to) run SAFE on cell phones, tablets and other mobile devices as well as desktops and laptops, so while your PC might be what you use to farm Safecoin, you might be able to set up more than 2 or 3 nodes by utilizing mobile devices you have with you. You might also consider investing in a few ODROIDs or Raspberry Pi computers to act as additional nodes (and hey, they could farm extra Safecoin for you), so that you could complete sets of 4 and set up a small network.