SAFE Datachain (work in progress)

There’s a secret repository showing up in one of the devs’ GitHub accounts. It’s called “Data_Chain” and reads like something very big and new for SAFE. But sssssshhhht!! These are secret links that don’t deserve their own topics yet, as long as they don’t show up in the main github/Maidsafe. But still a great read ;-).

Link 1
Link 2
Link 3


wow, this sounds really great!


Whoa! Impressive stuff and nice find!

The 3rd link above has great info in it.

In short, this looks like a way of providing persistent data on nodes (managing clashes between different age sources). This must be what David was talking about with data persisting between test nets.

Whisper it quietly… time stamping of data is also mentioned through group consensus! :open_mouth:


@polpolrene a spy! A spy! :joy:


“These more reliable nodes have a vote weight of 2 within a group
and it would therefore require a minimum of 3 groups of archive nodes to collude against the
network. It is important to note that each group is chosen at random by the network.”

phew … this might make an attack harder >.<
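The quoted vote weights can be sketched as a toy calculation (the function names and the majority-of-total-weight quorum rule here are my own assumptions, not the paper’s):

```python
# Toy sketch of weighted group voting: archive nodes count double,
# ordinary nodes count once, and a decision needs strictly more than
# half of the group's total vote weight. All names are hypothetical.

def total_weight(nodes, archive_nodes):
    """Sum of vote weights: 2 for archive nodes, 1 for the rest."""
    return sum(2 if n in archive_nodes else 1 for n in nodes)

def decision_passes(voters, group, archive_nodes):
    """True if the voters' combined weight exceeds half the group's."""
    return total_weight(voters, archive_nodes) * 2 > total_weight(group, archive_nodes)
```

So one archive node plus one ordinary node can outvote two ordinary nodes in a four-node group, which is why colluding against the network would need agreement across several independent, randomly chosen groups.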


Guilty! But don’t tell me you didn’t read any of the links just because they came from a 007 ;-).


No worries, clearly you are a good guy (even though I can’t see the colour of your hat :slight_smile: ).


I’m still trying to digest this doc but, besides the permanent nodes, the possibilities are endless. It adds the power of infinite blockchains to the core.


This is all being done in parallel and will mean we can have data proven to be network guaranteed. So it will potentially solve many issues:

  1. Archive nodes (the nodes with long lives and good resources) - something we were looking at post launch [check past forum threads]
  2. Recovery of worldwide outage of the network
  3. Facilitate ledger based SD (I need to RFC this, but basically you should be able to add a ledger flag to any SD and it’s stored forever). Imagine you want a receipt for a safecoin transaction, then you have it, or comment history etc. Or even you want a ledger based currency for businesses etc. …
  4. Linked chains (links in chains across chains), allowing graph analysis of the network over time (without time, just entropy).
  5. … I don’t think this even scratches the surface, but it will make offline data very secure and the network much more able to maintain very high levels of integrity, at the very least.
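To make the chain idea above concrete, here is a minimal sketch of a hash-linked chain of data elements, where each link records the signers from the close group and commits to the previous link (all names and structures here are my own toy assumptions, not MaidSafe’s actual design; real signatures are replaced by plain member ids):

```python
import hashlib

def h(data: bytes) -> str:
    """SHA-256 hex digest, used as the chain's hash function."""
    return hashlib.sha256(data).hexdigest()

class Link:
    """One toy chain link: a data hash, the group members who signed
    it, and the hash of the previous link."""
    def __init__(self, data_hash, signers, prev_hash):
        self.data_hash = data_hash
        self.signers = sorted(signers)   # ids of group members who "signed"
        self.prev_hash = prev_hash       # hash of the previous link

    def link_hash(self):
        payload = self.data_hash + "".join(self.signers) + self.prev_hash
        return h(payload.encode())

def build_chain(data_items, group):
    """Build a chain from a 'genesis' marker, one link per data item."""
    chain, prev = [], "genesis"
    for item in data_items:
        link = Link(h(item), group, prev)
        chain.append(link)
        prev = link.link_hash()
    return chain

def verify(chain):
    """Walk the chain: every link must commit to its predecessor."""
    prev = "genesis"
    for link in chain:
        if link.prev_hash != prev:
            return False
        prev = link.link_hash()
    return True
```

Tampering with any earlier link changes its hash and breaks the commitment in the next link, which is what makes the chain tamper-evident back to genesis.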

There are some limitations for now, like having to present the chain to a group that can attest to the last signers of a link having been known to the network. I suspect a few more areas like this, but we will hopefully be able to document these limitations. The one I mention is recoverable though with a slight change to node startup.

But this is a side project/play area for me (that Viv and Andreas have poked at already, and will several more times, to find fault) that I hope to complete in the next few days, then present again. It may fail :wink: so in the spirit of openness it’s happening in the wild. I think it can easily show a fully decentralised blockchain type device for many different chains of data that also interlink easily (so sidechain type functionality) and a little bit more. I suspect the wider community will “get this” and it may help folk understand the decentralised approach to cryptographically secured data of any type.


Shieeet, that damn Satoshi Nakamoto debate all over again!



In the spirit of openness, I think this deserves a thread of its own! Potentially, big news! :sunglasses:


Genius inventor outed! But no, not Satoshi this time. :slight_smile:


After several reads I’m still trying to figure out the solution to this problem:

  • Vaults don’t store chunks as they were uploaded (PUT) by a Node. The data_managers obfuscate the chunks before they send them to a Vault.

So which hashes appear in the data_chain? The hashes of the obfuscated data, or the hashes of the chunks? And let’s say there’s a worldwide power outage. The non-persistent data is gone completely, so we have to rely on archive nodes. As the archive nodes try to reconnect, they need to get their old addresses in XOR space, connect to the old group of data_managers, and after that there needs to be a reconnect between the obfuscated chunks (in Vaults) and the real chunks (reconstructed by the data_managers). Which personas are responsible for that? Data_managers only?


I would caution you in the use of your words. Satoshi really did create something immensely useful. Maidsafe is as of now still totally unproven. Everyone on this forum is a speculator of the most advanced kind.


Yes, this part is easy actually. Any group will accept an old address they have previously seen if it holds a data chain longer than any three existing members’. Of course they need to hold the data as well, so they are challenged by existing archive nodes. The challenge is simple: take 1000 random data elements, prepend them with a random value, and have the new node tell us the new hash values. There we are, no transfer of data required.
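The challenge described above can be sketched in a few lines (a toy illustration under my own assumptions: real data ids and crypto aside, the point is that only hashes cross the wire, and a node that no longer holds the data cannot answer):

```python
import hashlib

def challenge_answers(stored_elements, nonce):
    """What the existing archive nodes compute locally: hash of each
    chosen data element with a fresh random nonce prepended."""
    return [hashlib.sha256(nonce + elem).hexdigest() for elem in stored_elements]

def respond(node_storage, element_ids, nonce):
    """What the joining node must answer from its own copy of the
    data. node_storage maps element id -> data bytes."""
    return [hashlib.sha256(nonce + node_storage[i]).hexdigest() for i in element_ids]
```

Because the nonce is fresh for every challenge, precomputed hashes are useless: the responder must actually hold the data to produce the right answers.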

Archive nodes will hold data from the beginning of time. One of the nice things about data chains is that only the last link needs to be known to the majority of the current group. From there, all data from anywhere in the network can be republished. So a data chain will start from a very wide address space (like a massive group) and the top of the chain will be data only in this group. Some of the group’s data can also be found deeper in the chain if historically it was put on early.

[Edit - so you can think of chains as cryptographic proof of data over time (entropy) and not related to xor at all. It’s a huge list of data that all goes back to a “genesis” block :slight_smile: if that makes sense.]


And it appears you don’t fully understand that what Satoshi created is still an experiment by any measure.

Q: Is Maidsafe an experiment that has the same disruptive potential as Bitcoin/blockchain?
A: Yes, more so.

Q: Do speculators play a role in cryptocurrency?
A: Speculators are integral to the success of all cryptocurrencies and without them exchanges, ecosystems and human capital growth could not exist.

Is this new data chain capacity, if it works out, expected to introduce any new form of latency to its own function or the network as a whole?

SAFE in concept just seems to get better and better all the time. With this and the coin, if it can be pulled off, it seems to have swallowed the functionality of the bitcoin vision, and possibly improved on it quite a bit.

It should have almost zero impact on latency. It will, however, allow different “shape” nodes to exist. So capable nodes will fight to store as much data as possible to become archive nodes. Smaller nodes with less capability will try and get what they can for the small period they connect. Very small nodes may only stick with session data and not try and get any historical data at all.

So this helps with things like imbalanced node capabilities (asymmetric broadband, small disk space, low cpu etc.): as long as a node can do the routing/network messages it can provide value and help consensus, while holding very little data, just enough for the odd reward.


I foresee a problem. Well, maybe, in this semi-fantastical scenario:

The reason (or a reason) why the Bitcoin blockchain is resistant to double-spending is that an attacker would have to recreate the blockchain back to its beginning, which is computationally infeasible.

In the event that SAFEnet collapses, there won’t be an existing chain that is linked to all the data that was on the network before the collapse, but a lot of fragments that are being pieced back together. So a well-equipped adversary might have a large number of computers ready to go in the aftermath of a collapse, and proceed to construct plausible chains with fictitious information, claiming to be the real SAFEnet. :slight_smile:


Except that the last link has to be signed by existing known close group members :slight_smile: That’s not forgeable, and therefore a chain that ends there is not forgeable either. This is a key point. So in a total collapse all nodes attempt to restart to the last known network (they all know their existing close group and their own key pairs). Then total collapse is recoverable as well, as long as we can detect network collapse per node (we can, easily: the group width will be huge on startup; think of close_group distance as difficulty).
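That “last link” acceptance rule can be sketched very simply (my own toy version, with member-id strings standing in for real signatures and `quorum` as an assumed parameter): a presented chain is only accepted if enough of its final link’s signers are members the verifying group already knows.

```python
# Toy sketch of the last-link check: a chain is accepted only if its
# final link was signed by a quorum of members the current group
# already recognises. An attacker fabricating a chain from scratch
# cannot produce a last link signed by known, live group members.

def accept_chain(last_link_signers, known_members, quorum):
    """True if enough of the chain's final signers are already known."""
    recognised = set(last_link_signers) & set(known_members)
    return len(recognised) >= quorum
```

This is why a well-equipped adversary rebuilding plausible chains after a collapse still fails: the fabricated chains cannot terminate in signatures from the genuinely known close group.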