As you will have spotted, a new testnet is amongst us. Time to jump on board if you haven’t already! Download safe and upload some files, and see how payment for each upload is automatically subtracted from your wallet. All Monopoly money for now, of course, but it’s great to see it in action. Let us know in the thread of any errors you spot.
Payment for data is a big step forward, enabled by the fact that the network is now pretty well-behaved and stable (touch wood). Memory per node is hovering around the 50 to 60 MB range, whereas before it was in the hundreds of MB. But there’s still stuff to iron out. Thanks for the feedback so far, and keep it coming. The next stage will be to trial the basic payment flow to nodes and some sort of simple pricing mechanism.
There’s no solid NAT traversal as yet, unfortunately, but those with cloud nodes or who are happy with port forwarding can try running a node or two. Let us know how it goes.
Now that we are in a stable place, we’re also going to start picking apart the workings of the network. This week @roland is going to explain what happens during data replication.
General progress
@Chriso is working on centralised logging to make it easier to analyse errors in aggregate after testnets. Ultimately, we hope to enable community members to do this in real-time, but we’re keeping it simple for now.
Thomas Eizinger and Max Inden of the rust-libp2p team have merged a fix from @bzee for a bug that saw new nodes continue to dial (message) other nodes even after they were connected to the network. @bzee is in regular contact with that team, who say that AutoNAT and hole punching should be working ‘soon’. One of the maintainers has a tentative solution to the port-reuse issue: AutoNAT needs to stop reusing ports on outgoing probes. Fingers crossed for some early progress on that.
@aed900 is looking at improving the UX around how unconnected nodes handle timeouts.
@bochaco is advancing a feature that fetches all spends for a payment proof concurrently to speed up the DBC verification process, and is also working on an approach to make registers content-addressable.
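To give a flavour of the concurrent approach, here’s a minimal sketch in Rust; SpendAddress, SignedSpend and fetch_spend are hypothetical stand-ins for the real client API, not the actual implementation:

```rust
// Sketch: fetch all spends for a payment proof in parallel rather than
// one at a time, using the `futures` crate.
use futures::future::join_all;

#[derive(Clone, Copy)]
struct SpendAddress([u8; 32]); // hypothetical network address of a spend

struct SignedSpend; // placeholder for the real signed-spend type

async fn fetch_spend(addr: SpendAddress) -> Result<SignedSpend, String> {
    // In the real client this would query the nodes closest to `addr`.
    let _ = addr;
    Ok(SignedSpend)
}

/// Fetch every spend referenced by a payment proof concurrently,
/// failing verification if any single fetch fails.
async fn fetch_all_spends(addrs: &[SpendAddress]) -> Result<Vec<SignedSpend>, String> {
    let futures = addrs.iter().copied().map(fetch_spend);
    join_all(futures).await.into_iter().collect()
}
```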
@qi_ma has fixed a subtle bug with DBCs. When a client wants to spend a DBC to upload data, the nodes check whether there is an existing spend proof at the relevant address (to prevent double spends). The node holding that address will also be actively responding to GET requests, which may delay its response to the enquiring nodes, making them think there is no spend proof there. This is an extremely rare event, but we’ve seen it in the wild, and it would allow a double spend. We’re patching it by requiring five nodes to see a GET before it is acted upon.
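To illustrate the quorum idea only (this isn’t the actual node code), here’s a minimal sketch in which an address is treated as unspent only once at least five close-group answers are in:

```rust
// Sketch: don't trust a single (possibly busy) holder; wait for answers
// from at least QUORUM close-group nodes before ruling out a spend.
const QUORUM: usize = 5;

enum SpendStatus {
    Spent,    // the node holds a spend proof at this address
    NotFound, // the node has no record of a spend here
}

/// Decide whether a DBC spend can proceed, given the per-node answers
/// gathered from the close group. Any positive sighting blocks the spend;
/// fewer than QUORUM answers means we cannot safely conclude anything yet.
fn can_spend(responses: &[SpendStatus]) -> Result<bool, &'static str> {
    if responses.len() < QUORUM {
        return Err("not enough responses to rule out an existing spend");
    }
    Ok(!responses.iter().any(|r| matches!(r, SpendStatus::Spent)))
}
```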
And @joshuef is looking at dynamic payments and pricing. This includes having the client ask the nodes near the payment location what their storage cost is, and updating the upload payment accordingly. He’s also working on a basic store cost calculation based on the capacity of the record store.
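For illustration, a hedged sketch of what a capacity-based store cost might look like; the doubling curve and the MAX_RECORDS value here are assumptions for the example, not the network’s actual formula:

```rust
// Sketch: the fuller a node's record store, the more it charges.
const MAX_RECORDS: usize = 2048; // assumed record-store capacity

fn store_cost(stored_records: usize) -> u64 {
    // Fraction of capacity currently used, clamped to [0, 1].
    let used = stored_records.min(MAX_RECORDS) as f64 / MAX_RECORDS as f64;
    // Illustrative curve: base price of 1, doubling every 10% of fill.
    (2f64).powf(used * 10.0).ceil() as u64
}

fn main() {
    for filled in [0, 512, 1024, 1536, 2048] {
        println!("{filled} records -> cost {}", store_cost(filled));
    }
}
```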
Data replication
Data replication is the mechanism by which nodes distribute data to new peers as they join and transfer data from a dead peer to the remaining nodes. It ensures that data stays available in the network even as nodes join or leave. However, the approach used by Libp2p relied on a node periodically flooding the network with all of its data. This came with some downsides, including heavy network traffic and high memory usage amongst nodes. Additionally, new nodes didn’t receive the data they were responsible for, and data held by dead nodes wasn’t replicated in time. To address these issues, we implemented a custom replication process that is triggered each time there is a change in a node’s close group.
Before delving into the replication process, let’s briefly discuss how data is stored in the network. The network operates like a Distributed Hash Table (DHT), enabling data retrieval using unique keys. When someone wants to store a file on the network, they first use their DBC to pay for it and then upload it. The data (a DBC, Chunk or Register) is serialised into a series of 0s and 1s and placed inside something referred to as a Record. Although records could in principle be arbitrarily large, nodes are restricted to handling records of up to 1 MB. Each Record has its own Key, which is the XorName of the data it stores.
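As a rough illustration (not the actual implementation), a Record can be pictured as a value plus a content-derived key. In this sketch a SHA-256 hash stands in for the real XorName derivation, and the 1 MB cap mirrors the node-side restriction:

```rust
// Sketch: a Record whose Key is derived from the content it stores.
use sha2::{Digest, Sha256};

const MAX_RECORD_SIZE: usize = 1024 * 1024; // 1 MB node-side limit

type Key = [u8; 32];

struct Record {
    key: Key,
    value: Vec<u8>,
}

impl Record {
    fn new(value: Vec<u8>) -> Result<Self, &'static str> {
        if value.len() > MAX_RECORD_SIZE {
            return Err("record exceeds the 1 MB limit");
        }
        // Content-addressed: the key is a hash of the stored bytes.
        let key: Key = Sha256::digest(&value).into();
        Ok(Record { key, value })
    }
}
```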
To store the Record containing both the data and the Key in the network, the client calculates the Xor distance between the Record’s Key and the nodes in its RT (Routing Table), sorting the peers by their closeness to the Key. It then asks these nodes to “calculate and share the peers they believe are closest to the Key”. The client sorts the responses and identifies nodes that are even closer to the Key. This process continues until the eight nodes responsible for the record are found, and the record is stored with them. Libp2p abstracts this complex process away, enabling us to build our network on top of it.
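To make the distance arithmetic concrete, here’s a toy sketch of the XOR metric and the “sort peers by closeness to the Key” step that drives the lookup; the 32-byte arrays are simplified stand-ins for real peer IDs and keys:

```rust
// Sketch: the Kademlia-style XOR distance and closest-peer sort.
type Id = [u8; 32];

/// XOR distance between two 256-bit identifiers, compared as
/// big-endian byte strings (the usual Kademlia ordering).
fn xor_distance(a: &Id, b: &Id) -> Id {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// Return the `k` peers from the routing table closest to `key`;
/// with k = 8 this yields the record's eight responsible nodes.
fn k_closest(peers: &[Id], key: &Id, k: usize) -> Vec<Id> {
    let mut sorted: Vec<Id> = peers.to_vec();
    sorted.sort_by_key(|p| xor_distance(p, key));
    sorted.truncate(k);
    sorted
}
```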
Once a Record has been stored amongst the closest peers in the network, we want to maintain at least eight copies of it, even if the nodes holding them go offline. Our current replication process works as follows: each node keeps track of the peers close to it, i.e., its close group. Whenever a peer is added to or removed from the RT, we check whether this affects our close group. If it does, we send the Record Keys to the close-group peers we believe should hold the corresponding records. Upon receiving the key list, a peer that lacks one of the records requests it from the node that sent the key; if that fails, it asks the wider network for the data.
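A minimal sketch of the trigger logic described above, with hypothetical types and a stubbed-out network send; a real node would also filter the keys by each peer’s distance to them:

```rust
// Sketch: replication kicks in only when the close group changes.
use std::collections::BTreeSet;

type Key = [u8; 32];
type PeerId = [u8; 32];

struct Node {
    close_group: BTreeSet<PeerId>,
    stored_keys: Vec<Key>,
}

impl Node {
    /// Called whenever a peer is added to or removed from the routing table.
    fn on_routing_table_change(&mut self, new_close_group: BTreeSet<PeerId>) {
        if new_close_group == self.close_group {
            return; // close group unaffected, nothing to replicate
        }
        // Notify each close-group peer of the keys we believe it should
        // hold; peers then fetch any records they are missing.
        for peer in &new_close_group {
            self.send_keys(peer, &self.stored_keys);
        }
        self.close_group = new_close_group;
    }

    fn send_keys(&self, _peer: &PeerId, _keys: &[Key]) {
        // Network send elided in this sketch.
    }
}
```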
Although further optimisations are possible, the current replication approach is significantly less resource-intensive than the Libp2p one and effectively prevents data loss.
Useful Links
Feel free to reply below with links to translations of this dev update and moderators will add them here:
Russian; German; Spanish; French; Bulgarian
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!