That is a bit misleading: the code in routing was wrong (you can check the git history to see where it was wrong and, if you wish, who wrote it; you may remember). It did not work and routing had to be rewritten. No need to harp on it, though; we fixed that, as we will fix many bugs. The DNS on our docs is still updating, but if you wish to read the definitions now, please run cargo doc --open in a clone of kademlia_routing_table.
I have a copy-paste here which may help, since you did ask a bit of a question.
This crate uses the Kademlia mechanism for routing messages in a peer-to-peer network, and generalises it to provide redundancy in every step: for senders, messages in transit and receivers. It contains the routing table and the functionality to decide via which of its entries to route a message, but not the networking functionality itself.
It also provides methods to decide which other nodes to connect to, depending on a parameter bucket_size (see below).
Addresses and distance functions
Nodes in the network are addressed with a Xorable type, an unsigned integer with B bits. The XOR distance between two nodes with addresses x and y is x ^ y. This distance function has the property that no two points ever have the same distance from a given point, i.e. if x ^ y == x ^ z, then y == z. This property allows us to define the k-*close group* of an address as the k closest nodes to that address, guaranteeing that the close group will always have exactly k members (unless, of course, the whole network has fewer than k nodes).
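To make that concrete, here is a minimal Rust sketch (not the crate's API; it simply assumes u8 addresses, so B = 8) of the distance function and the selection of a k-close group:

```rust
// Hypothetical sketch, not the crate's API: addresses are u8, so B = 8.
fn xor_distance(x: u8, y: u8) -> u8 {
    x ^ y
}

fn close_group(target: u8, mut nodes: Vec<u8>, k: usize) -> Vec<u8> {
    // Sort all known nodes by XOR distance to the target address.
    // Because no two distinct nodes have the same distance to `target`,
    // the ordering is unambiguous and the k-close group is unique.
    nodes.sort_by_key(|&n| xor_distance(target, n));
    nodes.truncate(k);
    nodes
}

fn main() {
    let nodes = vec![0b0001_0000, 0b1100_0000, 0b0000_0011, 0b0111_1111];
    // The two nodes XOR-closest to address 0 form its 2-close group.
    println!("{:?}", close_group(0b0000_0000, nodes, 2)); // [3, 16]
}
```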
The routing table is associated with a node with some name x, and manages a number of contacts to other nodes, sorting them into up to B buckets, depending on their XOR distance from x:
If 2^B > x ^ y >= 2^(B - 1), then y is in bucket 0.
If 2^(B - 1) > x ^ y >= 2^(B - 2), then y is in bucket 1.
If 2^(B - 2) > x ^ y >= 2^(B - 3), then y is in bucket 2.
...
If 2 > x ^ y >= 1, then y is in bucket B - 1.
Equivalently, y is in bucket n if the longest common prefix of x and y has length n, i.e. the first binary digit in which x and y disagree is the (n + 1)-th one. We call the length of the remainder, without the common prefix, the bucket distance of x and y. Hence x and y have bucket distance B - n if and only if y belongs in bucket number n.
The bucket distance is coarser than the XOR distance: Whenever the bucket distance from y to x is less than the bucket distance from z to x, then y ^ x < z ^ x. But not vice-versa: Often y ^ x < z ^ x, even if the bucket distances are equal. The XOR distance ranges from 0 to 2^B (exclusive), while the bucket distance ranges from 0 to B (inclusive).
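As a rough illustration (again assuming u8 addresses, so B = 8, whereas the real crate is generic over a Xorable type), the bucket index is simply the number of leading zero bits of the XOR distance, and the bucket distance follows from it:

```rust
// Minimal sketch assuming u8 addresses (B = 8); the real crate is generic
// over a Xorable type.
const B: u32 = 8;

// Bucket index of y relative to x: the length of their common prefix, i.e.
// the number of leading zero bits of the XOR distance x ^ y.
fn bucket_index(x: u8, y: u8) -> u32 {
    (x ^ y).leading_zeros()
}

// The bucket distance is the coarser measure: B minus the common prefix length.
fn bucket_distance(x: u8, y: u8) -> u32 {
    B - bucket_index(x, y)
}

fn main() {
    let (x, y) = (0b1010_0000u8, 0b1010_1111u8); // agree on the first 4 bits
    assert_eq!(bucket_index(x, y), 4); // y belongs in bucket 4 of x's table
    assert_eq!(bucket_distance(x, y), B - 4); // bucket distance B - n = 4
    println!("bucket {}, bucket distance {}", bucket_index(x, y), bucket_distance(x, y));
}
```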
Guarantees
The routing table provides functions to decide, for a message with a given destination, which nodes in the table to pass the message on to, so that it is guaranteed that:
If the destination is the address of a node, the message will reach that node after at most B - 1 hops.
Otherwise, if the destination is a k-close group with k <= bucket_size, the message will reach every member of the k-close group of the destination address, i.e. all k nodes in the network that are XOR-closest to that address, and each node knows whether it belongs to that group.
Each node in a given address' close group is connected to each other node in that group. In particular, every node is connected to its own close group.
The number of total hop messages created for each message is at most B.
For each node there are at most B * bucket_size other nodes in the network that would accept a connection, at any point in time. All other nodes do not need to disclose their IP address.
There are bucket_size different paths along which a message can be sent, to provide redundancy.
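The redundancy in that last point is easiest to see in code. The following is only a simplified illustration of the idea, not the crate's actual target-selection logic: pick up to bucket_size contacts from the table that are XOR-closest to the destination, giving that many relay choices.

```rust
// Simplified illustration only, not the crate's actual target-selection code:
// choose up to `bucket_size` contacts from the table that are XOR-closest to
// the destination, giving `bucket_size` redundant relay choices.
fn route_targets(destination: u8, mut contacts: Vec<u8>, bucket_size: usize) -> Vec<u8> {
    contacts.sort_by_key(|&c| destination ^ c);
    contacts.truncate(bucket_size);
    contacts
}

fn main() {
    let contacts = vec![0b0000_0001, 0b1000_0000, 0b0100_0000, 0b0000_0111];
    // With bucket_size = 2, the message is relayed via the two closest contacts.
    println!("{:?}", route_targets(0b0000_0100, contacts, 2)); // [7, 1]
}
```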
However, to be able to make these guarantees, the routing table must be filled with sufficiently many contacts. Specifically, the following invariant must be ensured:
Whenever a bucket n has fewer than bucket_size entries, it contains all nodes in the network with bucket distance B - n.
The user of this crate therefore needs to make sure that whenever a node joins or leaves, all affected nodes in the network update their routing tables accordingly.
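To show what that invariant means in code, here is a hypothetical check. The names, the u8 addresses and the bucket_size value are all assumptions made for the example, and a real node would of course not know the full set of network addresses:

```rust
// Hypothetical sketch of checking the invariant for one node `x`, given the
// complete set of network addresses (which a real node would not know).
const B: usize = 8; // u8 addresses, for illustration only
const BUCKET_SIZE: usize = 4; // assumed value, just for the example

fn bucket_index(x: u8, y: u8) -> usize {
    (x ^ y).leading_zeros() as usize
}

// `table[n]` holds the contacts currently stored in bucket n of x's routing table.
fn invariant_holds(x: u8, table: &[Vec<u8>], network: &[u8]) -> bool {
    table.iter().enumerate().all(|(n, bucket)| {
        if bucket.len() >= BUCKET_SIZE {
            return true; // a full bucket may omit other nodes at that distance
        }
        // An under-full bucket must contain every network node whose bucket index is n.
        network
            .iter()
            .filter(|&&y| y != x && bucket_index(x, y) == n)
            .all(|y| bucket.contains(y))
    })
}

fn main() {
    let x = 0b0000_0000u8;
    let network = [x, 0b1000_0001, 0b0100_0000];
    let mut table = vec![Vec::new(); B];
    table[0].push(0b1000_0001); // bucket 0: differs from x in the first bit
    table[1].push(0b0100_0000); // bucket 1: common prefix of length 1
    assert!(invariant_holds(x, &table, &network));
}
```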
Resilience against malfunctioning nodes
The sender may choose to send a message via up to bucket_size distinct paths to provide redundancy against malfunctioning hop nodes. These paths are likely, but not guaranteed, to be disjoint.
The concept of close groups exists to provide resilience even against failures of the source or destination itself: If every member of a group tries to send the same message, it will arrive even if some members fail. And if a message is sent to a whole group, it will reach most of its members, even if some of them malfunction.
Close groups can thus be used as inherently redundant authorities in the network that messages can be sent to and received from, using a consensus algorithm: A message from a group authority is considered legitimate if a majority of group members have sent a message with the same content.
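As a rough sketch of that consensus rule (the function and message types here are made up for the example, and checking that the senders are distinct group members is omitted), a message is accepted once more than half of the k group members have sent identical content:

```rust
use std::collections::HashMap;

// Hypothetical sketch of the consensus rule: a group message is accepted once
// more than half of the k group members have sent the same content.
fn group_message_accepted(received: &[(u8, String)], k: usize) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for (_sender, content) in received {
        *counts.entry(content.as_str()).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|&(_, count)| count > k / 2)
        .map(|(content, _)| content.to_string())
}

fn main() {
    // Three of the four group members agree, so the message is accepted.
    let received = vec![
        (1, "put chunk".to_string()),
        (2, "put chunk".to_string()),
        (3, "put chunk".to_string()),
        (4, "delete chunk".to_string()),
    ];
    assert_eq!(group_message_accepted(&received, 4), Some("put chunk".to_string()));
}
```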
As for the code for the routing table specifically, that is here: GitHub - maidsafe-archive/kademlia_routing_table: Kademlia Routing Table implementation
Routing itself is here.
I hope this helps.