Good point. You’ve made me re-read the doc in question, and it’s unfortunately out of date too. We really struggle with documentation and communicating the current plans outside the Troon hub. As you’ve probably noticed, @dirvine does a mountain of work on this front, but we need more than just him. There was talk recently of taking on some technical authors to address this problem, but, typically, because it’s not directly related to writing code, I haven’t kept up to speed with how that’s progressing.
Anyway, as for this specific case, your time wasn’t completely wasted. Much of the doc is still valid, but I’ll focus on the section about the lookup. To quote the paper:
E. Recursive lookups
It is very important in distributed computing not to hold state on remote actions. This is because remote actions are just that, remote and therefore out of your control. Kademlia handles this well with iterative searches carried out in a loosely parallel fashion as described in II.
With managed connections, however, there is a different situation as we are working with a very current network of nodes who are all in communication. In such a case a recursive lookup may prove significantly faster and also with much less network traffic.
This recursive lookup can now be a single message to the closest node in the routing table, who recursively passes on to their closest node and so on. On any failure the recursion would continue from the previous node to the failure, who has an open RPC that will fail to the failed node and can easily select the next closest node.
On finding a node or value the requester is passed the contact tuple of the node in question from the last node in the chain (not the actual node who has the answer) and then continues with normal kademlia logic, which may involve getting the κ closest nodes in a find node situation or simply getting a value in the get value situation. Caching and last node requests in addition to caching (in future) can then also cache the value in a find value request and do so without being requested.
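Just to make the mechanism described there concrete, here’s a rough, in-memory sketch of that recursive lookup with the failure handling the paper describes. Every name in it (`NodeId`, `Network`, `recursive_find` and so on) is made up for illustration; it isn’t the actual Routing code, and as I explain below the real thing differs in a couple of important ways.

```rust
// Toy sketch of the recursive lookup from the quoted section. All names are
// illustrative only; this is not the real Routing implementation.
use std::collections::HashMap;

type NodeId = u64; // toy identifier; real IDs are far larger

/// Each node only knows the peers it holds managed connections to.
struct Node {
    peers: Vec<NodeId>,
}

/// In-memory stand-in for the network so the recursion can be exercised
/// without real RPCs; an absent entry models an unreachable (failed) node.
struct Network {
    nodes: HashMap<NodeId, Node>,
}

impl Network {
    /// One hop of the recursive lookup, run at `current`: if we are the
    /// target, answer; otherwise pass the request on to our closest peer.
    /// If that peer fails, the recursion continues from *this* node with the
    /// next closest peer, as in the paper's failure handling.
    fn recursive_find(&self, current: NodeId, target: NodeId) -> Option<NodeId> {
        if current == target {
            return Some(current);
        }
        let node = self.nodes.get(&current)?; // `None` here models a failed node
        let mut peers = node.peers.clone();
        // Kademlia-style closeness: XOR distance to the target.
        peers.sort_by_key(|p| *p ^ target);
        for peer in peers {
            // Only recurse towards the target, so the chain cannot loop back.
            if peer ^ target >= current ^ target {
                continue;
            }
            if let Some(found) = self.recursive_find(peer, target) {
                return Some(found); // the answer flows back up the chain of open RPCs
            }
            // That peer (or its chain) failed: fall through to the next closest.
        }
        None
    }
}
```

In the real network each hop would of course be a message over a managed connection rather than a local function call, but the shape of the recursion and the fall-back on failure is the part the paper is getting at.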
Much of the quoted text is still correct, but there are a couple of critical differences between it and what we now have implemented.
At this level (the Routing layer), there is no longer any notion of values. All values are stored and retrieved via the Vault overlay’s protocols, which is probably where much of the confusion in this thread has stemmed from. Routing simply provides a method of delivering messages from one node to either a single peer or a close group of peers.
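Roughly, the interface I have in mind is along these lines; these names are mine for the sake of illustration, not the real API:

```rust
/// Illustrative only: a 64-byte identifier standing in for a real NodeID.
type NodeId = [u8; 64];

/// The two kinds of destination Routing can deliver a message to.
enum Destination {
    /// Exactly one node.
    Node(NodeId),
    /// The close group of nodes surrounding this address.
    CloseGroup(NodeId),
}

/// A message as Routing sees it: an address plus an opaque payload. Anything
/// to do with values lives in the Vault overlay's protocols, not here.
struct RoutingMessage {
    destination: Destination,
    payload: Vec<u8>,
}
```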
The other significant divergence is that the last quoted paragraph is mostly wrong. Regardless of whether a message is destined for a single node or a close group, no contact tuples are passed around. As far as I’m aware, we use NodeIDs (more or less the hash of the Client’s or Vault’s public key) exclusively in Routing, with the single exception of the case where two peers wish to connect to each other.
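To illustrate what I mean by a NodeID (and only to illustrate; I’m not claiming this is exactly how the real code derives it, nor which hash it uses), it’s something along these lines:

```rust
// Illustration only: deriving a node's identifier by hashing its public key,
// so Routing never needs to pass IP/port contact tuples around.
// Uses the `sha2` crate for the hash; the real code may well differ.
use sha2::{Digest, Sha512};

type NodeId = [u8; 64];

fn node_id_from_public_key(public_key_bytes: &[u8]) -> NodeId {
    let digest = Sha512::digest(public_key_bytes);
    let mut id = [0u8; 64];
    id.copy_from_slice(digest.as_slice());
    id
}
```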