Without doubt. This model gives us greater interoperability with many projects. Imagine using libp2p and having inter-network comms etc. We could also have inter-network Sybil defence, meta tags, backups and more. Decentralisation could be made much simpler if truly decentralised projects collaborated at a logical level, regardless of project “leaders” etc.
It’s a bit deep, but I think there is a groundswell of folk who can now see the benefits of a serverless network that has many different facets and therefore opportunities. It just needs folk not to be precious, but to embrace a lot more.
We are currently looking at using the resource provider model in libp2p instead of fine-tuning Kad as we do now. There are pros and cons, but interop is an easy goal, and we are hoping to fully test the provider model there this week.
In short, that means we get providers of X near an address. That could be a Safe node, an Avalanche node, an Eth node, Filecoin/IPFS etc. It’s just a matter of ensuring this does not stress group consensus too much; I think it will be fine. But if we crack that (by crack I mean understand the philosophy there) then I suspect Archive nodes, DAG Audit nodes (SNT audits) and so on become very simple indeed.
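For flavour, here is a rough sketch of what “providers of X near an address” looks like with libp2p’s Kademlia behaviour. `start_providing` and `get_providers` are real libp2p-kad calls (names as in recent versions); the tag values and helper function names are purely illustrative, not our actual design.

```rust
// Rough sketch: advertising and finding "providers of X" via libp2p Kademlia.
// start_providing / get_providers are real libp2p-kad APIs (recent versions);
// the tags below are illustrative only.
use libp2p::kad::{self, store::MemoryStore, RecordKey};

/// Announce that this node provides service `tag`
/// (e.g. an archive node, an SNT audit node, a Filecoin/IPFS gateway...).
fn announce_provider(
    kademlia: &mut kad::Behaviour<MemoryStore>,
    tag: &[u8],
) -> Result<kad::QueryId, kad::store::Error> {
    kademlia.start_providing(RecordKey::from(tag.to_vec()))
}

/// Ask the DHT which peers provide `tag`. Results arrive later as
/// OutboundQueryProgressed events on the swarm.
fn find_providers(
    kademlia: &mut kad::Behaviour<MemoryStore>,
    tag: &[u8],
) -> kad::QueryId {
    kademlia.get_providers(RecordKey::from(tag.to_vec()))
}
```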
An area that is not complete yet is anti-Sybil in terms of offline key generation (i.e. generating millions of keys and targeting an address). We have mitigations there with the close group plus the recursive hash, to ensure a chain of groups only has a single piece of data under its control. That’s super simple and very powerful.
However, libp2p uses Quinn (QUIC), which uses rustls. So right now they use a few crypto algorithms like Ed25519, RSA etc. BUT the rustls folk are looking at pluggable crypto. That gives us another opportunity to kill offline key attacks: we can use BLS keys. I have spoken with the rustls/Quinn folks about this and it’s coming soon.
Then what we can do is limit how nodes join, without preventing any node from joining. So we constrain where they join and ensure offline keys are useless. It works like this:
- Node creates a keypair (BLS)
- He gets the X closest nodes to the keypair (say X == 4)
- He derives a key from that old key plus the hash of the closest nodes to that key.
- He joins the network at the new key.
As the network churns there is a time limit on him joining, as the closest nodes to the old key will change. So he may need to do this a few times.
This means a node cannot (easily) target a close group; the derivation makes doing so infeasible.
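To make that flow concrete, here is a minimal sketch in Rust, assuming blsttc’s `derive_child` (the same BLS derivation we use for DBCs). The `closest_node_ids` helper is hypothetical, standing in for whatever network lookup returns the X closest nodes; the hash choice (SHA-256) is also just an assumption for illustration.

```rust
// Minimal sketch of the derived-key join flow described above.
// Assumes blsttc's SecretKey::derive_child; `closest_node_ids` is a
// hypothetical stand-in for a network lookup, not a real API.
use blsttc::SecretKey;
use sha2::{Digest, Sha256};

/// Hypothetical: return the ids of the X nodes closest to `target`.
fn closest_node_ids(_target: &[u8], _x: usize) -> Vec<Vec<u8>> {
    unimplemented!("network lookup, e.g. a Kademlia closest-peers query")
}

fn derive_join_key(x: usize) -> SecretKey {
    // 1. Node creates a BLS keypair.
    let old_sk = SecretKey::random();
    let old_pk = old_sk.public_key();

    // 2. Get the X closest nodes to that key's address (say X == 4).
    let closest = closest_node_ids(&old_pk.to_bytes(), x);

    // 3. Hash those closest node ids and derive a child key from the
    //    old key plus that hash.
    let mut hasher = Sha256::new();
    for id in &closest {
        hasher.update(id);
    }
    let index = hasher.finalize();
    let new_sk = old_sk.derive_child(index.as_slice());

    // 4. The node joins the network at the new key's address. Churn changes
    //    the closest-nodes set, so this derivation is only valid for a while
    //    and may need to be redone before joining.
    new_sk
}
```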
Anyway, a lot of opportunity and it’s simple stuff (we use BLS derivation like this for DBCs anyway, so it’s not new to us).