Safe Computation, Apps & Plugins

Speaking only for myself, I’m just a backseat coder, and not a good one either! I think this thread is a great brainstorm and perhaps an interested and decent coder like yourself @happybeing will come along and begin attacking the problems here and fleshing something out. Probably not a rush in any case as this would definitely be a post-beta upgrade.

From what I understand of this proposal, it seems like it would have less impact on the network than a full in-built decentralized VM, and I find that very attractive as I’d really like to see consensus computing and oracles on Safe in the future — but wouldn’t want to ‘break’ the network’s speed and efficiency to have that functionality.

Thanks. I don’t understand the distinction between smart contract and plugin; both are inputs, code, and outputs verified by the network in some way. Maybe the answer is to define more precisely what the difference is in your scenario.

The network owned file is interesting and sounds like it might work both with your Elder + plugin approach and with my more general any-node approach. Essentially it is a central database of nodes and their willingness to perform an optional class of operation (plugin or smart contract, however they are defined).

I don’t agree that Elders can be punished by removing their status because this interferes with the core security mechanisms of the network and we should avoid that wherever possible. I agree that using Elders gives some protection against bad behaviour but I don’t think this is necessary, and think we can avoid interfering with the network’s security mechanisms as explained.

How can you enforce this? I expect people will take the easy option and have them run on the same machine. Does this matter?

I’ll think about it more later, thanks again.

1 Like

Thanks that’s helpful clarification.

I think the scheme I outlined achieves this and gives greater flexibility (and market efficiency, as in cost benefit), and doesn’t require interference with network operation (e.g. by demoting Elders).

2 Likes

Yes, that can work. I don’t think it’s necessarily better than a scheme that’s open to everyone with the trade off controlled by the client, though it may be simpler which is also a plus!

1 Like

Oh generic plug-in framework… how I want you so. :star_struck:

Would be a great BGF project eventually.

The main purpose of an Elder-run plug-in network is to port info from the clear net and/or act as oracles in a decentralized manner.

  • I’m thinking that Elders should have to prove to the plug-in and network with their sk? that they are operating both.

  • These groups of Elders running plugins should probably also have to provide redundant results to the group, so they can agree that the output matches what users within the network requested.

  • Is there any form of punishment for Elders giving wrong info?

  • Should the Elders receive further incentive for this service and what should the reward be? SN Token?
    :point_right: An app specific token that apps utilizing the general plug-in framework could reward? :point_left:

3 Likes

I imagine there is a client app and an Elder plugin. The app the client uses should query multiple Elders running the plugin … and reward those that give the consensus answer.
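That query-and-reward step could be sketched roughly like this (a toy majority vote; all names here are illustrative, not an actual Safe API):

```python
from collections import Counter

def consensus_answer(answers_by_elder):
    """Given {elder_id: answer} collected from several plugin-running
    Elders, return the majority answer and the ids that gave it."""
    counts = Counter(answers_by_elder.values())
    winner, _votes = counts.most_common(1)[0]
    agreeing = [eid for eid, ans in answers_by_elder.items() if ans == winner]
    return winner, agreeing

# The client app would then reward only the Elders in `agreeing`.
```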

Just don’t reward them.

Could be either - open to the app & plugin developer.

Not sure that any framework is needed here? Seems nodes can run what they like in addition to the network and provide oracle info … do Elders need to necessarily be involved? This is just nodes sharing with nodes and forming their own consensus.

Of course a standard bit of code for running an oracle plugin + client app would be great.

2 Likes

I was thinking a little bit about this, and to make it as seamless as possible I thought it’d be better if any SN Dapp that offers clear-net requests, or knows when it is making one, passes them on to a specific plug-in.

That way clients just interact with what they want to on the network, and if what they happen to request is sent to a plug-in for a clear-net query then they don’t have to know any better. As long as there are no privacy concerns (which I don’t think there are), it would be the most user friendly IMO.

Granted, if the plugins were Dapp-specific then maybe it could be hard for some plugins to gain traction amongst Elders. So you might have a point that just nodes coming to consensus could be more inclusive.

But the point is that there is already an element of trust established by piggybacking off of network Elders that may want to earn more SN Tokens and/or another token.

It could be as simple as: as long as they are returning the same results, it’s good enough? Collusion could be a lot easier on plugins, but if bad participants are constantly being weeded out it would probably be sufficient.

I’m sure something important can be learned from already existing oracle networks and how they handle things which I’ve actually never looked into. Something I’ll have to get to.

I think that is a good option too, and the most flexible. Personally I would like the option to have a token mined upon successful retrieval of specific information, and the ability to set parameters of the token supply, mining reward over time, mining difficulty, etc. Perhaps those could be part of a DBC API that feeds into a plug-in API.

Not sure how it would all fit together to be honest but I definitely know what functionality I’d like to see.

2 Likes

Also tagging @intrz, @happybeing, and @oetyng here.

So now without Elders but instead having Libp2p, which many other projects use for their de facto connections, could we not now allow anyone to run a plug-in by having the SN peer communicate with other networks, like perhaps ChainLink for oracle services, over Libp2p?

I still think this concept could be extremely useful. Say we want content to have proper meta tags and ontologies, to be stored as RDF/Linked Data, or to get verifiable real-time information from outside the network, etc. A plug-in could do that, no?

Just wondering if others think this has actually become more accessible, possible, or more effective now that there is close group consensus vs Node age/Elder group/Section consensus.

5 Likes

Without doubt. This model allows us greater operability with many projects. Imagine using libp2p and having inter-network comms, etc. We can also use inter-network Sybil defence, meta tags, backups and more. The possibilities of decentralisation could be made much simpler if truly decentralised projects collaborated logically, regardless of project “leaders” etc., but at a logical level.

It’s a bit deep, but I think there is a groundswell of folk who can now see the benefits of a serverless network that has many different facets and therefore opportunities. It just needs folk to not be precious, but to embrace a lot more.

We are currently looking at using the resource provider model in libp2p instead of fine tuning Kad as we do. There are pros and cons, but interop is an easy goal and we are hoping this week to fully test the provider model there.

In short, that means we get providers of X near an address. So that could be a Safe node, Avalanche node, Eth node, Filecoin/IPFS, etc. It’s just a matter of ensuring this does not stress group consensus too much; I think it will be fine. But if we crack that (by crack I mean understand the philosophy there), then I suspect Archive nodes, DAG Audit nodes (SNT audits) and so on become very simple indeed.
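As a toy sketch of “providers of X near an address”, here is a Kademlia-style lookup by XOR distance (the ids, the fixed key length, and the function names are my own assumptions, not the libp2p API):

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style distance between two equal-length ids."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def providers_near(target: bytes, providers: dict, k: int = 4) -> list:
    """Return the ids of the k registered providers XOR-closest to
    `target`. `providers` maps provider_id -> resource type
    (e.g. "safe", "eth", "filecoin")."""
    return sorted(providers, key=lambda pid: xor_distance(pid, target))[:k]
```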

An area that is not complete yet is anti-Sybil in terms of offline key generation (i.e. generating millions of keys and targeting an address). We have mitigations there with the close group plus the recursive hash, to ensure a chain of groups only has a single piece of data under their control. That’s super simple and very powerful.

However, libp2p uses Quinn (QUIC), which uses rustls. So right now they use a few crypto algorithms like Ed25519, RSA, etc. BUT the rustls folks are looking at pluggable crypto. That gives us another opportunity to kill offline-key attacks: we can use BLS keys. I have spoken with the rustls/Quinn folks on this and it’s coming soon.

Then what we can do is limit how nodes join, but not prevent any node joining: constrain where they join and ensure offline keys are useless. It works like this:

  • Node creates a keypair (BLS)
  • He gets the X closest nodes to the keypair (say X == 4)
  • He derives a key from that old key plus the hash of the closest nodes to that key.
  • He joins the network at the new key.

As the network churns, there is a time limit on joining, as the closest nodes to the old key will change. So the node may need to do this a few times.

This means a node cannot easily target a close group; the derivation makes it infeasible to do so.
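The derivation step above could look something like this sketch, where SHA-256 stands in for real BLS key derivation purely to show the data flow (every name here is hypothetical):

```python
import hashlib

def derive_join_key(old_pubkey: bytes, closest_node_ids: list) -> bytes:
    """Derive the key a node must join at: the old key combined with
    a hash of the current X closest node ids. Because the closest set
    changes with churn, a precomputed (offline) key quickly becomes
    useless for targeting a chosen close group."""
    closest_hash = hashlib.sha256(b"".join(sorted(closest_node_ids))).digest()
    return hashlib.sha256(old_pubkey + closest_hash).digest()
```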

Anyway, a lot of opportunity, and it’s simple stuff (we use BLS derivation like this for DBCs anyway; it’s not new to us).

13 Likes

That process is like having to see the bouncer before going into the club. :smile: Getting checked at the door.

So I am interested in a unique use case for a plug-in but am hitting roadblocks in my understanding. If anyone could clarify or help work through it with me, it would be highly appreciated.

To start, what is the best implementation for a plug-in?

Does a plug-in provably run on hundreds or more Safe Network nodes?

Are they just lightweight nodes outside of SN running scripts and connecting to SN via libp2p?

How are they incentivized to run? Can they be rewarded by the foundation? Do they need their own token? I think we need fewer tokens, imo.

Then the real unique use case I was speaking of:

The goal.

  • Have content be properly tagged
  • Stored as RDF with ontologies
  • Indexed

Perhaps a user pays for the upload, then platforms can access that, cross check it, add meta data for cheap enough?

I am wondering if deduplication might be a hurdle. Obviously it is desired for platforms to not be intermediaries in any way but I’m struggling with some things.

  • If the user directly uploads the content, the user has their wallet tied to the content. The platform cannot pass royalties on to a Verified Artist.
  • Having the platform upload on their behalf might put the platform in some form of legal trouble. Not desired.
  • The platform uploading the data after modifying what the user has already uploaded could be done in two ways:
    • The platform just links tags to the file. Pays less storage cost. Doesn’t have royalty rights, but probably isn’t legally liable. Seems like the best solution to me.
    • The platform adds metadata to the actual file, then uploads it as unique data. Pays the whole storage cost and may now also be legally liable.

Anyone have ideas or thoughts on this?

I guess it just needs clients to connect to these plug-ins so content is more useful before they upload to the network.

Seems like a platform could possibly have the client run through a plug-in before the client even queries the network for storage cost and operations.
If it is new non existing content then the plug-in would have to be bypassed.

5 Likes

This one is interesting. There are options open to us. An easy plugin would be a Safe node that can subscribe to a Resource Provider.

We don’t have that in code, but I can imagine we can provide a plugin API for this. The node would need to be a true Safe node, as that’s easiest. Then we have some API that can run a Resource Provider (this resource provider is a libp2p thing).

With that then we can have users search for a resource provider near some address.

In terms of monetisation, I think we need to open our minds to creating several schemes to pay nodes. It could be that nodes who also run X resource provision are paid in some fashion. It could be from the foundation directly (easy, but untrustworthy in the strict sense). It could be that we have some voting mechanism, i.e. nodes running X agree to a payment of Y, where users define X but the network must define Y (the payment).

It is probably easier to allow this to be configurable from the provider of the plugin, so adopting any mechanism they wish, but paid in SNT. i.e. their users pay the requested payment to the resource provider.

With this, I feel we can open up an interesting API that allows these resource providers to almost do what they wish. It could be APP devs use this scheme to provide apps and agree payment for them and so on.

It’s a huge discussion as you know, but the options we have now are pretty good and simple. Each app/resource/service though will need to find that magic formula themselves AFAIK?

Brainstorm mode on/

10 Likes

Yes to all of this and comment bookmarked! The closer tied to the network via actual SN nodes and earning actual SNT, the better. Having network APIs and configurability for a plug-in, even better.

What do you think about data being modified by a plug-in before storage cost queries/upload?

5 Likes

I imagine that alt-nodes would have a safe-site/XOR-URL that they collectively use. So you’d send your data to that site address along with a fee (or maybe they respond with a query for payment), and when payment is made, their nodes process the request.

E.g. send a video asking for a timestamp. They apply it and upload it, then send you the link for the uploaded video with timestamp.
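That request/payment flow might be sketched like so (the type, field names, and fee check are all assumptions for illustration, not an actual alt-node protocol):

```python
from dataclasses import dataclass

@dataclass
class AltNodeRequest:
    data_url: str    # XOR-URL of the already-uploaded data
    operation: str   # e.g. "timestamp"
    fee_paid: int    # SNT sent along with the request

def handle(req: AltNodeRequest, quoted_fee: int) -> dict:
    """Respond with a payment query until the fee is covered; once
    paid, process the data and return a link to the result."""
    if req.fee_paid < quoted_fee:
        return {"status": "payment_required", "fee": quoted_fee}
    # ...fetch req.data_url, apply req.operation, upload the result...
    result_url = f"{req.data_url}?op={req.operation}"  # placeholder link
    return {"status": "done", "result": result_url}
```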

3 Likes