Safe Computation, Apps & Plugins

I just don’t see how you could propose a distributed compute system without people working from the same baseline. I understand what you’re saying, that as long as the answer is correct it doesn’t matter how they get there, but it complicates so many more things.

How are you proposing that someone running a compute node limits its resource usage without containerizing it?

I just think you are exposing the user (and network) to way too many risks. Resource consumption issues, security issues - there’s no way to stop them. A nefarious plugin could take down the entire network at once. Containerized, they are of little (or at least less) threat. I’m not suggesting plug-ins spin up containers; I’m suggesting the compute node runs as a container, and the plugins run inside it, as a playground area.

Apologies in that I have not read all the replies.

It is certainly an interesting idea.

There is, though, the issue that the storage is so well integrated into the network protocol that it may be difficult (at least for now) to make it a swappable module for another system like a blockchain. Maybe later on, as the network software undergoes more restructuring over the life of the network, it may get to a point where the pure storage components can be swapped out and replaced by another requiring the consensus system.

Although it would seem to me that you could add a component that essentially provides non-exposed (to APPs) APIs and provides the means for another network to be built with ADDons using these APIs for consensus purposes etc. But this would essentially be another network.

The alternative then would seem to be a set of APIs that provide semaphore operations (atomic actions) across the network. The atomic action would somehow have an address component, so that the section handling that XOR address does the atomic action and signs off on it. This basically allows most of what you seem to need. Let the storage system be the reward for storing the resulting data required by the ADDon (now an APP).
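To make the idea concrete, here is a minimal sketch of what such an atomic-action API might look like, assuming a compare-and-swap primitive handled by the one section that owns the address (the class and function names are illustrative, not an actual Safe Network API):

```python
import hashlib

def xor_address(name: bytes) -> str:
    """Stand-in for a network XOR address (here just a SHA3-256 hex digest)."""
    return hashlib.sha3_256(name).hexdigest()

class Section:
    """Toy model of the section that owns an address performing an atomic
    action (compare-and-swap) and signing off on it."""
    def __init__(self):
        self.store = {}  # address -> value

    def compare_and_swap(self, address, expected, new):
        # Applied only by the one section handling this address, so the
        # update is atomic from the network's point of view.
        current = self.store.get(address)
        if current == expected:
            self.store[address] = new
            return True
        return False
```

Because exactly one section is responsible for any given XOR address, serialising the operation there is what makes it atomic network-wide.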

I think I get it now. Below is my understanding - I like to summarise to understand things - and a few questions that came to mind. I’m not qualified to comment on the technical feasibility of what you’re suggesting, or the possible security issues, but is this more or less it?

What is a plugin? – an external script that talks to the outside world and also to the Safe Network

How can we know if the plugin is trustworthy? – same sort of mechanism as Node Age, whereby a misbehaving plugin loses trust.

A plugin could extend the functionality of the network by allowing individual Elders to run a script, or download a blockchain, or prove that data is trustworthy before uploading it to Safe.

Q: What happens to the plugin if the Elders supporting it are demoted or go offline?

Any Elder can configure their config file to support e.g. bitcoin_plugin, a Java computation plugin, a YouTube video download plugin, etc.

‘Supporting’ Elders don’t run any computation, they just indicate in their config file that they support e.g. bitcoin_plugin. Computation is done by the plugin, with the results of any computations passed to supporting Elders. [See @Antifragile comment below: computation can be done anywhere, but it is external to the network].

Elders are rewarded for processing a valid result but punished if they fail to deliver the consensus result. After being punished they also lose the ability to support the plugin for a certain period of time.
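The reward/punish bookkeeping described here could be sketched as follows; the ban period and all names are illustrative assumptions, not anything defined in the proposal:

```python
BAN_PERIOD = 100  # assumed: cycles an elder is barred from supporting the plugin

class PluginReputation:
    """Toy bookkeeping: reward elders that deliver the consensus result,
    punish those that don't, and bar them from the plugin for a period."""
    def __init__(self):
        self.banned_until = {}  # elder_id -> cycle at which the ban lifts

    def can_support(self, elder_id, now):
        return self.banned_until.get(elder_id, 0) <= now

    def record(self, elder_id, delivered_consensus, now):
        if delivered_consensus:
            return "rewarded"
        self.banned_until[elder_id] = now + BAN_PERIOD
        return "punished"
```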

Clients (ie users) can also call on the plugin functionality. Requests will be forwarded to supporting Elders by the network, which retains a list of supporting Elders for that particular plugin. The Elders then pass the request to the plugin and return the result.

The results returned by the plugin are validated by Elders in one section if there are enough Elders that support the plugin, and by additional sections if not.

Q: Is this cross-sectional consensus currently possible?

If the results returned by an Elder differ (from those of the majority, supermajority?), then that Elder is demoted, and this information is propagated around the network.

In this way a result (yes/no) is generated by parallel computation (quorum of Elders of sections agreeing). The result can then be cryptographically signed and returned to the plugin.
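A sketch of that quorum step, assuming a supermajority threshold (the actual quorum rule isn’t specified in the thread, so 2/3 is an assumption):

```python
from collections import Counter

SUPERMAJORITY = 2 / 3  # assumed threshold; not specified in the proposal

def validate_results(results):
    """Given {elder_id: result}, return the agreed result (or None when no
    supermajority exists) plus the dissenting elders to be demoted."""
    winner, votes = Counter(results.values()).most_common(1)[0]
    if votes / len(results) < SUPERMAJORITY:
        return None, []
    return winner, [e for e, r in results.items() if r != winner]
```

The agreed result would then be signed by the section and returned, and the dissenters list is what feeds the demotion/propagation step described above.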

The client receiving the signed result can trust that it has been through the validation process.

The client can then (if they choose) upload the data to Safe, with the attached certification showing the data (whatever it is) has passed the validation process and can therefore be trusted, to an extent, by other applications.

Q: Would this process be sufficient to say the uploaded data is valid forever? Looks like a security hole in the case of bugs.

With this mechanism any user can download external data, e.g. the Ethereum blockchain, a website or a database and have them validated by the network as trustworthy. This means other users and apps can trust them too.

So plugins can be used to download data from the web and upload it to Safe, and in doing so verify the data is trustworthy in that it has been validated by the network.

Q: It could still have bugs and malware though. So the limits of that trustworthiness would need to be spelled out.

Plugins would extend the functionality of Safe to pull in blockchain transactions in real time, kind of like a platform or middle layer holding wallets, exchanges etc.

It can also be a simple mechanism for decentralising data that is currently centralised.

“Do you want others to trust your centralised data? Write a plugin, let the data collect on Safe Network via plugins and others will trust your data instantly. Plugins + signing of the results with a network-owned key are proof of data validity.”

7 Likes

Thanks for the summary @JPL I have not followed this thread but it looks super interesting.

Am I wrong in thinking that this is basically Safe Network oracles?

Just had a scan through the topic and to me it looks like it.
Is that your inspiration @Antifragile - something similar to Chainlink?

3 Likes

Another useful plug-in I could see, if this concept could work, is a torrent plugin. With the torrent plug-in someone could then make an app so that you go to a Safe site for torrent mirroring. On that site all a user would do is upload a torrent file and pay for the data upload, and the torrent plug-in would download the contents of that torrent and upload it to the Safe network with a signature that the data is correct. Then a hash of the torrent could be stored, with a signature, in a global index containing a list of all torrents that have been mirrored to Safe.
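The global index step might look like this; note that a real implementation would use the BitTorrent info-hash and a network-signed entry, whereas this stand-in just hashes the whole .torrent file:

```python
import hashlib

torrent_index = {}  # torrent hash -> Safe address of the mirrored content

def record_mirror(torrent_file: bytes, safe_address: str) -> str:
    """Record a mirrored torrent in a global index keyed by a hash of the
    torrent file (stand-in for the real BitTorrent info-hash)."""
    key = hashlib.sha1(torrent_file).hexdigest()
    torrent_index[key] = safe_address
    return key
```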

There are now a number of large datasets that are only available as torrents, as a way to save bandwidth costs. Those could be very useful to store on the Safe network, so it’s not something only useful for pirated movies.

3 Likes

Maybe instead of elders configuring which plug-ins they support, there could be special plug-in nodes that elders could contact.

Then the network manages a global list of all plug-in nodes that support each plugin. So it is like a hashmap: { plugin_id_1: [plugin nodes supporting plugin_id_1], plugin_id_2: [plugin nodes supporting plugin_id_2] }.

When elders get a request to use a certain plugin, they wouldn’t then use the IP/host of the plugin node to connect, but instead the XOR address of the node found in the list of plugin nodes.
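That registry could be as simple as the following sketch (function names are illustrative only):

```python
registry = {}  # plugin_id -> set of XOR addresses of supporting plugin nodes

def register(plugin_id: str, node_xor: str):
    """A plugin node announces that it supports a plugin."""
    registry.setdefault(plugin_id, set()).add(node_xor)

def nodes_for(plugin_id: str):
    """Elders look up plugin nodes by XOR address (not IP) to route a request."""
    return sorted(registry.get(plugin_id, set()))
```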

1 Like

A problem that came to mind in the case of the Bitcoin plug-in is that blockchains provide only probabilistic finality. This means that it is perfectly valid for different nodes to have different views on what the current state of the network is. This is why there are occasionally orphaned blocks, and why the rule of thumb is to wait 6 blocks for high value transactions to be considered final/confirmed. So the plug-in idea would be valid for things that elders would be willing to attest to as being final (ie 6 blocks deep and unlikely to be reorganized), but a “real-time view” doesn’t really exist.
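The 6-block rule of thumb amounts to a simple depth check, sketched here (the "confirmations count the block itself" convention is the common one, but conventions vary):

```python
FINALITY_DEPTH = 6  # the rule of thumb mentioned above

def confirmations(block_height: int, tip_height: int) -> int:
    """Number of confirmations a block has, counting the block itself."""
    return max(0, tip_height - block_height + 1)

def is_final(block_height: int, tip_height: int, required=FINALITY_DEPTH) -> bool:
    """Treat a block as final only once it is buried deep enough; anything
    shallower is probabilistic and elders shouldn't attest to it."""
    return confirmations(block_height, tip_height) >= required
```

The catch raised below still applies: different elders may see different tip heights at the same moment, so even this check can disagree across nodes.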

Seems like there will be trouble if people keep asking for a block that is 6 blocks old. Maybe a majority of elders say they have seen the new block and mark the 6 deep block as absolute. But the newest block hasn’t propagated yet to some others, so they vote their 5 deep block as unclear. Would they then be punished? They didn’t do anything wrong. If they are not punished, what would their incentive be to mark anything as absolute?

Yes, not very simple for Bitcoin. I think there is an issue in setting the “truth threshold” (unsure → sure) used for punishment. If it is block based, you are going to have the same issue of different nodes receiving blocks at different times. It doesn’t matter if the threshold is 6 blocks or 100 blocks. If it is time based (is block X seconds old), likewise you have different elders receiving the request at different times and therefore reaching different conclusions. Distributed consensus is certainly not simple.

So what is the incentive to ever commit to anything as final?

Okay … sorry I missed this thread when it started. Very interesting.

I’ve read all the way through and I think I understand the theory here. I would simply call this an oracle system. It has many uses.

Just off the bat, as Bitcoin blockchain was mentioned many times as an example: Why is the blockchain needed at all? Assuming oracles exist, my integrated SAFE bitcoin wallet can simply query for the results of my bitcoin addresses with the [bitcoin-address-query-plugin]. The plugin has the blockchain and deals with it. This is like any lightweight bitcoin wallet where the blockchain is managed by a centralized server - but in this instance, the data is verified by the network from multiple servers.

So we don’t need to deal with blockchain issues at all. Done.

Yeah, that would be a great one. And a nice way to integrate the existing torrents and torrent users into Safe Network.

There are tons of things this could be used for … it goes well beyond smart contracts.

Very clever @Antifragile !!

@dirvine - have you seen this thread yet? If not, wondering if you could have a peek. I’m very curious whether this is do-able or not and how hard it would be to implement. Suggest you start from @JPL’s summary post #37 above.

1 Like

The problem would be how the elders source their information. If you have them source a certain API X then you would expect them all to return the same value, but maybe some elders are unable to access the API due to ISP-level restrictions; they should then return a time-out or unable-to-retrieve status, so they don’t get punished for a null value or for being too slow.

How about DNS poisoning at a certain level, where the IP of the destination has been poisoned by a man-in-the-middle attack? The elder will get punished while he just did his job and is now being reported as malicious. This could be an attack vector if those elders are punished and people who are able to exploit this force many elders to suddenly shut down.

I believe a high success return rate should be the leading determinant, because we can never be 100% certain, but those who have the best consistency and success in reaching consensus, plus speed, should weight the final answer. Or the return data should have a status of “indecisive” with the % of peers that agreed.
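A minimal sketch of that "indecisive" status, under the assumption that time-outs are excluded from the vote rather than punished (the threshold is an illustrative guess):

```python
from collections import Counter

AGREEMENT_THRESHOLD = 0.8  # assumed cut-off for a decisive answer

def aggregate(results):
    """Return (answer, status, agreement). Time-outs (None) are excluded from
    the vote rather than punished, per the suggestion above."""
    valid = {e: r for e, r in results.items() if r is not None}
    if not valid:
        return None, "indecisive", 0.0
    answer, votes = Counter(valid.values()).most_common(1)[0]
    agreement = votes / len(valid)
    status = "decisive" if agreement >= AGREEMENT_THRESHOLD else "indecisive"
    return answer, status, agreement
```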

Also it would be great if certain services could pay the network to cache (ImmutableData) certain calls if necessary, so the call can be compared or used to yield even higher confidence.

4 Likes

I don’t think that is how it works. The elders that choose to run these plugins source them themselves. They could be from the same open-source repo, or (less likely?) they could be coded by the node manager themselves. If there are ISPs blocking access somehow, then the elder can run the code on a local server - I would expect that to be the default in most any case. Latency would be an issue if you are querying a service on the Internet.

My understanding of the Elder setup:

  1. Elder node sets up these services (servers); personally I suspect these would all be on local machine(s) of the Elder.

  2. Elder defines the location of these services in a Safe Network config file. If they have a powerful enough machine, they might be running these all on the same box (also going to depend on the resources needed by the particular plugin), and if so all that would be in the config for the plugin is the local port number of the service.

  3. Services offered by the Elder are broadcast on the Safe Network.
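The config in step 2 might have a shape like the following; the field names and endpoints are purely hypothetical, not an actual Safe Network config format:

```python
# Hypothetical shape of the plugin entries in an Elder's config file.
elder_config = {
    "plugins": {
        "bitcoin_plugin": {"endpoint": "127.0.0.1:8332"},  # local service port
        "torrent_plugin": {"endpoint": "127.0.0.1:6881"},
    }
}

def supported_plugins(config) -> list:
    """The plugin ids this Elder would broadcast as supported (step 3)."""
    return sorted(config.get("plugins", {}))
```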

1 Like

I don’t see any need for the network to have any sort of full compute layer with your proposal @Antifragile - compute can be done with plugins. In fact with plugins we could emulate other crypto VMs like Ethereum and also create much simpler or more powerful VM plugins.

All Safe Network needs to concern itself with is determining & returning consensus results (and I don’t mean to make light of this - I’m sure there would be a lot of consideration/effort involved in developing this additional API into Safe) and managing the Elders’ behavior.

1 Like

That reminds me a bit of the idea behind DLCs in Bitcoin, where oracles just publish signed results and it is up to the people making the contract which sources they want to use in their contract.
It seems that using elders for consensus wouldn’t be required for a DLC-style oracle approach. It could just be a separate reputation per operator per plugin at the application level. You’d just need some index of which nodes support a given plugin.

1 Like

@Antifragile do you propose this is part of the core network code or would it be enough just to prove you are an elder on the Safe Network and adopt the plugins as if a parallel network?

I think I know the answer just because punishment wouldn’t be the same if not integrated into core code so the beauty seems to be piggy backing off the networks age reputation/promotion/demotion to have an equal amount of security and decentralization.

As an aside, I had an idea of how to make media content be properly tagged etc to make sure that media apps aren’t just getting filled with junk or improperly labeled content. Part of that off the wall brainstorm idea was to run basically a parallel Safe Network but the nodes redundantly process some same content to make sure there is consensus on if the info is correct for content and if so they then get rewarded a token (the mining or farming bit) and that token could be given value at market or otherwise, but I won’t get into that right now.
But it seems to me this piggy backing plug-in idea could work much better and lower the complexity for anyone wanting to create a similar plugin.

I think the real attraction here for me is being able to suck content and information properly from the clear net to Safe Network in a way that is at least somewhat reliable, continuous, decentralized, and incentivized.

1 Like

Regardless of whether it is integrated or running in parallel, being required to provably be an elder gives even more reason and incentive to be an elder, as it increases network utility by serving valuable information via node age/elder status. Imo.

I wonder what you and others think of if DBC alt coins could also be rewarded as further incentive to this kind of processing…

1 Like

Look at Chainlink. I don’t think it gives a flying F about bear markets. So I think you make a great point there. Of course some plugins might do better than others, but data is data and it needs to port to Safe Network, even for it to act as a bridge for reliable data for the clear net once things progress far enough.

2 Likes

I’ve only spent a couple of hours trying to figure out a way for smart contracts to work, sprinkled with some of the snippets I’ve picked up skimming this thread over time. Though I did think about this a lot in the past and concluded it was hard, and that it would need to replicate or reuse a secure consensus mechanism such as we now have (sections with splitting and node aging on good behaviour) etc.

Clearly that is to a degree in line with using Elders with plugins.

I don’t claim to have understood everything in this thread so have definitely missed stuff that might be important, but without a definitive description of how this can work, all in one place and explained to a level of detail like a white paper it is very hard to understand if this flies or not. Hence I’ve tried to figure out how it could work from first principles. But as noted, only for a couple of hours.

So far I can’t see how some key aspects could work, while I can imagine solutions to other parts.

Here are some blocks I’m stumbling over:

  • how are contracts discovered by the network? I can imagine creating a smart contract and sending it to the network and arriving at a section which knows what to do with it but this quickly runs into other problems, such as…

  • how does the network discover nodes which are capable of running a particular contract (or plugin)?

As I tried to solve questions like this I ran into other stumbling blocks.

One reason I’ve taken this approach is that I don’t understand how using Elders can work. For example, if you punish Elders in terms of network status you risk undermining the network by opening up attack vectors. You could punish them by blacklisting them from running a particular plugin as @Antifragile has suggested, but this may also introduce problems. How is this status stored, managed etc?

In fact I don’t see any way this can work that interferes with normal network functions, even having Elders running arbitrary plugins could be a problem in this respect so I’m concerned about that, but it’s not the only problem.

I concluded that using Elders with plugins doesn’t necessarily solve the problem, though as noted this may be because I don’t fully understand how the issues I’m stumbling on plan to be dealt with.

I also am not sure you need to use Elders at all, so long as you can solve the problems I’ve encountered early on. If so I think you can allow any willing nodes to participate, and minimise the workload of Elders by pushing most of the work out to the client requesting the computation: such as selecting nodes from those willing to run a particular contract, and wrapping this up with a DBC for payment and the parameters needed to specify exactly what is required. This would be sent to the participating nodes who reply to the Section which matches the hash of the contract specification published by the client. Results are validated according to the contract and the DBC can be used to reward Section Elders and each participating node which fulfilled the request according to the criteria. I think there may be scope to get the client to do more of this work (eg allocating the rewards and creating DBCs to distribute them), but this is a start.

So the problem seems to be about what I listed as stumbling blocks, and may not need to rely on Elders running the computation at all, which I think would be preferable. There’s a question of how to punish nodes that don’t behave, such as by spamming the contract, but I think that’s solvable. For example, the client could set a minimum node age and require any willing nodes to have put up a stake in the form of a DBC in order to be considered, which can be forfeit should a node not provide a result which meets some validation criteria.
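The addressing and stake/forfeit parts of this client-driven flow could be sketched like so; the hash function and all names are stand-ins for illustration, under the assumptions stated above:

```python
import hashlib

def contract_section(spec: bytes) -> str:
    """Participating nodes reply to the section matching the hash of the
    contract specification published by the client (stand-in hash here)."""
    return hashlib.sha3_256(spec).hexdigest()

def settle(stakes, results, validate):
    """Forfeit the stake of any node whose result fails the contract's
    validation criteria; the rest keep their stake and qualify for reward.
    `stakes` maps node -> staked amount, `results` maps node -> its result."""
    rewarded, forfeited = {}, {}
    for node, stake in stakes.items():
        if validate(results.get(node)):
            rewarded[node] = stake
        else:
            forfeited[node] = stake
    return rewarded, forfeited
```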

I think what I’ve realised from this exercise is the importance of a more detailed design paper, an RFC or even code. Maybe someone has thought deeply enough to do this? Until then I’m not sure we can know if a particular approach is feasible.

3 Likes