Distributed SAFE OS

Ahh gotcha!

that would be awesome…imagine TV boxes and workstations doing like a LAN boot with SAFE…that would be epic!

2 Likes

[One does not simply copy google][1]

On topic, off loading tasks I don’t want my computer to be doing currently onto the cloud (for a price, I’d imagine) is nice. An OS that effectively must always be online on the other hand, is not so nice.
[1]: In Iowa, A Field Becomes a Billion-Dollar Google Server Farm | Data Center Knowledge | News and analysis for the data center industry

2 Likes

The farmers can go on and off the network at any time. It’s beneficial for farmers to keep running 24/7 because that improves their rating. After temporary disconnections, their ratings quickly recover.

Remember that the distributed OS is like the internet; use it when you want to, and with any client you want to, the internet will still be there even when you turn off your smartphone.

Minimizing Churn in Distributed Systems – http://www.cs.berkeley.edu/~istoica/papers/2006/churn.pdf

1 Like

Wouldn’t having a distributed OS suck bandwidth like crazy and be utterly unfeasible? Also I’m not sure I like the idea of having an HTML/Javascript OS. That sounds like it would look horrible and be incredibly limited in what it could do. The graphics would suck to say the least. And how would you pull off dynamic effects like those found using compiz or something similar? But really I’m more concerned with the bandwidth issue. What would happen if you wanted to run your computer offline? Or if the internet cut out? Sounds like this whole proposal is reliant on a permanent connection to the internet.

3 Likes

:laughing:

And how exactly do you think any SAFE app works differently?

If the OS can do OS functions, such as booting and updates, without access to the internet, then that necessarily means that the OS must be able to do its functions locally, ergo distributed OS is optional, ergo local OS with cloud computing features.

A webapp and an OS have two completely different sets of expectations though. If the net goes out and I can’t access safebook, that’s one thing. However, if the internet shutting down means that I can’t even manage my local files, then that’s like tar and feather time right there.

3 Likes

JavaScript is just for running backend code. Like in Node.js:

“Node.js is an open-source, cross-platform runtime environment for developing server-side web applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on OS X, Microsoft Windows, Linux, FreeBSD, NonStop,[3] IBM AIX, IBM System z and IBM i. Its work is hosted and supported by the Node.js Foundation,[4] a collaborative project at Linux Foundation.[5]” – Node.js - Wikipedia

The frontend can be anything, like an iPhone app written in Objective-C or a Windows application written in C++.

And the bandwidth requirement for the farmers needs to be examined. For the users there is very little bandwidth requirement. When people use Google Search, for example, there may be a huge amount of processing going on inside the Google data centers to answer a single query, but very little bandwidth is needed on the user’s end.

How do you think you’ll connect to your SAFE OS? Your computer still needs some sort of OS (even a basic kernel) to connect to it. Of course you could manage local files, just save them from your SAFE drive.

Or better yet have the local copy be persistent with constant uploads to SAFE. If the internet goes out so what, it comes back on and it’s re-synced.

Does anyone here know how to perform parallel computing such as grid computing? My initial idea is to have the OS be a distributed JavaScript engine running on thousands and possibly millions of farmers.

A simple example with weak security: The processing is divided into tasks running up to a 1 minute deadline. Each task is a snippet of JavaScript code that the farmers execute. The farmers run two types of processing nodes: TaskManager and TaskEngine.

The TaskManager receives a task and wraps it in a JavaScript function that returns a unique code. Then it sends the wrapped task to a random TaskEngine node on the network. The TaskEngine node executes the code and returns the result (the unique code) to the TaskManager.

The TaskManager checks if the returned code is correct, and if it is then the TaskEngine is given a farming reward. If the code is incorrect or the deadline has been reached, the TaskManager sends the wrapped task to another random TaskEngine node on the network and repeats the attempt. If that also fails, the TaskManager returns an error code. If the task was run correctly, the TaskManager returns a success status.

All google did was reverse the hyperlinks of the web

Even easier on the SAFE Network because of all the public data that’s there already; much less work than using crawlers across all the servers of the world

here is an interesting tidbit from dirvine for an idea…

1 Like

Is it possible to improve the security in my previous example by using the already existing security in the SAFE network?

What is needed is that the TaskEngine node executes the task without directly accessing the SAFE network. Instead it should produce a list of changes to be made. In order to make sure that the list is correct, the TaskManager sends the same task to two random TaskEngine nodes, each of which returns the hash of its list. If the hashes are equal, the TaskManager sends a command to one of the TaskEngines, which sends the list to the SAFE network.

The SAFE network receives the list of changes to be made and calculates the hash, and then asks the TaskManager for the hash. If the hashes are the same, the SAFE network performs the changes in the list.

At the scale of stupid. As we know from big data, information at that scale behaves fundamentally differently than at smaller scales, a fact which Google has used and can continue to use to its advantage.

How is it going to be curated? I’m still seeing a way for google to take most of the money.

EDIT2: There probably has to be a cost to use the computing resources because they are easier to abuse than the SAFE data storage. See post #47.

The farmers need to earn a lot so that the network can run efficiently and grow. And a large enough network effect requires, I believe, that the network is free for users.

This is incompatible with the SAFE network, which has an inflation model based on recycling. To solve this, the SAFE network source code can be forked into a FreeSafeNet, and safecoin renamed to storecoin or something like that, with flat inflation like dogecoin, since there is no recycling because all resources in the OS, such as processing power and data storage, are free for the users.

EDIT: And to remove most of the incentive for spam and abuse, no storecoin reward can be given to app developers or content providers. And for the mail system users need to add allowed senders manually. And the farmers must be running on the FreeSafeNet only, without any possibility of external communications.

[This post is deliberately a bit provocative in order to push the envelope.]

How you going to do that?

Silly. That’s what this is.

SAFE isn’t even out yet. All the things folks are arguing about cannot even be tested yet. The developers are too busy writing code to engage in these discussions for the most part, yet folks keep getting their panties in a wad over this or that imaginary problem.

2 Likes

The distributed OS is something for the future. First there must exist a real version of the SAFE network to fork. And then a lot of development is needed to implement the distributed OS.

If my (slightly provocative) assumption is correct, then that makes it tricky for Maidsafe. They could make such a fork themselves, but that would risk angering the IPO participants and even the core developers. Or, if Maidsafe can legally modify the safecoin inflation model, then no fork is needed and they can develop the distributed OS themselves on top of the real SAFE network.

Yes, but how are you going to preclude machines/vaults from external communications?

The network has no control over who hooks to it or what they are doing… The security has to be built into the protocols.

At any rate, the whole discussion has been a waste of time. Fork away. And like I have said tons of times before, I don’t think the network needs a currency at all to succeed. Just a spam prevention mechanism of one flavor or another.

It isn’t about the money. It’s about the security and freedom.

3 Likes

If someone hacks the farming software to open up ports to external communications, then there is hardly anybody who can use it! Because the client API only contains calls to the SAFE network.

And if a client tries to write JavaScript code that calls the hacked external access, there is only a small chance that the code will randomly be executed on a hacked farmer (when most farmers are well-behaved).

And the random selection of which farmer should run the code is pseudorandom, which is actually deterministic. This can be achieved by taking the hash of the client JavaScript code (the task) and then using XOR distance for selecting TaskManagers and TaskEngines in a deterministic yet pseudorandom way.

The client JavaScript code needs to be encrypted with the client’s key and decrypted by the TaskEngine nodes who have the closest XOR distance. Some secure key exchange or something like that.

Backend applications (apps) are JavaScript files on the SAFE network that are started by sending them JSON objects as input.

As an example a search engine has a frontend app where users can enter text queries. The frontend app sends the query as a JSON object to the backend search engine app which is run by a TaskManager sending the search engine app JavaScript code plus the JSON object from the user to TaskEngines. The result is sent back to the frontend app as a JSON object.

Backend apps can call other backend apps, so for example the search engine can first have an app for query parsing who calls another app that does index lookup which then calls a third app that gets the search results that are sent back to the frontend app.
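The chaining described above can be modeled as plain functions from JSON to JSON. This is only a toy: each "app" below stands in for a JavaScript file on SAFE that a TaskManager would dispatch to TaskEngines, and the in-memory index replaces the real distributed one:

```javascript
// App 1: parse the raw query into search terms.
const parseQuery = (input) => ({
  terms: input.query.toLowerCase().split(/\s+/)
});

// App 2: look each term up in a (here hard-coded) index of doc ids.
const indexLookup = (input) => ({
  docIds: input.terms.flatMap(
    (t) => ({ safe: [1, 2], network: [2, 3] }[t] || []))
});

// App 3: turn the deduplicated doc ids into results for the frontend.
const fetchResults = (input) => ({
  results: [...new Set(input.docIds)].map((id) => `doc-${id}`)
});

// The frontend sends a JSON query; each backend app calls the next,
// and the final JSON object flows back to the frontend.
function search(queryJson) {
  return fetchResults(indexLookup(parseQuery(queryJson)));
}
```

In the distributed version each arrow in this pipeline would be a separate task handed to TaskEngines, with JSON objects as the only interface between stages.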

There is absolutely nothing that can be done to prevent somebody from changing the SAFE client in whatever manner they choose and running it on the network, aside from monitoring its observable behavior and upgrading or downgrading how much the network trusts it.

The code is open source. Anyone can modify it and run it.

The protocols are secure enough that it shouldn’t matter.

1 Like