A path to decentralized computing on SAFE

Yes, you do need to catch up on your reading :slight_smile:
Safecoin is distributed according to the Farming Rate using algorithms that DO NOT take time, or any time-related rate, into account.

Also, unlike bitcoin, the coin is used to pay for uploads and is thus recycled, which bitcoin never does. Every new bitcoin means everyone else owns a smaller proportion of the total supply. SAFE is not like that, because farming rewards add to the circulating supply and upload payments remove coins from it.

Without any uploads there are no farming rewards, because no farming occurs.

More uploads (coins returned to SAFE) mean more content to farm, which means more coins created, which means more coins available to pay for uploads, and the cycle continues.
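
A minimal sketch of that loop (the names like `Economy` and `farming_reward`, and all the numbers, are mine for illustration; the network's actual algorithm is more involved):

```rust
// Minimal sketch of the safecoin recycling loop described above.
// Names and numbers are illustrative, not the real network algorithm.
struct Economy {
    circulating: u64, // coins currently held by users
    reserve: u64,     // coins available for the network to issue as rewards
}

impl Economy {
    /// A farmer successfully serves a GET and wins a reward:
    /// coins move from the network reserve into circulation.
    fn farming_reward(&mut self, amount: u64) {
        let paid = amount.min(self.reserve);
        self.reserve -= paid;
        self.circulating += paid;
    }

    /// A user pays to upload (PUT) data: coins leave circulation
    /// and return to the reserve, ready to be farmed again.
    fn upload_payment(&mut self, amount: u64) {
        let paid = amount.min(self.circulating);
        self.circulating -= paid;
        self.reserve += paid;
    }
}

fn main() {
    let mut e = Economy { circulating: 1_000, reserve: 9_000 };
    e.upload_payment(100); // uploads recycle coins back to the network...
    e.farming_reward(80);  // ...which funds new farming rewards
    println!("circulating = {}, reserve = {}", e.circulating, e.reserve);
}
```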

Also data usage stats show that people access newer content at many times the rate of older content.

Also, you seem to have missed that the payment algorithms are designed for the home user who is supplying spare resources, so for them electricity, disk costs and the ISP connection are already paid for. Running costs are therefore minimal, with perhaps a tiny amount of extra electricity. The home user can switch off for the night, as there is no need to run 24/7 in order to farm and earn.

So your analysis of [quote=“bluebird, post:41, topic:4627”]
So the more farmers there are, the lower the reward per hour to each farmer. Below a certain hourly rate of reward, the costs of electricity, etc, exceed the reward and the farmer has to shut down. Those costs are lower in China. Therefore, you would have, in your example, nearly 100% of the processor farms being in China, because the reward would be below costs for those outside China, and therefore only hobbyist farmers (who are doing it for fun) would exist outside China.
[/quote]

is based on incorrect assumptions about how things work.

6 Likes

Thanks for the explanation!

2 Likes

You're welcome, @bluebird.

Let's also take into account the fact that people's computers are idling anyway, storage included; it makes sense to run SAFE and get some safecoin for farming and giving resources, and of course to use it through a variety of DAPPS and services.

While it may be a source of income for some entrepreneurial activities, it will certainly be additional income to offset general costs for all those :ant::ant::ant:'s on tighter budgets, or for people who are just trying to be a bit more cost-effective.

SAFE will be quite easy to use for all kinds of people with a significant variety of needs, wants and expectations. Get ready, the Ants are coming… :ant:

1 Like

These lectures are good; watching them now:

Key idea: One-to-one mapping of files to XOR addresses, and machines to XOR addresses. Therefore the routing is unambiguous and you don’t need a lookup table of addresses to machines (or even a broadcast request) as compared to, say, TCP/IP.
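
To make the routing point concrete, here is a toy sketch of XOR distance; `closest_node` is a made-up helper, and `u64` addresses stand in for the much wider addresses the real network uses:

```rust
// Toy illustration of XOR addressing: with every file and node mapped into
// the same address space, "closeness" is just the XOR of two addresses
// interpreted as an integer, so the node responsible for a file is found
// by comparison alone, with no lookup table needed.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// Return the node whose address is XOR-closest to the target address.
fn closest_node(nodes: &[u64], target: u64) -> Option<u64> {
    nodes.iter().copied().min_by_key(|&n| xor_distance(n, target))
}

fn main() {
    let nodes = [0x1A2B, 0x9F00, 0x1A00, 0x7777];
    let file_address = 0x1A2F;
    // Every participant computes the same answer independently,
    // which is why routing is unambiguous.
    println!("{:x?}", closest_node(&nodes, file_address)); // Some(1a2b)
}
```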

Aha!

[EDIT] I should amend what I wrote above as follows: “Each file and machine has a unique XOR address.” That doesn’t imply that one could deduce the (IP) address of a machine from its XOR address, or reconstitute a file from its XOR “address”. Next step is to understand how machines find each other. There’s a tree diagram at lecture 1.2 that left me staring blankly and which will require revisiting.

3 Likes

Sorry, I didn't read the rest of the posts, and I apologize if this was already brought up.

Did anybody read The language of the network | Metaquestions?

There is a line in it that says “Future, ComputeNode to handle computations (using zk-snark etc.)”.

1 Like

Brilliant resource. I hope these guys can do another series on structured data, programmable safecoin, etc. Maidsafe should definitely fund more of this; I bet they are already using these talks to attract partners :wink:

2 Likes

Looking once again at the original post, a couple of thoughts occur to me:

  1. The example calculation is small in proportion to the total computation, such as hashing, performed by the SAFE computation layer (CL). Since the CL has to be paid for its services, the client would be better off doing small calculations locally.

  2. For calculations that are large compared to the CL overhead, you have the problem of privacy: a large amount of calculation will reveal clues about its purpose, and perhaps its originator, to the CL node doing the calculation.

  3. For calculations where privacy is unimportant, the client would be better off renting time on some cloud service, which doesn’t have the CL’s computational overheads.

  4. How does one conceal the purpose and ownership of some arbitrary computation from the CL?

From the above considerations, I conclude that decentralized computing on SAFE is either uneconomic (because a small job is cheaper to do elsewhere) or impossible (because a large job is not private). A rough sketch of the break-even logic is below.
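
A back-of-envelope version of the argument in points 1–3; `cheaper_on_cl` and all the unit prices are invented for illustration:

```rust
// Back-of-envelope model: outsourcing to a computation layer only pays off
// when the job is large relative to the CL's fixed overhead (hashing,
// consensus, payment). All numbers are made up.
fn cheaper_on_cl(job: f64, cl_overhead: f64, cl_unit_price: f64, local_unit_price: f64) -> bool {
    // cost on the CL  = fixed overhead + job size * CL unit price
    // cost locally    = job size * local unit price
    cl_overhead + job * cl_unit_price < job * local_unit_price
}

fn main() {
    // A tiny job never covers the fixed overhead...
    println!("{}", cheaper_on_cl(1.0, 10.0, 0.5, 1.0)); // false
    // ...while a large one can, if the CL's unit price is lower.
    println!("{}", cheaper_on_cl(100.0, 10.0, 0.5, 1.0)); // true
}
```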

I would like to be proven wrong, though.

I posted an article a few months ago about MIT's research on homomorphic encryption, which could solve this problem. It enables things like searches with real results where not even the search engine can tell what you searched for, as well as distributed computing. I'll see if I can find it later.

Except when you rent from AWS or something, you're paying for the machine, computation time, maintenance, power, staff, etc. When you buy from me (via SAFE) you pay for my electricity and a small bonus for my machine. I think it could be offered much more cheaply than AWS.

3 Likes

But that’s only calculation without access to SAFE. What happens if the script involves doing a PUT on the SAFE network? Should all three nodes perform the PUT?

I have an idea: two random nodes are given the script and, instead of accessing SAFE directly, each produces a list of the actions to perform on SAFE. A group of nodes then compares the lists produced by the two computing nodes, and if they are equal the changes are actually applied to SAFE.
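
Something like this sketch, where `Action` and `apply_if_agreed` are invented names for the compare-then-apply step:

```rust
// Sketch of the idea above: computing nodes don't mutate SAFE directly;
// they emit a list of intended actions, and the change is applied only if
// independently produced lists agree. All types here are illustrative.
#[derive(Debug, PartialEq, Clone)]
enum Action {
    Put { address: u64, data: Vec<u8> },
    Delete { address: u64 },
}

/// The group applies the actions only if both computing nodes
/// produced identical lists.
fn apply_if_agreed<'a>(list_a: &'a [Action], list_b: &[Action]) -> Option<&'a [Action]> {
    if list_a == list_b { Some(list_a) } else { None }
}

fn main() {
    let a = vec![Action::Put { address: 0xAB, data: vec![1, 2, 3] }];
    let b = a.clone();
    match apply_if_agreed(&a, &b) {
        Some(actions) => println!("consensus, apply: {:?}", actions),
        None => println!("mismatch, discard"),
    }
}
```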

That's hardly a disproof; I think otherwise. AWS has economies of scale, being able to purchase warehouses full of identical servers, and it only needs to secure its periphery rather than securing each server against all its AWS siblings.

I would indeed like to see the article you mentioned.

Article here; sorry, I passed out last night.

I’ll do some math on AWS numbers later as well.

1 Like

I am certain that, in principle, it is possible to implement a general-purpose computer on SAFE, by which I mean that the actual computer is in XOR space, and not just some meatspace/metalspace node.

The simplest view of a general-purpose computer is that it is just a collection of NAND gates, registers, and a way of moving bytes between the registers.

So the crudest sort of SAFE computer could be a set of structured data items (the registers and rules), with client computers moving bytes between the registers according to a program.

It would be relatively slow, but it wouldn't require any new technology beyond an app to harness clients as the "transport layer", and no computing layer within the network; a toy model is sketched below.
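
A toy model of such a machine, with invented names (`Op`, `step`) and a local map standing in for the structured data items that would hold the registers:

```rust
// Toy model of the "crudest sort of SAFE computer": registers live as
// structured data in the network, and clients execute a program by
// moving/combining bytes between them. Names are illustrative only.
use std::collections::HashMap;

type Register = u64; // stands in for the XOR address of a structured data item

enum Op {
    Move { from: Register, to: Register },
    Nand { a: Register, b: Register, out: Register }, // NAND is universal
}

/// A client steps the machine by rewriting structured data items.
fn step(registers: &mut HashMap<Register, u8>, op: &Op) {
    match op {
        Op::Move { from, to } => {
            let v = registers[from];
            registers.insert(*to, v);
        }
        Op::Nand { a, b, out } => {
            let v = !(registers[a] & registers[b]);
            registers.insert(*out, v);
        }
    }
}

fn main() {
    let mut regs: HashMap<Register, u8> = HashMap::new();
    regs.insert(1, 0b1010);
    regs.insert(2, 0b0110);
    let program = [Op::Nand { a: 1, b: 2, out: 3 }, Op::Move { from: 3, to: 1 }];
    for op in &program {
        step(&mut regs, op);
    }
    println!("r1 = {:#010b}", regs[&1]);
}
```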

But, despite its relative slowness, it might be extremely useful:

  1. It solves the privacy problem, since the calculation seen by each node is so atomically small.

  2. It could be used by developers, in conjunction with a scripting language, to do small-ish (but much bigger than the addition that @polpolrene gives) computational tasks that cannot live anywhere else but on the network.

  3. Since it requires no computing layer, it is certain to be implemented much sooner. The computing layer could be years away.

  4. EDIT: That scripting language could serve as a smart contract language.

1 Like

With one exception: the owner of that virtual computer could see what’s being done.

The remedy would be that, rather than the virtual computer existing as a rental service (my initial assumption), a client would have the option to create a new instance of it from a template.

While I like this idea, and had never thought of making an "app-based CPU", I think the overhead would make it terribly slow. How would you send "you're being a relay right now" vs "hold this in a register" vs "do this single-op calc" without the messaging being five times the size of the data actually being managed? Or do you not, and that's just part of how it has to be (until compute is built into the network)?

Indeed; the overhead might be, say, 50 times the size of the data being managed, yet still small in absolute terms. This smart contract, for example, is less than 500 bytes: http://pastebin.com/raw/REFtam2m

1 Like

Alright, not efficient, but it would work. The next issue becomes: how many people have to do this and give the same result for it to be considered "the answer"? I'm playing devil's advocate here.

You get a contract: "do an XOR calc on the data held in eax @ node x and ebx @ node y, and return the result." You then have three parties who can change/manipulate the data. You can't rely on multiple people doing the calc from nodes x and y alone, so you have to have x′ and y′ to verify against. Maybe rely on SAFE's data handling to make sure the data doesn't change? If you could do that, then you would only need a few people to run the calc using the data from x and y to verify each other.
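
For illustration, a k-of-n agreement rule might look like the following sketch; `accepted_result` and the numbers are hypothetical, and how large k must be is exactly the open question:

```rust
// Sketch of redundant execution: with n independent runs of the same
// calc, accept a result only when at least k identical copies arrive.
use std::collections::HashMap;

fn accepted_result(results: &[Vec<u8>], k: usize) -> Option<Vec<u8>> {
    // Count identical result values across the independent runs.
    let mut counts: HashMap<&Vec<u8>, usize> = HashMap::new();
    for r in results {
        *counts.entry(r).or_insert(0) += 1;
    }
    // Return any value that reached the agreement threshold.
    counts
        .into_iter()
        .find(|&(_, n)| n >= k)
        .map(|(r, _)| r.clone())
}

fn main() {
    // Three of four independent executions agree.
    let results = vec![vec![42], vec![42], vec![7], vec![42]];
    println!("{:?}", accepted_result(&results, 3)); // Some([42])
}
```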

David frequently mentions zk-SNARKs when we talk about SAFE computations. I know nothing about them, and they may address some of these issues.

I don’t know how many people, or whether that is the best form of redundancy.

There's also the question: can the virtual computer own anything? For example, could it create throwaway accounts that could receive and pay safecoin? And would such throwaway accounts be concealed from the "transport layer" nodes?