That’s why you need to create consensus in your close group. If 4 people do the calculation and only one comes up with 1599 while the others say 1600, its rank might go down. The same already happens when a node misbehaves in a group.
This is detected either by comparing the results and returning the majority vote (2/3 or 3/4 were suggested), or through cryptographic proof as with zkSNARKs.
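To make that concrete, here is a minimal sketch of such a majority vote; the names and types are made up for illustration and are not actual SAFE code:

```rust
use std::collections::HashMap;

/// Accept a result only if at least `threshold` group members report it.
/// Purely illustrative; names and types are hypothetical.
fn majority_result(results: &[u64], threshold: usize) -> Option<u64> {
    let mut counts: HashMap<u64, usize> = HashMap::new();
    for &r in results {
        *counts.entry(r).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|&(_, count)| count >= threshold)
        .map(|(result, _)| result)
}

fn main() {
    // Three nodes say 1600, one says 1599: with a 3-of-4 threshold the
    // group accepts 1600, and the dissenting node's rank might go down.
    let group_answers = [1600, 1600, 1599, 1600];
    assert_eq!(majority_result(&group_answers, 3), Some(1600));
}
```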
That’s where the network comes in. The network should produce a reliable output for a computation, so it will have to provide a mechanism to independently verify the result you output. It should be sufficient that if verification of your result fails, you are not rewarded; then you can enjoy wasting your resources producing nonsense.
OK, sorry. I read your post, but somehow it just passed through my brain without being understood. Obviously I’m too tired right now.
Make the nodes compete time-wise and penalize slower nodes.
Another issue would be how to calculate what a computation should cost and how to price it for the user requesting a computation. Ethereum made a lot of choices in this regard that are worth reviewing.
The main difference from Ethereum is that computations there are done by every node, which is expensive and slow.
It might be possible to re-purpose some Ethereum code to work in the context of SAFE instead of a blockchain. That might help address the question of how to price (or at least tabulate the overall CPU usage of) computations.
zkSNARKs are interesting. libsnark is beyond my comprehension, but this paper explains the fundamentals in a more understandable fashion: https://people.xiph.org/~greg/simple_verifyable_execution.txt
If SNARKs can be used to let users trust a computation, and if each assembly command can be tallied along with a target time for each command to execute (averaged over the whole network), then it may be possible to have distributed computation without any sort of voting/consensus/redundancy. The user trusts the output, trusts the tally of operations executed, and pays according to the going rate for the number of operations executed, times some multiple to account for faster/slower CPUs. That would go a long way toward making this affordable and cost-effective.
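As a rough sketch of that pricing idea; the rate and multiplier here are invented for illustration, not anything the network defines:

```rust
/// Price a verified computation from its tallied operation count:
/// ops executed, times the going rate per op, times a multiplier that
/// accounts for the executing CPU's speed relative to the network average.
fn computation_price(ops_executed: u64, rate_per_op: f64, speed_multiplier: f64) -> f64 {
    ops_executed as f64 * rate_per_op * speed_multiplier
}

fn main() {
    // e.g. 1 million tallied ops at a hypothetical 0.000001 coin/op,
    // run on a CPU 1.5x faster than the network average.
    let price = computation_price(1_000_000, 1e-6, 1.5);
    println!("price: {price} safecoin"); // 1.5
}
```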
This is what I’m talking about! At least I know it’s possible for SAFE to have p2p computing. The only way I see SAFE websites going mainstream is if they use the same kind of process as the current web. And I am sure that Single Page Applications are a lot more limited and inflexible than having backend code computed by the SAFE network.
I hope @erick is still working on this.
I must have missed this one.
I don’t know SNARKs, but if they only need one computer I would not trust them.
3/4 consensus is good for me.
There would be two pools, one for storage and one for computation (like paying per 1 MB of storage, or per 1 million CPU cycles; it should probably be more).
First, the script or code should be sent uncompiled.
A fee is taken from the computation pool for the compilation before doing the calculation, with an additional fee for bigger compilations based on CPU cycles.
Then the calculation is performed, and a fee is deducted from the pool per, let’s say, 1 million CPU cycles.
If the pool empties before the computation ends, the computation stops, the coin is not refunded, and no result is returned. Participants still get paid.
At least 3 nodes should have completed 1 million CPU cycles before the pool is reduced.
If 3 matching results are returned, the result is delivered, and the 4th node that remains gets a timeout to finish the calculation and also get paid if its result is good.
That’s my way…
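For what it’s worth, here is a minimal sketch of that pool-draining scheme, assuming hypothetical names and a flat fee per 1 million cycles:

```rust
/// Hypothetical computation pool that the client pre-funds and the
/// network drains one block (1 million CPU cycles) at a time.
struct ComputePool {
    balance: u64,       // coin units the client locked for the job
    fee_per_block: u64, // fee charged per 1 million CPU cycles
}

impl ComputePool {
    /// Charge one block of cycles. Returns false once the pool is empty,
    /// at which point the computation stops, nothing is refunded, and no
    /// result is returned; the participating nodes keep their fees.
    fn charge_block(&mut self) -> bool {
        if self.balance < self.fee_per_block {
            return false;
        }
        self.balance -= self.fee_per_block;
        true
    }
}

fn main() {
    let mut pool = ComputePool { balance: 10, fee_per_block: 3 };
    let mut blocks_run = 0;
    while pool.charge_block() {
        blocks_run += 1; // ...each node runs its next 1M cycles here...
    }
    println!("computation ran for {blocks_run} blocks"); // 3
}
```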
Then you shouldn’t trust encryption, or crypto hashes, because they only use one computer. zkSNARKs are mathematically based on the same principles and proven in the same way.
Maybe we should not use consensus at all for the SAFE Network because everything is encrypted.
I’ll look at the whitepaper and see if it will change my mind.
I believe zkSnark probably provides other ways of doing some of the things we currently use consensus for.
I disagree on the “no results returned”
The person paid for xxx cycles, so they should have the state of their computation returned so that it can be continued when they have enough coin.
It’s like paying a house cleaner for x hours of cleaning, but after the x hours there is still more to do. Do you think it’s right for the cleaner to mess up the house again because the job was not completed in the hours paid for?
Returning the state? Yes, that would be better, but I was assuming that the client is responsible enough to buy enough CPU cycles before asking for the job, for the following reason.
Saving the state to continue later is a burden for the vault and an effort to develop. Is it worth developing or not? Maybe not now; that feature could be added later.
Often it is impossible to know how much time a computation will take. There can be many factors in a modeling system that make the time variable.
Also, some students doing research do not have all their funds at once, and so wish to do modeling which spits out values as it goes and stops when funds dry up. Then, as more funds arrive, they continue the model to get more values. The idea is that the interim values allow them to continue writing up their thesis.
It would be a very bad implementation that says “not enough funds, then stuff you, you get nothing”.
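Something like this sketch is what I have in mind: a model that emits interim values and returns a resumable state when funds run out. The toy model and all names here are hypothetical:

```rust
/// Hypothetical resumable job: the model emits interim values as it
/// goes, and when funds dry up its state is returned so the client can
/// continue later instead of getting nothing.
struct ModelState {
    step: u64,
    value: f64,
}

fn run_until_funds_exhausted(
    mut state: ModelState,
    mut funds: u64,
    cost_per_step: u64,
) -> ModelState {
    while funds >= cost_per_step {
        funds -= cost_per_step;
        state.step += 1;
        state.value += 1.0 / state.step as f64; // stand-in for real model work
        println!("step {}: {}", state.step, state.value); // interim value
    }
    state // resumable state, returned instead of being thrown away
}

fn main() {
    // Run until the first 3 coins are spent...
    let paused = run_until_funds_exhausted(ModelState { step: 0, value: 0.0 }, 3, 1);
    // ...then, when more funds arrive, pick up exactly where we left off.
    let _finished = run_until_funds_exhausted(paused, 2, 1);
}
```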
About the time, I understand. For CPU cycles, the architectures are close enough to each other to allow a good estimation.
Something I realized here is that the amount of memory needed is not talked about much, and it should be taken into consideration too. A job may need 1 GB to do the computation while the final result is only 256 bits for a final hash. I don’t see a vault sending 1 GB of state for an unfinished job to be continued later.
Ummm, if I had computing done, I would want the results, not a hash.
(the hash is for proving the computation from multiple machines)
That was just an example of a final result; it could be anything else. But the full memory needed to reach the final result is not needed at the end.
Added to that, I just thought of something: saving the state and having consensus about that unfinished job’s state is impossible to do, because the job will not pause at exactly the same point, the architectures are different, the OS is different, the memory allocation is different, and so on…
Even consensus on a finished state, I don’t think can be agreed upon. Maybe it’s possible with SNARKs, but I’m still looking into them. So far they do what people here are expecting, and even more, like privacy for the variables transmitted. But for me it’s still hard to believe that it is possible, and that it is really secure with zero trust as expected. Still studying it.
But for the returned memory state, it’s a price that should be considered: something like locking a certain amount of SafeCoin in the pool based on the reserved virtual memory asked for by the job. And if the job can’t be finished because there isn’t enough SafeCoin available, charge the locked amount and return the state to be continued later.
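A toy calculation of what that lock could look like; the rate and names are made-up placeholders:

```rust
/// Hypothetical up-front lock sized by the virtual memory reserved for
/// the job, so the vault is paid for returning the state even if the
/// pool runs dry mid-computation.
fn state_return_lock(reserved_mem_mb: u64, coin_per_mb: u64) -> u64 {
    reserved_mem_mb * coin_per_mb
}

fn main() {
    // A job reserving 1 GB at a made-up rate of 2 units per MB locks
    // 2048 units against a possible state return.
    assert_eq!(state_return_lock(1024, 2), 2048);
}
```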
Would it help if processors were “grouped” so as to be identical? Out of thousands or millions of candidate nodes and a few hundred makes and models, you get sufficient numbers in each class. Then you can forget about ranking individual processors by speed, and you can have a simpler reward algorithm. Performance will still vary according to whether the processor is being called upon to perform local tasks, so the owner of the processor has an incentive to participate only when the computer is idle, which can be determined algorithmically, like a screen saver.
I just thought of an objection to this whole idea:
It has been argued (and I accept) that we won’t see Chinese SAFE farmer farms, because they don’t have enough connectivity to the rest of the world.
That’s the situation with vaults, because fast access is a dominant factor.
But for distributed computation, speed of access is much less important than the speed of the processor.
So might we see Chinese processor farms appearing in order to glean what earnings are available for distributed computing? Because if we do, then I would abandon the idea of being such a supplier.
Perhaps one of the beauties of SAFE is that the playing field is somewhat leveled. There are only minor advantages to higher bandwidth.
For farming, the chunks are spread out across XOR space. So if the average store of chunks in vaults is 2 GB, then it does not matter whether you are providing 1 TB or 8 GB; your vault will hold 2 GB (on average, obviously).
Computing will be similar, in that the compute jobs will be spread across XOR space, and so each suitable computer will have an equal opportunity. Now, for compute there may be some other factors, like minimum capability, which could see IoT/phone/ARM devices excluded from some compute jobs.
Let’s say the Chinese compute farms set up 100,000 computers per farm and there are 100 such farms, while there are 100 million nodes (and growing) in SAFE accepting compute jobs. Then those 100 farms will get on average 1/10th of all compute jobs; in other words, only in proportion to the number of nodes they supply.
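The arithmetic, spelled out with the numbers above:

```rust
fn main() {
    let farm_nodes: u64 = 100 * 100_000; // 100 farms of 100,000 machines
    let total_nodes: u64 = 100_000_000;  // all nodes accepting compute jobs
    let expected_share = farm_nodes as f64 / total_nodes as f64;
    println!("farms' expected share of compute jobs: {expected_share}"); // 0.1
}
```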
So what is the problem? I would see those Chinese farms helping the world of SAFE. The home farmer/compute node still receives their fair share of compute jobs.
Saying that Chinese farms (farming/compute) will adversely affect everyone else’s experience and earning ability is like saying all the other nodes in the world are affecting your earning ability.
The only time server/compute farms adversely affect you/me/SAFE is when they try to disrupt the network. Otherwise they only receive their proportional share of the market, and in return they strengthen the network, which helps all of us. If the “farms” pulled out, they would simply be replaced by home folk who decide the earnings are looking good. If earnings per node go up, then more people will add their home node. So in all likelihood, the difference in earnings between having these server/compute farms and not having them is a whole lot smaller than one would expect given their size.
Admittedly I still need to read the original papers and to grok XOR space.
But… I understand that there is currently a cap on the number of Safecoins to be created. So (correct me if I’m wrong) they will be created at some fixed and diminishing rate, by analogy with Bitcoin. So the more farmers there are, the lower the reward per hour to each farmer. Below a certain hourly rate of reward, the costs of electricity, etc., exceed the reward and the farmer has to shut down. Those costs are lower in China. Therefore, in your example, you would have nearly 100% of the processor farms being in China, because the reward would be below cost for those outside China, and so only hobbyist farmers (who are doing it for fun) would exist outside China.
[EDIT] Unless the price of Safecoin against other currencies goes up at least as fast as the reward in Safecoin per hour diminishes. The factors that would determine that are not clear to me.