Bytemaster - BitShares - Daniel Larimer opinion on MaidSafe

http://bytemaster.bitshares.org/article/2015/01/30/Can-MaidSafe-Decentralize-the-Internet/

This is a nice article by Daniel Larimer of BitShares; the post contains his opinions on the MaidSafe endeavour.

3 Likes

His tone seems a bit pessimistic, as if in disbelief that such a network can come to fruition.

1 Like

His main argument rests on outdated knowledge, namely that you pay for a GET request. It doesn’t work like that (anymore): people pay for PUTs; GETs are free. So the inefficiency he’s talking about doesn’t exist.

That really only leaves his compliments to the team. :smiley:

6 Likes

Judging by many events this past week and last weekend, many highly skilled engineers have made it clear that they are consistently checking in on MaidSafe.

I’m glad to have read this despite some mistakes. Don’t forget you can definitely let Daniel know.

4 Likes

Great to see a reply from someone who’s experienced in building crypto software. I cannot prove him right or wrong, because I have never done any C++ coding. But to reply to one of his points:

We know from Netflix that the cost of providing 1GB of data is $0.01 and falling. This means that you owe 1000 people each $0.00001 for the MB of data that you downloaded from them. This is well below the level of what Bitcoin considers dust and would be uneconomical to process even in a very centralized way.

I think these micropayments are one of the best things ever. Bitcoin is way too expensive, burning something like $600 million a year on proof of work. Bitcoin transactions cost money; with Safecoin, transactions are free, no matter if you pay someone 1 coin or 0.00000000000055 coin, and a payment takes less than one second. You do need more than 32 nodes for a payment, and as far as I know some other groups have to agree as well. But what would the cost of a transaction like that be? Let’s say 100 nodes, each doing some simple hash calculations for a second. What would the cost of that be? It’s almost free!
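To put a rough number on that, here is a back-of-envelope sketch. Every figure (node count, per-node power draw, electricity price) is an assumption for illustration, not a measured property of the network:

```python
# Rough estimate of the energy cost of one group-consensus transaction.
# All constants below are illustrative assumptions, not measured values.

NODES = 100            # nodes involved across the consensus groups (assumed)
SECONDS = 1.0          # time each node spends on the transaction (assumed)
WATTS_PER_NODE = 30.0  # marginal power draw of a commodity node (assumed)
PRICE_PER_KWH = 0.15   # electricity price in USD per kWh (assumed)

# watt-seconds -> watt-hours -> kilowatt-hours
energy_kwh = NODES * SECONDS * WATTS_PER_NODE / 3600.0 / 1000.0
cost_usd = energy_kwh * PRICE_PER_KWH
print(f"~{cost_usd:.6f} USD per transaction")  # ~0.000125 USD
```

Even with generous assumptions the marginal cost lands around a hundredth of a cent per transaction, far below what Bitcoin treats as dust.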

In a future article I will outline alternatives that will achieve most of what MaidSafe is attempting in a far simpler and more economical manner.

I think this is great; let him come up with some alternatives. Again, for the reasons I described above, I don’t see the economic problems he raises. Safecoin and Open Transactions are the only systems in crypto that can do micropayments in a very fast, secure and cheap way. Once the blockchain brothers see it, they will believe it, is my guess :smile:

Now let’s talk about speed. I made this point in some other topics as well.

I predict that contrary to expectations, the MaidSafe network will either be more expensive than alternatives or slower than alternatives. I highly doubt they can be both cheaper and faster given the overhead of being censorship resistant and decentralized. I hope they prove me wrong because I want their product to succeed.

Maidsafe is not like Google or Amazon AWS, where you make one connection to a server. So even if Maidsafe is 3 times slower than the regular internet, overall speed could still be higher, because a node may have 4 connections into the network. 3 times slower means about 33% of the speed of the regular internet, but if you use this speed in parallel it becomes 4 times that 33% (if it’s done 100% effectively), so you end up somewhere over 120% speed. And RUDP can be faster as well. As far as I know, YouTube uses a TCP connection in the browser. RUDP, combined with the RAID-like scheme where only 28 of the 32 chunks are needed to re-create the data, could be faster. And don’t forget about caching.
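The parallel-connection arithmetic above can be written out explicitly. The figures (3x slowdown per connection, 4 parallel connections, perfect efficiency) are the post’s illustrative assumptions, not measurements:

```python
# Sketch of the parallel-download argument: even if each SAFE connection
# runs at a third of a direct client-server link, several connections in
# parallel can beat the baseline. All numbers are illustrative.

baseline_speed = 1.0                   # direct client-server speed (normalized)
per_connection = baseline_speed / 3    # each SAFE connection at 1/3 speed
parallel_connections = 4
efficiency = 1.0                       # assume perfect parallelism (optimistic)

effective = per_connection * parallel_connections * efficiency
print(f"effective speed: {effective:.0%} of baseline")  # 133% of baseline
```

With less-than-perfect efficiency the result shrinks proportionally, which is why the post hedges at “somewhere over 120%”.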

7 Likes

And even the payment for PUTs is not done by micropayments, but by a data PUT cap/tally per “account”.

His main objections seem to be tied into this one point which he took out of the SafeCoin Paper regarding the possibility of content providers using micropayments sometime in the future. This is an idea which is far from fundamental to the SAFE network function. It may or may not be feasible to do it ultimately, but that has nothing to do with the basic network model.

We’ll see if the latency issue is a factor on the network as a whole, and the processing of safecoin in particular, in testnet 3 and beyond. That is something that I wonder about, but I can also see reasons why it might not be as bad a problem as one might think and might even be much faster.

I respect Daniel and am looking forward to seeing any other articles he puts out on the subject of other alternatives. But I don’t think he’ll grok Maidsafe unless he digs in a lot further.

3 Likes

Just thought it’s funny he wants to discuss maidsafe on bitsharetalk instead of in the maidsafe forum.

2 Likes

Interesting read. It may contain some outdated info (pay per GET etc.) and slight misunderstandings, but he was in contact well before BTC was widely known. He is right that it is a challenge, and a reliable UDP protocol is a hard thing to get right (that is why RUDP is becoming a new project, CRUX, headed for the Boost libraries). We have Boost engineers on the team working on it, and Christopher Kohlhoff, who designed Asio, did the initial RUDP; so it’s not just us doing it, it’s us getting the right people involved as well.

I think the issue of speed etc. can only be measured, but there is a clear point about the approach: even if every tx takes 1 sec (per group) and there are millions of groups, then those millions of tx will happen in close to a second (say tx throughput gets close to a million messages per second, per channel). Again, measurement will help. Then if safecoin takes the integer route, it will be one message, so …

In terms of micro-transactions, the easy solution for us in the short term is as previously outlined: pay in safecoin and the network will consume these in chunks as you put data (so you effectively buy X space at the cost at that time). This allows transactions to look like micro-transactions, as the manager group can say: OK, that is 10 million PUTs, we have deducted a safecoin. Further testing and algorithm refinement will make even micro tx simple; it is one thing I worry less about (we never used to pay a woodsman for every axe swing :slight_smile: )
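A minimal sketch of that “buy space up front” accounting. The `ClientAccount` class is hypothetical, and the rate of 10 million PUTs per safecoin is just the illustrative figure from the post, not a spec:

```python
# Toy model: whole safecoins are converted into a PUT allowance, which
# the (hypothetical) manager group decrements per stored chunk, so no
# per-chunk micropayment ever takes place.

PUTS_PER_SAFECOIN = 10_000_000  # illustrative rate from the post

class ClientAccount:
    def __init__(self) -> None:
        self.put_balance = 0

    def pay_safecoin(self, coins: int) -> None:
        """Convert whole coins into PUT allowance at the current rate."""
        self.put_balance += coins * PUTS_PER_SAFECOIN

    def store_chunk(self) -> bool:
        """Deduct one PUT; refuse if the allowance is exhausted."""
        if self.put_balance == 0:
            return False
        self.put_balance -= 1
        return True

acct = ClientAccount()
acct.pay_safecoin(1)
acct.store_chunk()
print(acct.put_balance)  # 9999999
```

The point of the design is that only the occasional whole-coin payment touches safecoin at all; everything else is a local tally.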

So the goal is to provide speed and privacy as well as decentralisation (speed does not need to be there on day 1, but it’s a factor, and I believe there is no way a centralised system will keep up as we grow the logic out, not without huge security and privacy issues). The key is to decentralise every single algorithm, and that is extremely hard (we have mostly done this). It is akin to factoring every equation down to its base algorithm.

I think it is like back in school: you can have an enormous equation with many variables, and that is generally what a program does, it implements an algorithm. So you can grab nodejs or python and quickly implement the huge algorithm, or take time and do it in C++, and both are very wrong. First the algorithm needs to be reduced, and you may find you have Z = 5X + 3Y, for example. People can code a massive version of that equation unfactored. This is the difference between fast to market and right. I prefer to factor as much as possible. Whatever happens, the number of bugs is roughly 3 per 1000 lines of code, so less code == fewer bugs; but more importantly, it means you have found the more core algorithm.
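The Z = 5X + 3Y point in code form: both functions compute the same result, but only the factored one exposes the underlying algorithm (a toy illustration, of course):

```python
def z_unfactored(x, y):
    # the "massive version" of the equation: correct, but the core
    # relation is buried in redundant terms
    return 2 * x + 3 * x + y + y + y

def z_factored(x, y):
    # the reduced form: same result, and the algorithm is now obvious
    return 5 * x + 3 * y

assert z_unfactored(7, 11) == z_factored(7, 11) == 68
```

The factored version is also the one a reviewer can audit at a glance, which is the “simple code, easy to work with” payoff described below.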

When you do factor down, another thing happens: people can look at the results and say, oh, that is simple, simple code and very easy to work with. Then you win all round. Do not ignore the market though; it is essential. So strike a fine balance of correctness versus speed (in our case testnet measurements tell us if we got the algorithm right, no matter how un-factored [it’s not even a word :slight_smile: ]), avoid coding yourselves out of upgrade paths, and you are good to go.

This balance is the key component of a viable product (minimal or not), and I feel we are there now with the network algorithms. We have simplified them down, and our complexity is imagining the network running to see how they interact; these test-nets are key. Some things cannot be measured, like ripples in the sea: if your system is designed along natural rules then you create immeasurable outputs. So you need to step further back and look at the network as a human sees it when many other humans use it.

In the last 6-7 years I have not been able to get close to the code (running around begging for money to pay for the engineers’ time to implement a dream, without being able to speak much with me). Since Aug/Sep I have, and it is amazingly good, as there have been some step improvements and reductions in the code, and this week I intend to present even more. This happens in parallel with the testnets though, as we balance launch and correctness.

tl;dr I spoke with Dan years back and more recently in person; he is very sharp and decent. I think he is more focussed on economics whereas I am more on decentralisation, so we will differ in opinion (the route to improve our world) but not in respect; that comes over well, and I hope I re-iterate it here.

14 Likes

Would that be more like Bitcoin, where coins don’t exist as unique entities with unique ID’s?

Yes, when we can ensure groups are as secure as we think (non-deterministically linked groups; deterministically linked ones are already provably very secure), then we have the possibility of a safecoin balance (a 64-bit integer). Then a single transaction will be just that, for any amount. Unlike bitcoin, there is no dust to deal with, which means instant micro-transactions are possible. Then the network can process transactions even more efficiently, at linearly scalable transactions per second.
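A toy sketch of what a balance-based transfer might look like, with accounts as plain 64-bit integer balances. The names and checks here are illustrative, not the network’s actual design:

```python
# With integer balances, a transfer of any amount is one debit and one
# credit, rather than a shuffle of individually identified coins, so a
# one-unit micropayment costs no more than a large payment.

MAX_U64 = 2**64 - 1  # balances modelled as unsigned 64-bit integers

balances = {"alice": 1_000, "bob": 0}

def transfer(src: str, dst: str, amount: int) -> bool:
    """Move `amount` units between accounts; no dust, no coin IDs."""
    if amount <= 0 or balances[src] < amount:
        return False                      # insufficient funds / bad amount
    if balances[dst] + amount > MAX_U64:
        return False                      # would overflow a 64-bit balance
    balances[src] -= amount
    balances[dst] += amount
    return True

transfer("alice", "bob", 1)   # a one-unit micropayment
print(balances)               # {'alice': 999, 'bob': 1}
```

The contrast with coin-based accounting is that nothing here scales with the amount transferred or with the number of prior payments.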

3 Likes

That would be a lot more efficient, though it feels scary that only one compromised group could then essentially destroy the SafeCoin ecosystem. It’d take only one bug, or a compromised update… Risk=Probability*Impact, and the impact would be catastrophic with a balance-based SafeCoin system. With the unique SafeCoins system the damage would be quite small.

Yes, these are my feelings right now; we need inordinate testing, but essentially this should be the goal, and it is definitely achievable. There are several protections though, as you need to go through close groups. Plus, a total break would mean perhaps a person has X coins and nobody wants to trade with them; but it is too dangerous for the beta/1.0 versions.

After launch this will be a big investigation, as I feel the answer will be as super simple as ever, and guaranteed secure. When we see it we will know it, though I cannot see it yet (not really looking, though).

1 Like

Dan has made a new post about maidsafe:
http://bytemaster.bitshares.org/article/2015/02/01/Thoughts-on-MaidSafe-and-an-Alternative-Approach/

@dirvine

I think you might be surprised how much you and Daniel Larimer agree, actually. I participate in a mumble chat with him weekly on Fridays at 10AM. I’d love to hear the two of you hash out some of the finer points of your visions for a better world. There’s a great deal of overlap.

2 Likes

Dan wrote: “Understanding economics is the key to building these distributed storage systems that are censorship resistant.”

Maybe he means the economics in the beginning, but he should check out what Ray Kurzweil says about the price/performance of information storage, communication and processing. The price/performance improvement follows an inexorable exponential trend! So in the long run, that kind of basic information infrastructure will become essentially free. And because we are already on a fast-growing part of the historical curve, it will only take a decade or so for the price to drop a lot. So a sufficient network effect provided by users in the beginning is likely enough for the long-term survival of the SAFEnet.

2 Likes

Yes, there is the departure in agreement between myself and Dan: I do not believe any form of collaboration between humans with economic incentive will be fair, secure or efficient. We tried it as a species and failed, and always will. Value is what needs to ripple through systems, and not all value can or should be measured (education, health, research … a long list).

All the same, it’s good to see people try different things; we will see in the end. But I doubt anyone knows how fast/secure/private SAFE will be. We can speculate, but not for much longer :slight_smile:

The value of safecoin to the network and to humans (the market) are different issues, and providing an agreed purpose is hard to fathom. The network needs safecoin, but ONLY to keep farmers incentivised to keep looking after data; that’s it. Humans will use it for money/store of wealth or all those economics-based terms (very few economic experts are super rich :slight_smile: ). So the network controls the number of safecoin needed to store/calculate/message etc., balanced against the number it needs to keep enough farmers providing the resource. Different from price setting, this is the lowest possible cost of network resources being achieved, not because some human decided, but because the network measures in real time and dynamically. So safecoin to the network is a measure of human involvement, nothing else. To the market, safecoin is a crypto-based currency protected by the network, and importantly with zero fees, as fees always favour the rich and stifle the poor, as well as simply not being worth calculating a payment for.

@faddat I agree, and recognise we disagree on implementation and some basic fundamentals. I wish I had time to look closer at other projects; for now I depend on others to tell me. I can only imagine trying to work out SAFE while working on all the bitshares projects; it says a lot, but also explains where misconceptions can happen. As long as we are all open and honest, then what’s to lose? Nothing, but much to gain :wink: I wish I had time, but I have had one day off since October a year ago (a weekend off), and that was when Paige was over recently, and I have a ton of research to get through. I have a presentation tomorrow that should allow me to help the team see what I have been ranting about in the last few weeks, and then take a few days off. I really feel the need these days for a wee break, to get up to the highlands to my nature :slight_smile:

8 Likes

Great to see another reply, thanks Daniel.

Have you ever downloaded from iTunes? It is ALWAYS faster than downloading the same content from a popular BitTorrent with 1000’s of users. There is no escaping the network overhead of negotiating hundreds of connections and maintaining a “tit-for-tat” incentive structure.

This point is well made, but in P2P systems your download speed is someone else’s upload speed; you have to use pipelines in any P2P network. It’s true that there’s overhead here. But think of Bitcoin: every block and transaction is shared with something like 8,000 live nodes. Just to install a full Bitcoin client I start with something like 35 GB! That’s not economical either. And what about burning for POW? For Bitcoin it’s something like $600 million a year, just for energy (I know BitShares is way smarter than that, using DPOS, which is great). Maidsafe doesn’t have anything like that. So if you install the Maidsafe software, you are way ahead of all the blockchain-based competition that wants you to install and store these huge files.

Individual business men would have to structure themselves as “caching proxies” so that they can claim immunity from DMCA copy-right infringement claims. In the event that the caching is insufficient all that is necessary is to provide an automated way for lawyers to file take-down requests of individual chunks of data.

Maidsafe wants to provide decentralized filesharing, decentralized storage, decentralized (D)Apps and decentralized websites, in a way that no one can ever work out who downloaded something, who’s storing it, etc. So the system is designed so that there can be no claim at all, because no one can find anything on your machine. And if they find anything, they have to work extremely hard to find out which chunk belongs to which file and who requested it. It’s not only about data storage.

Farmers will pick random chunks of data and request nodes to prove they have stored it. The more you store and the more you farm the more safe coin you will earn.

Just to make clear: someone pays to PUT data onto the network. It is then stored in vaults, which are run by farmers. The moment a file is requested, there’s the probability of “farming Safecoins”. You won’t get paid merely for sharing TBs of data and proving that you have them.

This means that they expect nodes to provide resources to both fetch and serve cached content without being paid. This is very altruistic but not economically viable.

Caching data is like a “free service” nodes provide to each other. It actually makes things cheaper for them; let me explain. If a node is asked to provide chunk ABC, and it has to ask its closest nodes whether they have it, that brings overhead and costs time. If the node already has the chunk, it will just provide it, no need to ask any other nodes. That means faster and cheaper. And in a routing system like SAFEnet, all nodes already need to provide routing to each other. So why not cache the last X chunks in memory and provide them when needed? The “price” for this service will be part of Safecoin, because no one will work for free and every node provides a vault. So if a farmer is calculating his costs, it will be something like: storing the data + bandwidth + electricity + hardware + cached chunks etc. If it’s not profitable? Well, then a lot of farmers will go offline and stop providing their services. The SAFEnet has a built-in mechanism for that as well: the Safecoins provided for a GET request (the actual farming) will become higher and higher until the network stabilizes again. This is smart and economically feasible, but hey, that’s my opinion :wink:
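The farmer’s break-even reasoning can be sketched as a spreadsheet-style calculation. Every number below is invented for illustration; the real network adjusts rewards dynamically rather than exposing any of these figures:

```python
# Toy model of a farmer's monthly profit-and-loss: if the GET rewards do
# not cover the full cost stack (storage, bandwidth, power, cache), the
# farmer goes offline, shrinking supply. All figures are assumptions.

monthly_costs_usd = {
    "storage_hw_amortized": 2.00,  # assumed
    "bandwidth": 1.50,             # assumed
    "electricity": 1.00,           # assumed
    "cache_ram_overhead": 0.25,    # assumed cost of serving cached chunks
}
gets_served_per_month = 50_000     # assumed
reward_per_get_usd = 0.0002        # assumed safecoin reward, in USD terms

revenue = gets_served_per_month * reward_per_get_usd
profit = revenue - sum(monthly_costs_usd.values())
print(f"monthly profit: ${profit:.2f}")  # positive -> farmer stays online
```

Note that the cache overhead sits in the same cost stack as everything else, which is why GET rewards implicitly pay for caching too.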

4 Likes

Absolutely (nicely put; you really are getting into the depths of really grasping it all now, very cool. Eric and I were discussing the number of folks who are really getting into the detail; for me it’s really brilliant).

Another thing is that if you provide cached data, your chance of earning a safecoin increases. Weird? You bet :slight_smile: but it means you are stopping somebody else from a potential farming attempt if you can supply the data. So a mix of not wanting all that extra traffic through you and increasing your ability to earn, whilst making the network faster and more efficient, is all net positive, I believe anyway. It’s the deeper aspects of some of these parts that become interesting; they do a lot for the network that is entirely non-obvious.

6 Likes

Thnx for the nice words. And boy, this quote blows my mind again, that’s like really true! :slight_smile:

4 Likes

I really wish he would just come over to these forums and ask some questions first, rather than speculating on many points what MaidSafe plans to do. Because the result is that he often doesn’t list the option that MaidSafe is going for, or he picks the right one and spends only one sentence on it.

If I upload a picture of my cat then chances are there will be relatively few fetches. If I upload a copy of the latest movie then there will be millions of fetches. Because the data is all encrypted the network has no way of knowing how often the content will be fetched and thus no way of knowing how much to charge in advance.

There is a global variable, the PUT price, which will settle at about the average cost of storing and hosting a PUT. The price will be set algorithmically based on the used:total storage ratio. If there’s too little available storage, PUT prices will go up; if there’s lots of free storage, PUT prices will go down.
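One possible shape for such a rule, purely as an illustration. Only the direction (scarce space means dearer PUTs) comes from the description above; the curve itself is an assumption:

```python
# Toy PUT pricing rule: price rises as the used:total storage ratio
# approaches 1. The exact curve is an invented illustration, not the
# network's algorithm.

def put_price(used: float, total: float, base_price: float = 1.0) -> float:
    """Price per PUT grows as the fraction of used storage approaches 1."""
    ratio = used / total
    # clamp the free-space fraction so the price stays finite near 100% use
    return base_price / max(1.0 - ratio, 0.01)

print(put_price(used=20, total=100))  # 1.25: plenty of space, cheap PUTs
print(put_price(used=90, total=100))  # 10.0: space is scarce, dearer PUTs
```

This also answers the “how much to charge in advance” objection in the quote: the network doesn’t need to predict fetch counts, only to track its own spare capacity.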

I know they have implemented caching of frequently accessed content. This means that they expect nodes to provide resources to both fetch and serve cached content without being paid. This is very altruistic but not economically viable.

To sustain and increase your vault’s rank in the network, you need to both host and route what is expected of your vault, or else you are de-ranked, which translates into lower SafeCoin rewards, and eventually you are kicked out of your close group and blacklisted. Caching is in fact beneficial for a vault, since it saves additional routing.

Through rank, vaults are forced to provide the whole package of participating in the network, and because the farming rewards, like the PUT prices, vary based on network capacity/load, they will settle on an economically viable value. In essence, what you earn from GETs is actually payment for storing/hosting/routing/caching all together. If this payment is too low, vaults will go offline, and the algorithm will accordingly increase the payment until it reaches equilibrium again.
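The equilibrium argument can be sketched as a toy feedback loop; the update rule and every constant here are invented for illustration:

```python
# Toy simulation: when vault supply falls short of what the network
# wants, the GET reward is raised until enough vaults come back online.
# Constants and update rules are illustrative assumptions.

reward = 1.0   # farming reward per GET, in arbitrary units
vaults = 80    # vaults currently online
NEEDED = 100   # capacity the network wants

while vaults < NEEDED:
    reward *= 1.05   # network sweetens the farming reward
    vaults += 2      # assume each raise lures some vaults back

print(vaults, round(reward, 2))  # 100 1.63
```

The real mechanism would of course be continuous and measured, but the direction of the feedback is the point: under-supply pushes rewards up, never the reverse.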

Farming

The paragraphs under it are simply not correct. Data integrity is maintained by rewarding farmers for replying to GETs, and otherwise de-ranking them. Vaults are also “pinged” by their managers every 10 seconds or so to check availability. He doesn’t seem to be aware of the SafeCoin recycling process.

Latency is far more important for user experience, especially web browsing.

Popular websites will be cached, which means fewer hops, which means less latency. In addition, there is a race between farmers to serve GETs: the one delivering the data fastest gets the SafeCoin reward. This will drive latency down.

For these two reasons, his expected average latency of 5 seconds is extremely pessimistic.

4 Likes