My apologies, yes. That is what I meant. Asking users to sacrifice for the “greater good” almost never results in the intended outcome. Statistically, in all scenarios they will choose the option that benefits them most as individuals.
Yes exactly! Speed on the SAFE network is built-in. It’s finely tuned and will serve files ridiculously faster than any direct TCP connection. WHY WOULD WE EVER REMOVE OUR BIGGEST ASSET?
It’s like asking is social welfare a good idea. That which is seen is that it takes some homeless off the streets. That which is unseen… See http://bastiat.org/en/twisatwins.html for the rest.
It is my personal belief that this approach will improve decentralization and discourage specialization, and as I said above, the result will be more hobby farmers at the cost of sub-optimal service. When I say “sub-optimal” I mean of lower quality than would otherwise be possible. I know that for some, that cost is worth paying in order to prevent centralization. Based on various comments I concluded that centralization will be resisted and is therefore unlikely. I expanded on that here and also in another topic about (not) using RAID, which is what part-time farmers will, in my estimate, do (because the system won’t be able to punish them enough to discourage vault/data loss).
Read the devs’ comments; they are all consistent: decentralization and “balanced rewarding” (I don’t know what other term to use) have always been mentioned as important approaches.
I used to comment a lot about this before, but it is what it is. From a tech perspective it may also be quite hard to collect and process performance stats for all vaults. It seems complicated enough as is (vault age, uptime, etc.), and once you consider how “performance” should even be defined (where do you measure it?), it clearly becomes very difficult. For example, if you are farming out of Brisbane you may show great performance to a GET customer in Darwin, but poor performance to someone in Finland. And because 95% of your GETs will come from other continents, you would never be able to break into the upper half…
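To make the measurement problem concrete, here’s a toy sketch (all latencies invented) showing how the same vault ranks completely differently depending on where the observer sits:

```python
# Toy illustration (invented numbers): a vault's "performance" depends
# on where the measurement is taken, so a single global score is ill-defined.
from statistics import median

# Round-trip latencies (ms) to one Brisbane vault, seen from three regions.
samples = {
    "Darwin":  [35, 40, 38, 42],    # same continent: looks excellent
    "Tokyo":   [120, 130, 125],     # regional: looks average
    "Finland": [310, 290, 305],     # other side of the planet: looks terrible
}

for region, latencies in samples.items():
    print(f"{region:8s} median RTT: {median(latencies):6.1f} ms")

# The same vault is "fast" or "slow" depending on the observer, so any
# network-wide ranking has to pick (and justify) a vantage-point model.
```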
It’s complicated and any direction can have unpredictable consequences. People can participate (or not) based on their own values and interests, so those who find the final approach acceptable will stay.
I think it’s okay to discuss this - the devs can tell us what the plan is, and then we can stop.
We can and should also discuss what would happen in the opposite case (if only above-average performance is rewarded). Let’s say only above-average vaults get rewards. If you drop below the average, you’re f***ed: you can delete everything, or “tune” your system, or upgrade it hoping you’ll break into the upper half. But no matter what anyone does at the individual level, at any given time 50% of farmers wouldn’t get any hits, just (free) PUTs, so there would be both centralization/specialization (successful farmers could add more capacity) and a lot of churn (imagine hundreds of TBs being deleted every week).
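A toy simulation (hypothetical numbers) makes the trap concrete: upgrading shifts everyone’s scores, but with a relative cutoff, half the vaults land below the median by construction, so the 50% of unrewarded farmers never shrinks no matter how much individuals invest.

```python
# Hypothetical sketch: reward only vaults above the median "speed score".
# No matter how much everyone upgrades, exactly half miss the cutoff,
# so the incentive to delete everything and churn never goes away.
import random

random.seed(1)
vaults = [random.gauss(100, 15) for _ in range(10_000)]  # arbitrary scores

for generation in range(3):
    cutoff = sorted(vaults)[len(vaults) // 2]            # the median
    losers = sum(1 for v in vaults if v < cutoff)
    print(f"gen {generation}: cutoff={cutoff:.1f}, unrewarded={losers}")
    # Everyone "tunes" their system: every vault gets 20% faster...
    vaults = [v * 1.2 for v in vaults]
# ...yet the share below the median stays at 50% every generation.
```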
The rebuilding of missing replicas causes GETs. I can’t recall if I mentioned this before, but I wonder whether farmers get paid for GETs caused by replica rebuilds. I previously guesstimated the data hit rate could be in the low single percents (say, 1 or 2% per month, so you’d get 80 GB of GETs on 8 TB of stored capacity). But if only the top 50% of vaults (let alone only the fastest) get rewarded, farmers could join and leave en masse, and those who stay could get more GETs from rebuilding than from paying users.
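For reference, the back-of-envelope arithmetic behind that guesstimate, plus a purely hypothetical rebuild figure to show how churn-driven traffic could dwarf user traffic:

```python
# Back-of-envelope check of the hit-rate guesstimate above.
# The 1%/month figure is from the post; the rebuild fraction is a
# purely hypothetical assumption for illustration.
stored_gb = 8 * 1000               # 8 TB of stored capacity
monthly_hit_rate = 0.01            # 1% of stored data read per month

user_gets_gb = stored_gb * monthly_hit_rate
print(f"user-driven GETs:    ~{user_gets_gb:.0f} GB/month")    # ~80 GB

rebuild_fraction = 0.10            # assumed: 10% of replicas re-fetched after churn
rebuild_gets_gb = stored_gb * rebuild_fraction
print(f"rebuild-driven GETs: ~{rebuild_gets_gb:.0f} GB/month") # ~800 GB
```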
Without these and other details clearly specified in a proposal, there is too much room for over-simplification and oversight to host any constructive debate on the topic. Be patient, its time will come.
Either the top vaults will be preferred, or they won’t.
I just analyzed both scenarios, and I hoped to show that the details impact everything.
A non-cached chunk has 4 replicas. Let’s say you read 2 non-cached chunks.
- If you direct your requests to the best vault for each of them, you get fast performance.
- If you don’t do that, then one of the requests will likely go to a less-than-best vault. The larger the request, the more likely it is that you’ll get your file at the speed of the slowest qualified farmer serving it.
Imagine requesting a 1 GB file, which is then obtained from approximately 1,000 vaults (one per chunk). You’ll be done when the slowest chunk arrives. It’s a simple concept.
Next, consider whether you can even handle 1 GB of data in near real time, given that even the slowest working vault could send you the chunk you requested within 2 seconds. This is what gives the devs some flexibility (say, direct requests to the fastest vaults for files under 10 MB, to Tiers 1 & 2 for 10-100 MB, and to any qualified vault for files over 100 MB); it’s not exactly rocket science.
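Here’s a minimal sketch of both points, with all latencies and tier thresholds invented for illustration: the fetch completes only when the slowest chunk arrives, and a size-tiered routing policy is a few lines of logic:

```python
# Sketch of the two points above (numbers and tier thresholds invented):
# 1) a file fetch finishes only when its SLOWEST chunk arrives;
# 2) the routing policy can depend on file size.
import random

random.seed(0)
CHUNK_MB = 1  # illustrative chunk size

def fetch_time_s(file_mb: int) -> float:
    """One vault answers each chunk request; the whole file waits
    for the slowest chunk overall."""
    chunks = max(1, file_mb // CHUNK_MB)
    return max(random.uniform(0.05, 2.0) for _ in range(chunks))

def routing_tier(file_mb: int) -> str:
    """Hypothetical size-tiered policy from the paragraph above."""
    if file_mb < 10:
        return "fastest vaults only"
    if file_mb <= 100:
        return "tiers 1 and 2"
    return "any qualified vault"

for size in (5, 50, 1000):
    print(f"{size:5d} MB -> {routing_tier(size):20s} "
          f"fetch ~{fetch_time_s(size):.2f} s")
# A 1 GB file (~1,000 chunks) almost surely hits a ~2 s chunk,
# so completion time is gated by the slowest responder.
```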
But I don’t think these details are essential for people to state their support for, or objections to, one of these reward mechanisms (regardless of which one SAFE eventually implements). (Although this should be discussed in another topic.)
I guess you probably found the answer already, but since I had started looking, see the self_encryptor description here:
This library splits a file into encrypted chunks and also produces a data map for the same. This data map with encrypted chunks enables the file to be reconstituted.
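As a rough mental model of what that means (a deliberate simplification, not the actual self_encryptor algorithm or API):

```python
# Rough mental model of self-encryption (a simplification, not the real
# self_encryptor library): split a file into chunks, encrypt each chunk
# with a key derived from the plaintext's own hashes, and keep a "data map"
# of chunk addresses that lets the owner fetch and reassemble the file.
import hashlib

CHUNK = 1024 * 1024  # 1 MB chunks, for illustration

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy keystream cipher for the sketch only; the real library uses
    # proper encryption primitives.
    stream = hashlib.sha256(key).digest()
    out = bytearray()
    for i, b in enumerate(data):
        if i % 32 == 0 and i > 0:
            stream = hashlib.sha256(stream).digest()
        out.append(b ^ stream[i % 32])
    return bytes(out)

def self_encrypt(blob: bytes):
    plain = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
    data_map, encrypted = [], []
    for i, chunk in enumerate(plain):
        # Key derived from neighbouring chunks' hashes (the real scheme
        # does something like this, with different details).
        left = hashlib.sha256(plain[i - 1]).digest()
        right = hashlib.sha256(plain[(i + 1) % len(plain)]).digest()
        cipher = xor_stream(chunk, left + right)
        data_map.append(hashlib.sha256(cipher).hexdigest())  # chunk address
        encrypted.append(cipher)
    # The data map plus the encrypted chunks reconstitute the file.
    return data_map, encrypted
```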
Because safenet is open source, I don’t think the network will have a choice about supporting commercial farmers. If the network makes commercial farming unprofitable, the project would be forked into a version more accommodating to commercial interests; the entrepreneurially minded could then move over to the fork and use their Google-tier data centers to kick safenet’s butt performance-wise until the network’s AI decides to play nicely with the pro farmers.
@smacz - The importance of latency depends on whether safenet wants to be an all-in-one solution. I myself would be perfectly fine with safenet focusing on latency-tolerant data and another project providing safenet’s features for latency-sensitive data.
It’s not clear server farms would kick SAFE’s butt, either on performance or on appeal to users. Let’s wait and see how the network performs before we worry too much about how it could be beaten. Every crypto project fork I’ve heard of has come a poor second, or died altogether.
Any profit they make will come from users, so it’s not obvious that a commercially driven fork would outcompete an already growing SAFEnetwork. Again, we have the luxury of being able to see how the early network grows and how commercial farmers respond to it. Personally, I expect them to be slow - some smaller entrepreneurs, yes, but Google etc., highly unlikely.
I don’t think we can analyze our way through these questions so we have to make a best guess to start, and be ready to adapt according to real measurements and activity.