Safecoin resource (consumption) model

Here’s a quick summary/intro to justify this topic:
a) Some sort of bid/ask process will be necessary to properly price resources
b) Because of the huge amount of resources the network will trade/share, it will pay to exploit even the slightest inaccuracies in its pricing, so the reward for mischief will be handsome
c) Currently, caching and intermediate nodes aren’t paid because they’re supposed to contribute to the network. But they contribute both memory and network bandwidth and should be paid for their work.
d) If the project turns out to be as successful as we expect, resources that are not rewarded will gradually become scarce and impact the network in ways we can’t predict (for example, imagine farmers whose farming effort becomes less lucrative because GETs get slower, so they earn less despite low network utilization)

My conclusion is that it may be a good idea to create a complete list of inputs and outputs, so that a fairly complete and realistic economic model can be built.
I think the Project has these models, but I don’t know if they’re interested in moving (not necessarily in v1.0) to a model that rewards all participants in the I/O chain and possibly incorporates a transparent and distributed exchange where these resources can be traded.

This is just a brainstorming exercise for an alternative consumption model. Perhaps it could be considered in v2.0.

In order for the end-game economy to be self-sustaining, we need to balance the cost/consumption equation. I have not thought of everything, which I think is impossible. Here are some ideas below.


After Cap Implementation
In order for this consumption model to work, Safecoin has to be in circulation beforehand. Once the Safecoin cap is reached we would switch over to this economic model.

GET Request
When a user initiates a GET request, they incur a resource cost to the Network. That resource is provided by everyone servicing the GET request and is then consumed by the user. (A sketch of this flow follows the list below.)

  • The user pays the Network based on their GET usage. Payment is actually recycled.
  • Recycled coins go back into the Network pool (unowned coins).
  • The Network pool awards (assigns coin ownership) to those who serviced the GET.
  • Because the relationship is 1 to 1, spamming GET requests for profit is pointless.
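
To make the 1-to-1 recycling loop concrete, here is a minimal Python sketch of the flow described above. All names and the flat per-chunk price are hypothetical placeholders, not anything from the Project’s implementation.

```python
GET_PRICE = 1.0        # placeholder Safecoin price per serviced GET
network_pool = 0.0     # unowned, recycled coins
wallets = {}           # Safecoin balances per account / vault

def charge_get(user, chunks_fetched):
    """User pays for GET usage; the payment is recycled into the unowned pool."""
    global network_pool
    cost = chunks_fetched * GET_PRICE
    wallets[user] = wallets.get(user, 0.0) - cost
    network_pool += cost

def reward_servicing_vaults(vaults):
    """The pool pays the vaults that serviced those GETs.
    Every coin charged is eventually paid back out, so spamming GETs
    cannot mint coins; it only moves them through the pool."""
    global network_pool
    if not vaults:
        return
    share = network_pool / len(vaults)
    for vault in vaults:
        wallets[vault] = wallets.get(vault, 0.0) + share
    network_pool = 0.0
```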

Private GET Pricing in Safecoin
Most GET requests will be for tiny 1 MB data chunks, so we would have to use micropayments to account for each chunk. Autonomous GET pricing is determined by Network utilization (a pricing sketch follows the list below). Addendum: private data GETs will be charged; public data GETs will be free.

  • If the Network is in a state of overutilization, the GET price goes up.
  • If the Network is in a state of underutilization, the GET price goes down.
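
As an illustration only, a toy utilization-based price adjustment might look like the sketch below; the target utilization and sensitivity are made-up parameters, not values proposed by the Project.

```python
def get_price(base_price, utilization, target=0.5, sensitivity=2.0):
    """Toy pricing rule: the GET price rises when network utilization is
    above the target and falls when it is below. All parameters are
    illustrative placeholders."""
    # utilization: fraction of network capacity currently in use (0.0 .. 1.0)
    adjustment = 1.0 + sensitivity * (utilization - target)
    return max(base_price * adjustment, 0.0)

# Example: at 80% utilization the price is higher than at 30%.
assert get_price(0.001, 0.8) > get_price(0.001, 0.3)
```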

PUT Storage
The amount of data a user can store is based on their Vaults (available_space + free amount). De-duplication will help mitigate abuse of the free amount. Users earn Safecoin by running a vault, which allows them to pay for their own GET usage. We might get to a point where the Network is large enough to remove the storage limit. (A balance sketch follows the list below.)

  • If they provide as much service as they use, their account will be balanced.
  • If they provide less service than they use, they will pay Safecoin.
  • If they provide more service than they use, they will receive Safecoin.
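
A tiny sketch of that balance, assuming a uniform per-GET price (a simplification of whatever the real pricing turns out to be):

```python
def settle_account(gets_serviced, gets_consumed, price_per_get):
    """Net Safecoin position for one period.
    Positive -> the account earns Safecoin (provided more than it used);
    negative -> it pays Safecoin; zero -> balanced."""
    return (gets_serviced - gets_consumed) * price_per_get
```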

Safecoin’s Fiat Value
It’s too early to tell if Safecoin’s fiat value will be volatile or stable after we reach cap. So I won’t bother with this part.

Thanks for taking time to do that.

I am wondering whether it’d be useful to create a list of all “workers” in the chain.
Farmers are already covered. I’d like to consider (let’s say for v3.0) additional categories as well:

  • Network providers (“wagoners”?) - wagoners could be rewarded for their daily net throughput (when positive). Obviously there are many elements that could be measured, but for the sake of brevity I won’t go into that. Users would pay for network throughput, and maybe even farmers should pay (50/50?).
  • Cache providers - I haven’t been able to find clarification on whether cache is stored only in RAM or also on disk, but in either case it should be paid for. Caching nodes could charge the user based on the number of cache hits, or this could be abstracted and they could simply charge the same way “wagoners” do (maybe a fraction of the GET price, or maybe even the same as the GET price). A reward sketch follows this list.
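
Purely as a brainstorming aid, the rewards for these two roles could be sketched like this; the per-GB rate and the 25% cache fraction are arbitrary placeholders.

```python
def wagoner_reward(net_bytes_relayed, reward_per_gb):
    """Pay a network provider ("wagoner") for its daily net throughput,
    but only when that throughput is positive."""
    gb = max(net_bytes_relayed, 0) / 1e9
    return gb * reward_per_gb

def cache_reward(cache_hits, get_price, fraction=0.25):
    """Pay a caching node per cache hit, as some fraction of the GET price."""
    return cache_hits * get_price * fraction
```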

Does "recycled’ in GET means that unlike in PUT, where it’s “burned” (and farmers who become hosts for the initial chunk can’t get those Safecoins), in GET 's the coins actually end up in their wallets?

I wonder how that will play out. If you don’t recover your backups often, you’ll probably issue few GETs to others, but if most other users are like you, you’ll also serve very few GETs. Because you stand a 25% chance of receiving any such request and a 100% chance of paying for every such request, wouldn’t you need to do 4x as much “work” to balance your own account?
Also, if you have 100GB to back up (on the SAFE network) and 100GB to share, you’d get 0 for PUTs (if those Safecoins are burned) and pay 4X as much for your own PUTs to other nodes on the network.
Maybe I’m missing something, but it seems you’ll need to do “more” work to pay nothing (since you actually consume 4x as much storage as you give, that’s to be expected).

That could be tricky, but it’s too early to say exactly how. Among other things, the exchange rate will depend on how much storage is available the moment we enter production, and so on.

Yes, any node (including the caching one) that serviced the GET should be paid as if it were a vault. This is why running the API should establish a vault by default. If someone doesn’t want to operate a vault, they can assign 0 available_space and still get paid for caching anyway. I doubt they would complain. :smile:

We don’t burn coins at all. Payments made to the Network are recycled so it becomes available to be paid out to vaults.

Private GET Payment Structure

  1. User pays the Network Safecoin for their total GET usage per day/month/year.
  2. The Network recycles that Safecoin… meaning ownership is wiped and goes back into the unowned pool.
  3. The Network pays per day/month/year to the vaults based on their total GET serviced.

The exact Safecoin recycled does not end up with the exact farmer who serviced it.
(Safecoin Payment) ~> (Network Pool) ~> (Payment to Vaults)

That is a good question. They are paying 4X because of the 4-copy redundancy.

One of our community members suggested giving the user variable redundancy options. In other words, they could choose: 1-copy redundancy (pay 100%), 2-copy redundancy (200%), or 3-copy redundancy (300%). This adds some complexity to the autonomous network, and I don’t know how it will affect the functionality of the Network. But if it can be done, it would make economic sense.
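
The arithmetic of that option is simple; a sketch, with the chunk price left as a free parameter:

```python
def put_cost(chunk_price, copies):
    """Cost of storing one chunk with user-selected redundancy:
    1 copy -> 100%, 2 copies -> 200%, 3 copies -> 300%.
    The default discussed in this thread is 4 copies (400%)."""
    if copies < 1:
        raise ValueError("at least one copy is required")
    return chunk_price * copies
```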

This part I am not sure about; I see it like this. The network must not lose data; if it did, it would be bad for many reasons. So it should calculate the most efficient redundancy itself. Just from a design perspective really, but if we allowed an unsafe level of redundancy it could prove catastrophic. I figure the network will know best.

One route is 4X cost for private data and zero for public?

Agreed,

My first priority would always be a solid, functioning Network, so less than 4-copy redundancy is probably a bad idea.

That is a great compromise! I do like the free public usage and 4X for private. Farmers do bear the cost of servicing public usage but at the same time, they are indirectly helping the global community.

I’ve wondered about this division since I first heard you suggest it. Can you elaborate on your thinking, @dirvine? I’m not sure why “public” is chosen to get a free ride, other than that it is easy to distinguish.

I question it because it seems that a lot of public data will be made public in order to create profit, and it’s a shame not to charge for commercial use. “Tax the b*****s” is, I think, a very important principle when we’re talking about money-making robots, er, corporations, compared to an individual storing their family photos in a private location, for example. I realise this is probably infeasible from a technical point of view, but I would like to understand your rationale.

I feel it’s instantly useful to us all, or it should be anyway. It will be archived if not. I see your point about rubbish being there, but popular data should be what we reward somehow (thinking).

Good point @happybeing.

Public data can be monetized commercially. I’ll brainstorm it and see if I can come up with some scenarios and possible solutions.


Profit through APP Usage
A social APP interfaces with public data. The owner of the APP earns Safecoin from user activity while the farmers are burdened with the public GET requests. Good for the APP owner, bad for the farmers.

If we remove the APP usage reward, the incentive no longer applies. This also removes the monetary incentive for APP builders. If our community is moving toward free open source, then perhaps builders will make an APP because they want to use it. I think donations/crowdfunding should replace APP usage rewards. @frabrunelle has some good ideas on this subject. Builders also have the option to charge users directly.

So now we are left with the farmer burden. This part is difficult. If farmers do not earn enough Safecoin to cover their resource costs, they will abandon their vault. My initial plan was to charge per GET, regardless of public/private. Maybe the Network can charge x1 for public and x4 for private. In this way, farmers still get compensated for resources provided.


Profit through Ads
I expect public video/audio sites to be on the Network. They normally use Ads to generate revenue to help cover costs and make a profit. If other sites provide the same service without Ads, wouldn’t users leave the ad-supported ones?

Public sites may not need Ad revenue if they operate their server as a vault. Hopefully, the Safecoin revenue will cover their resource costs and make them profitable.


Profit through Public Data Collection
This one is difficult because public data will be collected. The very first examples may be web spiders crawling public sites and blogs to index data. If they have to pay x1 for each public GET request, then farmers would still be compensated. Maybe this would be our way of taxing these data collectors.

Some may disagree, but I’m not entirely against companies trying to learn about people’s demands. I appreciate it when a store provides a product I am trying to find. I would just prefer they ask me instead of doing it covertly by tracking my spending habits.

So coins paid for PUTs are recycled (given to farmers and eventually circulated back to the network) but disk space is not (files that have been posted are never automatically deleted). Okay, got it.
I think it should be (storage should be paid for per period of time and prorated; “orphaned” files which can’t find a sponsor should be auto-deleted).
This auto-deletion would work well for “public” files (if the Project insists that there should be such a category - I’m against it - see below) and could be easily arranged. Interested parties could datamine/AI-check such files and, where interested, pay for them to claim ownership. If such public files are garbage (useless to anyone in the world), they’d be left to expire.

Sorry for constantly repeating myself, but IMHO it’d be good to minimize anything that changes how the system works (EDIT: I mean how a free market would have it work).
Any individual’s ingenuity (in general) can rarely match the power of the free market. Only through the free action of all participants can optimal approaches be discovered. I mentioned before that I believe even small inaccuracies will be exploited because it pays very well over EBs of data.

Related to this particular matter:
a) Should I pay 1X for my PGP-encrypted file that is “public”? Probably not.
b) I would argue that instead of creating a “push” for beneficial uses and then policing against abuse, it is more efficient to create a level playing field and make it easy to create positive feedback loops. So, for example, I’d rather see everyone pay the full price and then create a tips API or bounty that makes it easy to “Adopt a copy of this file for X months for only Y Safecoins”.
c) Consider the potential for abuse over a large data set. Think about getting 1% of 2 PB for free, then think about getting 25% (in this case that may be the manipulator’s “cut” of the difference between 1X and 4X). I could Zip my huge private adult video collection, chunk it into 100 MB chunks and use a small army of “Amazon Mechanical Turk” uploaders to upload it as “public data”. And if the first 100 MB or so is free, I could probably pay just a few bucks a year to have my data backed up to the cloud. Sadly, we can’t even begin to imagine the types of scams people will come up with to freeload.

Considering how (relatively) little universally useful public data there is, it should be quite feasible to have it paid for in full through a fundraising drive by the MaidSafe Foundation or users. What those guys from outernet.is are doing - books, paintings, etc. - 50 TB could be enough for all the pre-internet material that has been digitized so far.

And last but not least, by giving space away or at an artificially suppressed price you indirectly penalize farmers who have no say in it. Why should some freeloader from the EU (no disrespect, it’s just an example) get a discount for storing “his public data” (a contradiction in itself) on a disk drive that belongs to a farmer from Burundi (even though the cost of sponsoring that may be spread over the entire community, the guy from Burundi will still have to pay)? If the guy from the EU has some valuable data that the public can benefit from, why can’t he find someone to pay for it? Can’t there be “The EU Citizens’ Foundation for Public Data Access” or something like that where he can go to obtain funding for his data?

I could go on, but I hope this is already enough. If you want another interesting angle on this, here it is: IMO nothing can be “public data” (assuming it’s not encrypted, which is what freeloaders will do) unless it’s under a Public Domain / BSD type of license. How do you ensure that? Who will do that job when the uploader paid not more, but less than the normal MaidSafe price? How can falsely labeled Public Domain data be deleted? And it would be very disturbing to be forced to host someone’s “public” data which isn’t licensed under a BSD or BSD-like license, so that people who use it must append various licenses based on restrictions in the “public” data (and who could even tell if those licenses are correct - maybe a ton of them would just be fake attribution licenses for works of others, meant to generate clicks and (attribution) links to various crappy sites).

@dyamanaka: check this out as ideas and inspiration:
http://www.cloudbus.org/cloudsim/ (and this blog post for a disk-specific example)
GitHub - uwol/computational-economy: An agent-based computational economy with macroeconomic equilibria from microeconomic behaviors

This reply will probably go Off-topic. Again, this is just brainstorming.

Freemium Storage Model
When a user wants to increase their NSL (Network Storage Limit), they pay Safecoin to the Network, where it is recycled. Let’s say you create an account with an NSL of 100 GB (free) by default. If you want to store 150 GB of data, you will need to pay Safecoin to increase your NSL to 150 GB. Then you can PUT 150 GB of data onto the Network. Hopefully, this explains why someone would pay for premium storage.
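
A rough sketch of that premium upgrade, using made-up numbers for the free allowance and the per-GB price:

```python
FREE_NSL_GB = 100       # free allowance per account (example value)
PRICE_PER_GB = 0.01     # Safecoin per GB above the free allowance (placeholder)

def nsl_upgrade_cost(desired_nsl_gb):
    """Safecoin the user pays (and the Network recycles) to raise their
    Network Storage Limit above the free allowance."""
    paid_gb = max(desired_nsl_gb - FREE_NSL_GB, 0)
    return paid_gb * PRICE_PER_GB

# Example: raising the NSL from the free 100 GB to 150 GB costs
# nsl_upgrade_cost(150) == 0.5 Safecoin with these placeholder numbers.
```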

Some people will make multiple accounts to take advantage of the free storage. We had big discussions on this here Network Storage Limit VS Network Reserve and here Proof of unique human.

Consumption Model
This alternative model does not charge for PUT. Instead, the NSL is determined by (available_space + free amount). If you want to increase your NSL, you’ll have to dedicate more hard drive space. Technically, it’s an unenforceable limit if data is not deleted. This might be the reason why we opted to use Safecoin payment for PUT instead.
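
A minimal sketch of that NSL check, assuming (as noted above) that the limit could actually be enforced at PUT time:

```python
def network_storage_limit(available_space_gb, free_amount_gb):
    """NSL in the consumption model: dedicated vault space plus the free amount."""
    return available_space_gb + free_amount_gb

def can_put(stored_gb, put_size_gb, available_space_gb, free_amount_gb):
    """Allow a PUT only while the account stays within its NSL.
    Without deletion this is hard to enforce, which is the caveat above."""
    nsl = network_storage_limit(available_space_gb, free_amount_gb)
    return stored_gb + put_size_gb <= nsl
```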

My goal is to deter users from storing more data than they are supporting. If we consider the loss of Safecoin earning potential, and de-duplication… a DOS attack (over capacity) becomes less likely. It is not a perfect solution though. We talk about it here DOS Attack caused by Data Overload.


I am interested to hear your ideas on a consumption model.

I briefly glanced over the 2nd link, which mentioned “voice recognition”. IMHO that’s not going to work, because “freehoaders” (freeloaders + hoarders) can easily create sites where users do this type of microtask and get paid a few thousand satoshis per account. Similar task-farming is done today with ad clicking, and it would appear within a few weeks of a new exploitable resource showing up on the Web.
If this is meant to be used every single time the account is accessed, it would be very annoying, but it would also create privacy risks for those who don’t want to let the system ID them by their voice (and record it).

I agree people would be willing to pay for storage beyond the free amount, but I wouldn’t give them discounts for “public” data, for the reasons explained above.

I like it when payment is made as soon as the service has been delivered, i.e. right after a PUT has been successfully executed.
Otherwise (with this second approach, for example), if there’s no way to delete data and we don’t charge for PUT, how do we deal with a guy who uploads a crapload of stuff and disconnects right away? He’d have to pay or escrow tokens, or the network would have to be able to delete data.

In the Freemium model maybe GETs should be rewarded as well (so that the “Les Misérables” who host chunks of popular files get rewarded for their disproportionately large contribution to the network, and caching nodes could get some Safecoins as well).
I remember the System Docs say the system will prevent abuse by bots that issue GETs, so if that works as expected I’d like to see GETs count toward one’s contribution to the network (it could start as a very small value).

Some of these ideas and values (e.g. paying caching nodes for network throughput) could be introduced gradually (e.g. in Q2/15 start paying a little for bandwidth, in Q3/15 a bit more). As metrics are collected, it will be possible to analyze that data to see how well the network performs with different values and rewards.

In the future I would prefer a model (I’m not sure what I’d call it) where we could measure useful resources and float them on a market, and then correct prices, ratios and service models would emerge on their own. Of course that’s not an easy thing to do, but I’d like to see it come true within 2 years from now. That kind of system would be self-regulating by virtue of human or AI decisions to buy scarce and sell plentiful system resources.

A lot of public data will/should be linked data.

Here is the link to the Hans Rosling talk that Tim references.

@happybeing, all public (government-produced) data was already paid for (largely from corporate tax and income tax from the greedy people who work at those corporations).
As GETs are paid for, how is that still not enough? No one seems to ask whether it’s right that some Farmer Fred profits from GETs for public data, yet it’s supposedly a problem that for-profit users pay for the same GETs.
I don’t see why for-profit corporations should be singled out. The more they use the network the better, because they will pay the bulk of the cost.

Did anyone look at licensing and redistribution rules for “public” (I consider that to be “government produced public domain”) data?

@janitor Your response makes no sense to me.

How are for-profit companies being charged for storing public data if it is free to store public data?

EDIT: I’m also unclear if we understand the same thing by “public data”. What I understand is: public data is data that is publicly accessible, private data is not publicly accessible. Is that what you understand, or are you talking about something else?

Thanks @chadrick, well worth watching. That’s a whole ’nother can o’ worms, man. Not sure how it works with our privacy aims! He ends by suggesting breaking down the “walls” between social networks. I see that in theory that could be something that empowers us to direct things - e.g. to say send this photo from flickr to friends on fb or something (within the flickr “app”) - but it seems like a new order of privacy issues too. I get putting public service data, science research results etc. out in an accessible form (when I was working in software, XML was going to do this but the standards weren’t adopted outside a few APIs). I know he’s active on surveillance and privacy etc., but I didn’t get the whole picture here. I felt physically uncomfortable! Like - don’t trust this asshole - which I know he isn’t :-). Glad to have watched it though.

Yes, data would have to be deleted for this model to work. DELETE is possible, but I think we are leaning away from it because it’s a headache for the Network. @dirvine often talks about ignoring the delete function. If you think of it as a one-way road, the Network functions pretty fast as long as we don’t backtrack. Deleting data would require the Maid Managers to trace back where data was PUT and have it removed. If the vault went offline, then what? Headache…

Cases for deletion

  1. If a user uploads public data and decides they made a mistake, shouldn’t they be able to delete it?
  2. Eventually users will die or abandon their accounts. Data that is no longer accessed is considered trash, which the Network will archive over time.
  3. What if a user wants to clean out their storage space to make room for something else? They should be able to delete their old data.

Hopefully, in v2.0 deletion will not be a headache. At that point, my model would become viable. We are making compromises in favor of a better user experience (a faster Network); that’s the trade-off.

Charging for PUT vs GET

IMO, charging for GET more accurately equates resources consumed by the user to resources produced by the farmer. Right now the Network subsidizes the farmer so it is not an issue. But it may become an issue as we approach cap.

The theory is people will always need more storage, and will buy more storage with Safecoin. Since that Safecoin is being recycled, we should not reach the cap, or it should take a really long time. Eventually the free amount we started with will be too little to be of any worth. But this is just a theory.

GETs are rewarded in the Freemium model. Maybe we got our lines crossed somewhere.

Freemium Model This is the model we have planned.
PUT = Safecoin charged by the Network from the User
GET = Safecoin paid by the Network to the Farmer.

Consumption Model This is the model we are brainstorming.
GET request = Safecoin charged by the Network from the User
GET reply = Safecoin paid by the Network to the Farmer.

Does that clear it up more?

Well, I defined what I meant by public data.
Now I see you meant “publicly accessible MaidSafe files/objects”.
So it’s a case of misunderstanding, but I’ve stated my case clearly to make sure we’re on the same page and can avoid it in the future :smile:

About publicly accessible data on MaidSafe, you’re right: if it’s possible to store some (up to a certain limit) for free, then no one will pay. (That’s why I argued against it above.) However, GETs would still cost.
Again, read my explanation of why I’m against freeloaders, on top of what has already been mentioned, or, if you don’t want to read that, here is another quick example why: a filthy capitalist will pay Amazon Mechanical Turk workers to post data for them. So even if there was a limit, corporate users could easily pre-chunk “their” public data, hand the chunks to these Mechanical Turk workers to PUT on the network for free, and then have them re-assembled after GET into a format usable by the corporation. They wouldn’t even need to encrypt that data to freeload on that goodness.

But once again, if non-obfuscated or non-encrypted publicly available data was posted, and if the Project wanted, it would have to be licensed liberally (I said Public Domain and the BSD license make sense to me).
And don’t forget that somebody would have to be the owner of that data.
a) If greedy corporations take public (government-produced, free for all) data and post it online in non-obfuscated form, how is that bad? You can use it and you didn’t have to spend a second of your time, effort and bandwidth to put it on the network!
b) If greedy corporations take their own data and share it with the public under a license that’s not restrictive, what’s wrong with that?

You seem to confuse capitalism with the current system, but I won’t go there because I said I’m done with political topics. I’m just saying that I would prefer to charge everyone (which solves the greedy corporation problem and greedy individual problem in one go) and I’m asking you to show how paying corporations do not contribute to the network.

@janitor, you say you don’t want politics or philosophy but then go into exactly those issues. You argue against free riders and then put forward a justification for people to profit from free storage. I questioned the reasons for that. I often get confused reading your posts, perhaps because you are using different understandings of things like “public data”. I think @fergish’s glossary will help - please add to it. :slight_smile: May I suggest:

public data - data stored on SAFE in a public share, that is accessible by anyone
private data - data stored on SAFE and accessible to the owner and specific nominated individuals or groups

Thanks @dyamanaka :-). Now, something I’m scratching my head over. Your comment about delete leads me to think I’ve misunderstood something rather important. If we’re not going to have delete, is the PUT charge based on uploads, including re-saving, rather than just the amount the user wants stored?

For example, I save a 10MB spreadsheet. As a SAFE user (or a Dropbox user) I’m charged for 10MB.

I edit the spreadsheet and re-save it ten times during a session. On Dropbox I’ve not been charged any more. What happens on SAFE? Am I going to be charged for 10MB, or 100MB+ (for ten additional saves of 10MB)?

Or maybe the file is cached locally and only saved at the end of the session, in which case maybe the alternative is being charged for 20MB? But if that’s true, and my machine crashes during the session, have I lost my changes?
