I think the main difference is that per-byte will be better (cheaper) for a lot of people, but per-PUT is better for the network (higher burn rate).
I voted PUT on the assumption that there is a fixed cost per PUT.
That's only an approximation, so my suggestion would be that the unit storage charge be designed to keep the fee per unit as close to the network cost per unit as possible. The aim of this is to be the fairest to users and the network (farmers), and I think also the hardest to game or exploit as an attack.
Yes, no matter the size of the PUT chunk there is a certain overhead.
I meant to include that in my post, but it's only 3 hours from next year and I'm not exactly thinking of everything.
I think, though, that some cost benefit for storing small items, say 200-byte forum comments, might be good for those who aren't doing big file storage but love tweeting or FB posts.
(I am editing my previous post to suggest a base cost per PUT plus a variable amount depending on size.)
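Roughly like this, as a sketch; the constants here are made-up placeholders, not anything from the actual network design:

```rust
// Hypothetical PUT pricing: a fixed base fee covering per-PUT
// overhead, plus a variable component proportional to payload size.
const BASE_COST_NANOS: u64 = 1_000; // illustrative fixed overhead per PUT
const COST_PER_BYTE_NANOS: u64 = 2; // illustrative variable component

fn put_cost(payload_bytes: u64) -> u64 {
    BASE_COST_NANOS + COST_PER_BYTE_NANOS * payload_bytes
}

fn main() {
    // A 200-byte forum comment vs a full 1 MB chunk:
    println!("200 B -> {} nanos", put_cost(200)); // 1_400
    println!("1 MB  -> {} nanos", put_cost(1_048_576)); // 2_098_152
}
```

That way a tiny comment mostly pays the fixed overhead, while a full chunk is dominated by the per-byte part.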
I would say it depends on which best reflects the work done by the network to both store and retrieve the data. This will make the network more resistant to attack and more economically viable.
From a user perspective, I'm not sure it makes a huge difference, as some stuff will be more than one PUT. The user may not understand why this is the case. So, we gain some simplicity with a standard PUT cost, but lose it again as data stored gets larger.
You raise good points here. I voted per PUT for obvious reasons. However, I think that PUTs should effectively be a fixed size so that the cost can be marketed as a cost per MB, with PUT never mentioned outside of the inner geekdom.
Given the 4kB sector size of storage and the 1kB optimal packet size for network transmission, I would say go with an easy multiple of those. The challenge is that smaller chunk sizes place more load on the network vaults. The first size up from the 4kB sector that feels right and reasonable from both a marketing and technical perspective is the 1MB chunk we have now.
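For concreteness: 1 MB = 1,048,576 bytes = 256 × 4 kB sectors = 1,024 × 1 kB packets, so the current chunk size is an exact power-of-two multiple of both.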
I'm not sure that it's obvious. Can you elaborate?
I can understand the desire for per PUT, since there are fixed overheads to deal with.
I can understand the desire for per byte, since hitting 'like' is surely less costly than uploading a minute of audio, despite both taking only 1 PUT.
So I'm really unsure which way this should go. A combo of both 'fixed' overheads and 'variable' size would be good, but is maybe not so user-friendly and is also complex to turn into a storecost algorithm.
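To make the tradeoff concrete, reusing the illustrative placeholder numbers from the sketch earlier in the thread: the effective per-byte price falls as uploads get bigger, so there is no single per-MB figure to show users.

```rust
// Effective per-byte price under the base-plus-variable placeholder
// constants: the fixed fee gets amortised over more bytes as the
// payload grows, so small and large uploads see different unit prices.
fn effective_nanos_per_byte(payload_bytes: u64) -> f64 {
    let total = 1_000 + 2 * payload_bytes; // base + per-byte, as before
    total as f64 / payload_bytes as f64
}

fn main() {
    println!("{:.2}", effective_nanos_per_byte(200)); // 7.00
    println!("{:.2}", effective_nanos_per_byte(1_048_576)); // ~2.00
}
```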
For me, one issue is forcing folk to fit as much per PUT as possible, encouraging fewer chunks and less work for the network. Dust-type transactions could be an easy attack.
Mostly network overhead and spam/dust protection, as dirvine pointed out. Also HDD and SSD sector-size considerations: any PUT less than 4kB will typically consume 4kB on disk anyhow.
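A quick sketch of that sector rounding, assuming a 4kB sector size:

```rust
// Bytes actually consumed on disk: payloads are rounded up to whole
// 4 kB sectors, so a tiny PUT still occupies a full sector.
const SECTOR: u64 = 4096;

fn disk_bytes(payload: u64) -> u64 {
    ((payload + SECTOR - 1) / SECTOR) * SECTOR // round up
}

fn main() {
    assert_eq!(disk_bytes(100), 4096); // a 100 B "like" still takes 4 kB
    assert_eq!(disk_bytes(4096), 4096); // exactly one sector
    assert_eq!(disk_bytes(4097), 8192); // spills into a second sector
}
```

So charging purely per byte would undercount the real disk cost of small PUTs by a large factor.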
It would seem that you might be able to find a happy medium in something like an appendable data chunk. IIRC you would have 1,000 1kB data lines in the 1MB AD chunk. You pay a PUT to create the AD on the network, but after that, filling each of the 1kB lines one by one is free, or again costs 1 milliPUT. Just spitballing here, but it seems like it would be rather efficient, since each line is roughly the optimal UDP packet size. When the AD is full you would need to buy another one and repeat. The end effect is that you are still paying a fixed price per MB stored, and have some control at the dust level. This wouldn't control spam as well as charging 1 PUT for each 1kB line, but it isn't a free-for-all either.
EDIT: It probably makes more sense to have an AD or MD with 250 lines at 4kB each. Would save a lot of write amplification on vaults with SSD arrays.
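Still spitballing, here is one hypothetical shape for that in code; the type, names, and billing scheme are illustrative only, nothing official:

```rust
// Hypothetical appendable-data chunk per the edit above: 250 lines of
// 4 kB each, 1 MB in total. Creating it costs one full PUT; each
// appended line is billed at 1/250 of a PUT, so a filled chunk still
// works out to a fixed price per MB.
const LINE_SIZE: usize = 4 * 1024;
const LINES: usize = 250;

struct AppendableChunk {
    lines: Vec<Vec<u8>>,
}

impl AppendableChunk {
    /// Creating the chunk on the network costs 1 PUT (1000 milliPUT here).
    fn new() -> Self {
        Self { lines: Vec::with_capacity(LINES) }
    }

    /// Appends one line of up to 4 kB; returns its cost in milliPUT,
    /// or None once the chunk is full (buy a new one and repeat).
    fn append(&mut self, line: Vec<u8>) -> Option<u32> {
        if self.lines.len() >= LINES || line.len() > LINE_SIZE {
            return None;
        }
        self.lines.push(line);
        Some(1_000 / LINES as u32) // 4 milliPUT per line
    }
}
```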
Although that is fine from a technical perspective, it loses the simplicity of having a fixed cost per MB for the layman. It would be nice to be able to look at your safecoin balance and the current storecost and know exactly how many MB you can store at that instant.
Also, I'm not sure it is valid to discount larger sizes unless you use variable chunk sizes up to the GB range or higher... Personally, I really like the concept of fixed 1MB sector sizes for the big SAFE HDD in the sky (metaphorically speaking).
The social media nut does not think in MB, so that's a pretty moot point for the majority of PUTs once the network is worldwide. More uploads come from emails, FB, Twitter, forums, blogs, photos, comments, and other such things than from file uploads. Most photos are reduced-size photos of 20K to 500K, since most are memes, funny cat photos, social media postings, etc. (Yes, there will be a lot of full-size uploads to places, but dwarfed by the small social media stuff.)
So people are more likely to look at the number of PUTs available than the number of MB they can upload. If they get more PUTs than estimated, they will be happy.
Yes, initially MB will be the driving factor, but when the 2 billion FB users come over to SAFE, that will be totally different.
I'm not convinced the culture of micro-PUTs like 'likes' will transfer to SAFE at all. Costs add up. Even paying separately for each e-mail will be quite a barrier to entry. Do you all think intermediary services that offer, e.g., a bulk of a certain number of mails or 'likes' for a fixed cost might get created?
Max PUT size is 1MB, right? So what happens if I try to upload a 3.1MB file? I'm guessing that the file upload succeeds, but I get charged for 4 PUTs. Is that right?
Briefly: max chunk size is 1MB. Larger files are split up into chunks, all of which are stored, and a data map is created as part of the self-encryption process. (On phone, so I don't have a link to the primer for this just now.)
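So, for the 3.1MB question above, assuming a naive split into fixed 1MB chunks (real self-encryption chunking is a bit more involved, and the data map itself also has to be stored):

```rust
// Back-of-envelope chunk count for an upload, rounding up to whole
// 1 MB chunks.
const CHUNK: u64 = 1024 * 1024; // 1 MB max chunk size

fn chunks_needed(file_bytes: u64) -> u64 {
    (file_bytes + CHUNK - 1) / CHUNK // round up
}

fn main() {
    let file = 3 * CHUNK + CHUNK / 10; // ~3.1 MB
    assert_eq!(chunks_needed(file), 4); // so 4 chunk PUTs
}
```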