I think there are risks, and it would be wise to have plans in place in case the economics don't work out as hoped, but there are reasons I am not concerned about this issue. I think these factors were missed in ChatGPT's analysis, leading to wrong conclusions.
Perpetual growth seems to me like a fair assumption, which holds for Autonomi as much as it does for the internet. We all expect internet data to only increase. Will people stop emailing / updating websites / blogging / backing up photos / messaging on social media / uploading videos or music, etc.? Will video & audio bitrates fall over time, or keep increasing? If it seems obvious to assume a continuous flow of new data onto the internet and growing demand for data, why should the same not apply to the Safe Network (which would lead to continuous network expansion)?
Cost per MB of data overhead (the existing data a node needs to store before it starts earning) will fall over time due to:
- the falling cost of storage / bandwidth over time (no signs of this stopping any time soon… the cost of storing 1 GB is negligible today, but was less so 10 years ago)
- the likely falling access frequency of data as it ages (old videos get fewer views than new ones, and people access old photos, music, and data sets less frequently than new)
- the possibility of specialist archive nodes that store large quantities of infrequently accessed data, which would lessen the overhead burden on standard nodes.
These factors, I think, make it likely that the network will find a good balance where the old-data overhead is steady and not a problem.
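As a toy illustration of why such a balance can exist (the numbers here are my own illustrative assumptions, not network parameters): if new data grows 20% a year while the price per GB drops roughly tenfold every 5 years, the ratio of "cost to hold everything" to "cost to hold just this year's new data" settles at around g/(g-1) ≈ 6, while the absolute price per GB keeps falling.
awk 'BEGIN {
  g = 1.2          # assumed yearly growth in new data (20%) - illustrative only
  d = 0.631        # assumed yearly price factor (~10x cheaper per 5 years) - illustrative only
  total = 0; added = 100; price = 0.05
  for (y = 1; y <= 20; y++) {
    total += added
    printf "year %2d: hold-all vs hold-new ratio = %.2f, price per GB = $%.4f\n", y, total / added, price
    added *= g; price *= d
  }
}'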
I like the old analogy of a tree; the wood towards the centre of the tree is heavy, but doesn’t use many resources to maintain. The majority of resources go into growth and maintenance of the newer and more active parts of the tree, which are supported by the hard work done in the past to create the core.
So, while I don't think pay-once-store-forever is likely to make the network economics unsustainable, there are risks with anything new. We don't yet know what cost level the network will arrive at when the ratio of old data to new data begins to stabilise, but that cost level will determine the long-term competitiveness of storage on the network, and therefore what use cases it will sustain. If it's very expensive vs other 'cloud' alternatives, it will only be used when the user values the benefits that Autonomi offers over the alternatives for a given application.
If the cost of storage is high, the network could still be sustainable with a steady flow of data that requires what the network offers. Potentially, services analogous to Bitcoin & Ethereum's 'L2s' will appear for less important or ephemeral data storage linked with the Safe Network (e.g. where nodes run a sub-network that works like seeding on other P2P networks, so when people want to stop hosting that data, it disappears… unless an individual values it enough to pay to store it on the core Autonomi network). 'L2' equivalents could be great for chats / messaging etc., where each participant in a thread stores the thread data locally, but when everybody leaves the conversation, it dies, unless someone backs it up.
I also think it's really important to thrash out the economic aspects of the network. I wonder if it's worth moving this discussion to an economics-focused thread, so it doesn't derail this one and can remain focused on network economics?
Thanks for your input. Did you also read the extended argument part? That is when things got interesting. You have to expand it manually, because the post became so long that I placed it under a hidden section.
@Neo, can you move the economic discussion to a separate thread?
I need to take a step back from this discussion. I can't push my brain any further; the chronic fatigue has got worse: headache, brain fog and tiredness. It may take days for me to get better again.
I tried to highlight what might be a serious concern; that will be my part in it. I hope there will be a discussion, now or in the future, about this concern, because, as I think you also hinted, there has not been much discussion on how pay-once-store-forever might affect the network in the mid to long term.
On macOS and Linux, with node-launchpad, the nodes run as user-mode services, as opposed to system-wide. The disadvantage of system-wide services is that they require root access to create and administer, whereas user-mode services don't. So node-launchpad runs the services in user mode to avoid prompting for passwords, which was a problem from both a UX and a technical point of view, since we could not easily obtain the password from the TUI framework we are using.
The disadvantage of a user-mode service is that it requires an active user session. If you really want to run nodes permanently, you need to use sudo safenode-manager to create and administer the services, which will run them in the background, without any user session. Of course though, even with a system-wide service, if you want to leave them running, the computer needs to be running. So you should have a computer that you want to leave on all the time (though you don't need to be logged in).
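If you go that route, the flow is roughly like this (a sketch only; I'm assuming the usual add/start/status subcommands here, so check safenode-manager --help for the exact options on your version):
sudo safenode-manager add      # create the service definitions as root (system-wide)
sudo safenode-manager start    # start the nodes in the background, no user session needed
sudo safenode-manager status   # confirm the services are running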
My intuition would be that for a user-mode service, if the screen were to power off and go to the lock screen, that would still be part of an active session, and thus the services would remain running. So if that isn't working properly, we'd need to investigate there. However, if the computer itself goes to sleep, obviously the services are going to stop running. We should make sure we recover gracefully from that, though.
I hope that maybe clears things up a bit. Please feel free to follow up if anything I’ve said here is not clear.
Let's start over.
When a new node joins, it will take some chunks from other nodes, so those nodes no longer need to store them, i.e. they get space back to store more.
Over the whole network, this results in every node having approximately the same storage (within bounds).
Thus your node will gain new chunks, and then, as more nodes join, it will offload some chunks to the new nodes.
This is how the network is designed to run and is the way it is running.
It is to be run over 12 weeks, starting last Friday.
The pricing includes this overhead, just like when you buy goods from the shops: you are not just paying for the actual product, but also the storage/transport/management/wages/etc. overheads. Same for the network: the store cost includes storage/management/transport overheads.
Just like a physical food/goods store, there are risks: what if not enough shoppers arrive, or what if there aren't enough outlets for all the goods just purchased?
Obviously the goal is for the whole world to move over to this new internet, but even 1% or 0.01% of the world would show success. The data produced is growing exponentially, so once we get over a million or more people using the network and 10% of them running nodes, the growth is guaranteed. At 100K people with 10% running nodes, it's not guaranteed.
The problem is that as the network gets older you get a snowball rolling down the hill of old data, paid for once, that others have to store for free forever, as the older data is spread across the network and onto newer nodes.
Please give an example of the price per GB for 5 copies stored forever. What will that cost be?
This has been discussed over and over again.
The cost of storing data drops to about 1/10 every 5 years. So you can do the maths and see that to store old data forever you only need to charge about 1.1x the cost of just storing new data, and that more than covers it. Use a higher factor and you also cover the cost of equipment etc. Remember, this is designed to use spare capacity, so the actual cost is near zero anyhow.
TBH I doubt this project would work very well if everyone had to buy "new" equipment (PCs) and bigger internet connections (or rent a VPS, or co-locate their "new" PCs in DCs). It probably wouldn't work at all: just too expensive compared to cloud storage, and it would be more like the other blockchain storage projects that use a blockchain for indexing and the cloud for actual storage.
So new disk drives are around $20-50 per TB. Let's take the high end of $50/TB, i.e. $0.05 per GB, and assume a drive lasts 5 years. Going by 60-year trends in storage costs, in 5 years that TB will cost $5, i.e. $0.005 to store the GB on a new disk, then in another 5 years $0.0005, and so on. So the drive cost of storing that 1 GB forever, starting today, is $0.0555555555… using current 60-year trends. But the future may see drive prices drop even faster due to SSD tech.
So, counting only the storage cost for new storage that is NOT your spare resources, it is about $0.0555555555 per GB if you buy a new drive to put into your computer to store chunks.
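That figure is just the geometric series 0.05 + 0.005 + 0.0005 + …, so (under the same 10x-per-5-years assumption) it can be sanity-checked with:
echo "scale=10; 0.05 / (1 - 0.1)" | bc    # prints .0555555555, the lifetime drive cost per GB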
But using spare resources, the real cost including electricity is much lower, and if you have SSDs it's even lower. The additional electricity from running a node is a tiny, tiny fraction of the cost of running your existing equipment, maybe 0.05%.
And then your node is receiving store-cost payments for new chunks. Yes, this relies on the network being used and on people adding nodes to earn. People will add nodes, because when the network fills up people pay more to store, so more nodes get added to capture some of that, and so the cycle continues.
From a user perspective I definitely get not asking for a password, BUT I also think it's impractical from a user perspective to always leave a laptop open or to keep a screen from going to sleep.
Is there a way that the Launchpad could require a restart before actually installing, and use the password from the user logging in, so Launchpad could be system-wide? Then install and connect in the background, or pop up the TUI at login?
It seems like a fine balance for UX, but I really think the point is people wanting to earn nanos, and so far, as a user not earning any, I have been a bit frustrated about missing out on those sweet, sweet nanos!
Anyone ever get this error when trying to add services with sudo on Linux or Mac?
Error:
0: missing field `user_mode` at line 1 column 4051
Location:
/Users/runner/work/safe_network/safe_network/sn_node_manager/src/cmd/node.rs:86
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
I can get services added easily without sudo but that is where my ultimate problem lies. My nodes won’t earn nanos unless I leave my laptop open and screen on. Hence, I need to get services added using sudo.
Yes it’s very important to take care of yourself. Focus on relaxation, nutrition, and silliness. I hope there is someone around you can hug. In the “watch this video” section I will put up a video for you to watch.
What if the user logging in isn't an administrator on the computer, because they've set things up properly? This happens when you set up your Mac with the first account being a 'superuser' that can create other accounts, do updates, install software, change network settings etc., and then you create another account for day-to-day use. That's the proper way to do it. It annoys me that Apple doesn't either enforce this or make it obvious that this is the proper way of doing it.
I’ll have a look tonight and see if as an unprivileged user a safenode keeps running when the screen is locked. I’m sure it did when I had one running on a Mac before.
I'm sure you'll have to leave the lid up, because when the lid is closed sleep is triggered, and then only processes that are allowed to stay running and check things remain active, and only occasionally. If you attach another monitor you can have a Mac run with the lid closed, but that's maybe not what you want.
I didn’t, but have caught up.
They are interesting arguments, though I believe the factors I mentioned counter the critical argument that:
My view is that this percentage will stabilise over the years due to what I highlighted in my previous answer (falling storage cost per GB, growing size of new data over time).
So it remains to be seen where this stabilisation point of the old-data to new-data percentage will be, but I expect the network will approach it as it approaches full adoption. If that point renders the network too expensive for many use cases, then options need to be considered to ensure sustainability / competitiveness.
I like the suggestion of adding further revenue streams for nodes. Small transaction fees may be a good way to do this, and would compensate nodes for the DAG overhead, which is a cost to nodes that is proportional to transaction volume.
In the future there may be compute and other things nodes can provide to add value and earn rewards.
The 'vicious cycle' threat seems real in a shrinking network, so that will need consideration, though it's probably not something that will happen in the short term (that said, as there's been some discussion about: if all beta data is carried over to launch, and rewards at launch turn out to be smaller than beta rewards, the node count could fall significantly, testing this hypothesis very early on. If nodes allocate 5 GB on average and only 1 GB was used in beta, this likely won't be a problem… but if near 5 GB per node is used, it could well be).
Sorry you’re not well, and hope you can get some quality rest.
Obviously no predictions can be perfect, and we have to wait for the evidence in practice to know for sure, but I believe the pay-once, store “forever” economic model may be fine because of two potential factors:
- Appreciation of the token price over time vs fiat may mean that the price originally paid by the client is actually still fair, and
- Technological improvement over time may make the cost of storing past data marginal.
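As a rough, entirely illustrative way to see how those two factors compound: if the token appreciated, say, 15% a year against fiat while fiat storage costs fell ~37% a year, the purchasing power of the original payment, measured in storage, would grow by roughly 1.8x per year.
echo "scale=2; 1.15 / 0.63" | bc    # ~1.82x per year, under these made-up example rates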
But this prediction relies on the assumption that storage on the network will be vast and cheap. This assumption may break if the technical planning for the network does not take that into account now. For instance, the current 2 GB node size is fine for testing, but it needs to be increased to a just-right size that still promotes decentralization and data permanence while ensuring that bandwidth, networking, and power requirements do not centralize the network into the hands of those with the means and knowledge to handle those issues. Not to mention that the available storage on the network will be a lot smaller than it could be, as much of the spare storage in private homes cannot be made available to the network due to router limitations.
So a higher node size will have to be considered. Some analysis should be done to approximate the optimal size given average node-loss frequency, current bandwidth, and retail router tech. Maybe 50-100 GB? Then the average home, with a router limited to about 20 nodes, could provide ca. 1-2 TB to the network.
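That last figure is just the straight multiplication (assuming the 20-nodes-per-home-router limit above):
for size in 50 100; do echo "$((20 * size)) GB per home at ${size} GB per node"; done
# 1000 GB per home at 50 GB per node, 2000 GB per home at 100 GB per node, i.e. ca. 1-2 TB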
On the economic side specifically, my current concerns are twofold:
- Paying for a chunk even though the upload failed is very unfair to clients, can be taken advantage of by bad nodes, and people will not like it.
- The 15% "royalty" is an unnecessary price increase on clients (can it even be enforced? Some nodes could drop it to offer an even lower price, creating unfair competition with standard nodes that don't modify the price).
You can't set a Mac to stay running with the lid closed? That would make the Mac even more stupid than I think it is.
This is good news and gives me some chance of being able to upload. Thanks. I’ve run out of nanos again though and only about half way through the first batch (20 chunks of about 50 in total).
It doesn't really explain why so much is taken from my wallet, though. Those prices look less unreasonable, but I've paid much more than that if I look at how many chunks have gone up and how much my balance has come down. I think around 1,000 per chunk.
So maybe fees are being lost when uploads can’t go through in one go? This plus the wide range in price (mine all ~400 while riddim uploaded eight chunks, almost all for ~10) also seems to be a real issue.
Thanks for looking at this. I will try again when I have some nanos. Looks like I’ve spent about 8,000 to upload 8 chunks so far but I am trying to upload about 50.
Have you observed the nodes stop running when the screen switches off? That seems wrong, and we should investigate. However, I think if you wanted to run nodes all the time on a laptop, you would need to adjust the power settings to make sure it doesn’t power off. Screen off should be fine, but obviously if the power is off, the nodes are gonna stop.
The TUI might work system-wide if you configure your user account to use passwordless sudo, but that's not a good idea for your regular user account. The password prompt when using sudo is part of the operating system design and we can't really subvert that. In the longer term we might be able to figure out how to use the TUI with the password prompt.
Also, btw, there were many users who were asking us not to start the nodes automatically upon the OS starting. So we went with that as a default. It's also the default for the underlying node manager now. You need to use a flag on the add command to opt in to that. So for now, you need to start your nodes up yourself when you log in to your OS.
I think you may possibly have an old registry file in the system-wide location that is not compatible with a newer version of the node manager.
What you probably want to do here is start from a completely clean slate, then switch over to system-wide.
Try running:
safenode-manager reset
sudo safenode-manager reset
That will hopefully clear out both the user-mode and system-wide services. If you still get that error, we’ll need to manually clean the directories I think. Let me know how it goes. I will be back to work full time as of tomorrow, so we will get you up and running with something, don’t worry.
@qi_ma We, or I should say @riddim, has been doing some experiments, and I have a theory about the cost issue.
He's tried uploading the same files with the same result. These are three text files (<1k) and two images (~100k):
-rw-rw-r-- 1 mrh mrh 108726 Apr 24 11:58 another-ant.png
-rw-rw-r-- 1 mrh mrh 65 Apr 24 11:58 index-address.txt
-rw-rw-r-- 1 mrh mrh 126140 Apr 24 11:58 index-ants.png
-rw-rw-r-- 1 mrh mrh 971 Jun 11 18:01 index.html
-rw-rw-r-- 1 mrh mrh 386 Apr 24 11:58 more-ants.html
So all are going to be split into three small chunks, which makes a total of 5x3 = 15 chunks. Leaving aside why it ends up trying to upload 20 chunks, @riddim then tried adding a bit to the end of each text file to see if that makes a difference.
He found this gave much smaller estimates and he did some other experiments. Both suggest to me that when small files are being chunked there’s a high chance that some of the chunks are ending up on a small subset of nodes.
One of his tests suggests that chunking fairly similar small files ends up creating chunks that are already uploaded, which fits with the idea that small files are generating similar chunks:
The following gave estimates of 30, 40, 10, 30, 30, 10, 51, 33…
for i in {1..10}; do echo "somethingnotempty" >>tst; safe files estimate tst | grep Transfer; done
This is odd, because it appears some runs only had to pay for a single chunk, which in turn means that the other chunks were already on the network. That suggests that when chunking small files, the XOR addresses of the chunks might be similar and end up on the same nodes.
Another theory is that store-cost is somehow factoring in the size of a chunk. That seems unlikely but I haven’t looked at the code yet.
So just some more data and things to think about. When I get some more nanos I’m going to see how adding to the size of the HTML files affects the cost, first of estimate and then to upload.
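For what it's worth, the kind of per-file check I have in mind is just the estimate command from above, run against each of the files listed earlier, e.g.:
for f in another-ant.png index-address.txt index-ants.png index.html more-ants.html; do
  echo "== $f"
  safe files estimate "$f" | grep Transfer    # same Transfer filter as in the earlier test
done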
Finally, there's the question I set aside of why uploading 5 small files appears to result in 20 chunks. That could be my script, so I'll look into that.
I did both of those to be sure so that doesn’t seem to be it. So a bit at a loss. I’ll keep at it.
It's not powering off, just going to sleep when the laptop is closed. I don't end a session or anything. My only "evidence" (which isn't really proof of anything specific) is that I've run several nodes since the start of beta, and again after a full reset, with sudo and without, since the nano situation has gotten better over the last couple of days, and I get zilch. I'm seeing other comparable setups getting nanos. I just have a hard time believing all my nodes are that unlucky.
To my understanding, Jim is also a Mac user having the same problem. I'd definitely discuss it with him.