Yeah as best I know it is all speculation based on a comment by Jim. But the comment was more something said in passing than concrete plans with a definitive date… as far as I can tell.
Both branches (the 04.07 and the 10.07 release: safe_network/sn_networking/src/record_store.rs at release-2024-07-10 · maidsafe/safe_network · GitHub) have a 4k max record count, i.e. 2GB.
Looks to me like the other issues have been agreed to be more pressing and node size changes are not part of the next release
Not to my knowledge
They have been very cagey about it. And that is to put it lightly. Any queries have been treated as if they didn’t exist. Maybe a way to allow them wiggle room for what increase they choose in the end.
In this case I do not think so. When Jim first ummed and ahhed his way through even mentioning increased node sizing, he slipped out saying 20GB. Now was that just a "what figure to say?" and he plucked one out of …, or was it what he had been reading in Slack channels? Who knows, we certainly don’t, and the response has been cold - absolute zero cold.
To be fair I don’t think they knew for a while either and were trying out various sizes.
The only thing I ever got was a side remark in that they think overall people will run just as many nodes.
AFAIK I only ever said 20GB, but who’s counting LOL
Yes, in theory, if you were trying to run 100 nodes to make 200GB of storage available, then 10 nodes would fulfil that purpose, and in doing so you’d have 1/10 of the connections required in the router. BUT if you then run 100 nodes giving 2TB total, the connections would be the same. They have mentioned, though, that they are also looking at the number of connections and messages needed per node and attempting to streamline (optimise) that further.
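To put rough numbers on that (just a back-of-the-envelope sketch; the per-node connection count below is a made-up placeholder, not a measured figure):

```rust
// Back-of-the-envelope comparison of node count vs. total storage offered.
// CONNS_PER_NODE is purely a placeholder to show the scaling.
fn main() {
    const CONNS_PER_NODE: u64 = 200; // hypothetical figure, illustration only

    let scenarios = [
        ("100 nodes x 2 GB each", 100u64, 2u64),
        ("10 nodes x 20 GB each", 10, 20),
        ("100 nodes x 20 GB each", 100, 20),
    ];

    for (label, nodes, gb_per_node) in scenarios {
        println!(
            "{label}: {} GB offered, ~{} router connections",
            nodes * gb_per_node,
            nodes * CONNS_PER_NODE
        );
    }
}
```

Same 200GB offered either way, but a tenth of the router connections with the bigger nodes; offer 2TB and the connection count is right back where it started.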
It has been mentioned that an increase is definitely in their plans for the upgrade
If it is increased 10 times, then those running 1000 nodes will still need 5000GB available for logs if they run for some time, and they will now need 20TB available for storage.
Imagine how much storage those running 5000, 6000, or 8000 nodes will then need.
The ones running VPSs with only 80GB or 160GB of storage will be curtailed a lot. In the past they could comfortably run 30-40 or 60-80 nodes, but with a 10x node size increase it’ll be only 3-4 or 6-8 nodes (and even that was relying on allocating only 2GB per node for data and logs rather than the 5GB).
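A quick sketch of those numbers, assuming a 10x increase to 20GB of data per node and the same ~5GB per node set aside for logs:

```rust
// How much space large operators would need after a 10x size increase,
// and how many nodes a small VPS could still hold.
fn main() {
    const DATA_GB_PER_NODE: u64 = 20; // 10x the current 2 GB max
    const LOG_GB_PER_NODE: u64 = 5;   // long-running log estimate from above

    for nodes in [1_000u64, 5_000, 8_000] {
        println!(
            "{nodes} nodes: {} TB data + {} TB logs",
            nodes * DATA_GB_PER_NODE / 1_000,
            nodes * LOG_GB_PER_NODE / 1_000
        );
    }

    for vps_gb in [80u64, 160] {
        println!(
            "{vps_gb} GB VPS: {} nodes at 2 GB, {} nodes at 20 GB",
            vps_gb / 2,
            vps_gb / DATA_GB_PER_NODE
        );
    }
}
```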
On a slightly different note, with every wave one user getting 1 referral per week and every wave two user getting 2 referrals per week I suspect we are going to see nothing but posts of “I have a referral” soon.
Is the point not to bring outsiders in? It looks an awful lot like tagging users who are already walking through the door.
Hey welcome to the club, I know you are already here but take this invite
I guess with 1850 referrals available per week, it’s good that new arrivals in the Discord have access to a referral. But the rate of growth of the Discord is nowhere near the number of referrals available, so if people want those bonuses they’ll have to go looking elsewhere.
When wave 2 is full (soonTM) then there will be 3850 referrals available each week.
Small point on node size. We always talk about GB when the limit is the number of records, and the size of records isn’t considered, even though we know it is often much less than the maximum chunk size of 1MB (currently).
We can easily collect stats on actual chunk sizes but AFAIK nobody has been sharing this if they are doing so. I have looked but only briefly so don’t have stats.
Also, real record size may change when real use patterns become established, but one thing we can be sure of, it will be less than the maximum of 1MB. The question is how much less (on average).
At some point we should be keeping an eye on this as it could be quite a difference.
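For anyone who wants to start collecting those stats, here’s a minimal sketch that just walks a node’s record_store and prints min/max/mean record sizes (it assumes the default safenode-manager path shown further down; adjust for your own layout):

```rust
// Minimal sketch: walk one node's record_store and report record size stats.
use std::fs;

fn main() -> std::io::Result<()> {
    let home = std::env::var("HOME").expect("HOME not set");
    // Default safenode-manager layout for node 1; adjust to your setup.
    let dir = format!("{home}/.local/share/safe/node/safenode1/record_store");

    let mut sizes: Vec<u64> = Vec::new();
    for entry in fs::read_dir(&dir)? {
        let entry = entry?;
        if entry.file_type()?.is_file() {
            sizes.push(entry.metadata()?.len());
        }
    }

    if sizes.is_empty() {
        println!("no records found in {dir}");
        return Ok(());
    }

    sizes.sort_unstable();
    let total: u64 = sizes.iter().sum();
    println!(
        "{} records, min {} B, max {} B, mean {} B",
        sizes.len(),
        sizes[0],
        sizes[sizes.len() - 1],
        total / sizes.len() as u64
    );
    Ok(())
}
```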
A point of order on your max record size
Currently it’s 4096 records x 1/2 MB (max).
But yes you are correct, it often will be less than maximum.
I took the liberty of using GB, though, since one must (or it is good practice to) make sure you have space for the maximum. So in fact I was correct in what I said, since it’s the space needed, not the space that will be used in most situations. For those monitoring space usage, we might be able to sneak in extra nodes for a given amount of space.
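So the figure to provision for per node is simply:

```rust
// Max space a single node must be able to provide for records:
// 4096 records x 512 KiB = 2 GiB.
fn main() {
    const MAX_RECORDS: u64 = 4096;
    const MAX_RECORD_BYTES: u64 = 512 * 1024; // 1/2 MB max record size

    let max_bytes = MAX_RECORDS * MAX_RECORD_BYTES;
    println!("max per node: {} GiB", max_bytes / (1024 * 1024 * 1024));
    // For provisioning, divide your free space by this maximum,
    // not by the (smaller) average record size actually observed.
}
```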
That is a very good point! I hadn’t had a look since the first day to check that records were being stored. On the node with the most records there is everything from 322 bytes to the max of 513KB:-
```
ls -lh .local/share/safe/node/safenode1/record_store/
total 12M
-rw-rw-r-- 1 ubuntu ubuntu 49K Jun 13 06:21 06ab755c6ff38839a0f4fda11a929e36b6e7737776e41c9efaaf33a19af682fb
-rw-rw-r-- 1 ubuntu ubuntu 8.0K Jun 13 20:45 0b10ff68ab86e9f9e3539008138f6ac9edd32dbd67b2b685d19be60c410e2602
-rw-rw-r-- 1 ubuntu ubuntu 19K Jun 14 20:34 120d73e9cd2ac03412a65d372944b24c9e17e760696aa579d49c688a7aacee16
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 18 10:22 1a111e1cd509b551f7709bcad5946cb23c2eaf350bcb6644982797ea6eb159f6
-rw-rw-r-- 1 ubuntu ubuntu 12K Jun 12 22:38 1b172e8c4721017f118db35aaa3b56fc954c6e2feed5a8d2084ba2435cc7c6e7
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 14 20:34 1f182b747c8c093af843cde67376c453f328603f1ab04faee063e8161b4a57b1
-rw-rw-r-- 1 ubuntu ubuntu 153K Jun 22 07:14 1fc9fe15d2e32647b273fc6e2967c509d9106026e0b474e2c1801eff9c428de6
-rw-rw-r-- 1 ubuntu ubuntu 15K Jun 28 20:01 22c7566bf7a984db2c4b10b685c44997d25e0189475db16f890e87665908f85f
-rw-rw-r-- 1 ubuntu ubuntu 7.9K Jun 22 07:14 24a735d1c5b7f94df9493ca86410ea0e750198b6dadc1978cd2ef5603e81048b
-rw-rw-r-- 1 ubuntu ubuntu 322 Jun 12 22:38 25f715c70c39294ecd17b7f524107f8b53ee58b75cdac74242dfcbe027ff4c16
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 12 22:38 27439311033ae496c2a3ef67ed769e2e3fe840cf936d121e42ac144969e059f4
-rw-rw-r-- 1 ubuntu ubuntu 78K Jun 12 22:38 2a8ba498413b3fb09542b973d93adbe5d0baa0aec3505416f71bdfa067bb7d39
-rw-rw-r-- 1 ubuntu ubuntu 41K Jun 12 22:38 2b207ac48e0db34f2084f619f1222df4abec1433e9b1fe47e9a3512dad2ff936
-rw-rw-r-- 1 ubuntu ubuntu 338 Jun 12 22:38 348c394d73bc58d82435cd233e146013967c4e5e04506fd92a090d8400e28f76
-rw-rw-r-- 1 ubuntu ubuntu 338 Jun 12 22:38 3ae5c011c9bfdf413ce4db817cbd3d976949ac911abdc9b8ddb05e2248178a63
-rw-rw-r-- 1 ubuntu ubuntu 18K Jun 12 22:38 3b54f3a22bb12baeb1f535d8b0a4e36e4041b39a24fa6ba9ebce7de8b05f124b
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 22 19:08 4a491e6da0f0e9e595d88d107a1813de6b59821f43a9611ad4f80bbd1f775c96
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 22 07:14 4a905b32f25b6d90ccd64cc361f45bdf6cd8faab4a4eb1843894ca44cba79afe
-rw-rw-r-- 1 ubuntu ubuntu 351 Jun 14 20:34 59e2a32b80e212cffae480a8339ed98c45da791f86e8945420ee16fd13b07b33
-rw-rw-r-- 1 ubuntu ubuntu 13K Jun 13 16:00 671de2e6fa322eda9b4c304237aec71a3ba681adc3d5d92635a78dfaaf960479
-rw-rw-r-- 1 ubuntu ubuntu 341 Jun 12 22:38 6be6c7f4b1b3e3e0dfd68a7fd607a343285de753510037e6ae5ac6cf5d551d99
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 22 07:14 6f0cef947a3d6b7c399fa709b0f989c10957710e5a6b4cd592dd932f26ac644a
-rw-rw-r-- 1 ubuntu ubuntu 53K Jun 12 22:38 78cc79b2d3ef262cfc5af45edbdc70e4c6610cd734f2cbe34d4a393d6a41799b
-rw-rw-r-- 1 ubuntu ubuntu 347 Jun 12 22:38 7c239b90f3df9df79b31a57ecc605074b65ac21bd8ddcdb2361dd10471132750
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 28 19:50 8d60075561b594604957b9a20cc0c90c8ae391946fed2d771965c4e292965198
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 28 20:01 8d8310bbadbd91dae9d8f52886e17db8ff81f9ccccf1fb5cc35c14ec079f1ad6
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 22 07:14 92c85d4e319f93385f4f534d52205cbd2338110ff8cfae10dec852a9b1bfee5a
-rw-rw-r-- 1 ubuntu ubuntu 16K Jun 12 22:38 934bba15077ea5e41d4512730d314c2af1de24d3b3ce4cbf4fa59ff1d2c8e2dd
-rw-rw-r-- 1 ubuntu ubuntu 513K Jul 6 08:32 990a0b94acf32cddd59799074cdfbf4c260cde4c90b4b2f7f3fb91a89dad9e8d
-rw-rw-r-- 1 ubuntu ubuntu 30K Jun 12 22:38 a08615723edb0f895f6a7f6e50dab68426adac23cfd6b4c7ef669428307806d1
-rw-rw-r-- 1 ubuntu ubuntu 182K Jun 12 22:38 a3605a7cfd699d0c37c7beb91021f651fed6382f65d30e972653f91974af2e2d
-rw-rw-r-- 1 ubuntu ubuntu 338 Jun 12 22:38 aac62de3767de6f4b30a2fa11064aa6ea9e95cba0201e0ce3eb7d6b218086e76
-rw-rw-r-- 1 ubuntu ubuntu 14K Jun 12 22:38 ac286444501b410e7c64f7d40f54306e9e32512380c3836c558b9d6ab5f6116f
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 14 20:34 acab134b63b8dd60afbafb688db7760ac3bcb3230d8fab77ae42dccc1b28d44a
-rw-rw-r-- 1 ubuntu ubuntu 342 Jun 15 16:12 b0640e4ed30d8a3d3774568e645b8aa7beaf9edf9e44c25fd47822269a4407a2
-rw-rw-r-- 1 ubuntu ubuntu 339 Jun 12 22:38 b19165882d6610aaa9c57922f7eb1a35a233496bb5723338b658427f045cf49f
-rw-rw-r-- 1 ubuntu ubuntu 86K Jun 12 22:38 b27081e4e77b5015c4ac74f91ce386e1ee95036139ccac8ee48a90365e48593c
-rw-rw-r-- 1 ubuntu ubuntu 175K Jun 12 22:38 ba71d810c41314bc4123189071335e59c5ff66012990f79051325eb7fbff007c
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 14 20:34 bd97d7100acea8b7829a303660196d4d6d5ab2a46ffbd2d6caa27db21c164d40
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 16 15:43 c33f2cffbdcdd562d494ebdf5e45ce1a9ad9848e61a68f43230d7d9b931348f8
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 15 16:12 c8a66840ea209494fa5c2d676eb06e91e6f7627c6cb41a099d6b9f54c631ead2
-rw-rw-r-- 1 ubuntu ubuntu 30K Jun 12 22:38 d33d00da3602c44af13d13e77754e3f19c41c67f6514694f8ee1bb5e1315153c
-rw-rw-r-- 1 ubuntu ubuntu 79K Jun 12 22:38 d43338dd98a17301c4e131dc65eac2935c6682ee28e5ba9a9996f3604f18c787
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 26 19:56 db414ae3689166ea332b95d8f1ebe8f865511373c8f3a2a5f9699fae68be2824
-rw-rw-r-- 1 ubuntu ubuntu 513K Jul 1 20:04 dc46bb7b0ff7fd0f4983ccbcdb1f83bb9804a43a9cd274767729dd875ddf995f
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 12 22:38 e01f136e6d06f27585052395d898e3ff5b00873af041e755f9c8199485c04e13
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 16 15:43 e56176a8cb6e33efc6cfeba7e6f2e2b2f34e16d260b40a10c4dd101b51366035
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 22 07:14 e99992ab3ecea30bce936120675b20d40cad19669bf9ba91ba09778f3d54f1e2
-rw-rw-r-- 1 ubuntu ubuntu 513K Jun 22 07:14 e99d7cc15da570bbff571e852d917c90a060d7fd0ee537d1f30d1dee84b861da
```
Your mission, should you choose to accept it…
Not until there is enough data on the network.
Yep. I’m only running 25 nodes, but my VPS has only a fraction of the storage needed. Launchpad wouldn’t let me run more than one! (Even though I had enough free space for two.)
How does this play out?
Is it unreasonable to say 70% of the network is VPS?
(Why does saying that out loud feel like we are failing at the vision, topic for another day)
If the network storage capacity is very much under-provisioned in an effort to run the maximum number of nodes, is this a march to the edge of a cliff with no way back?
I don’t see it like that, and I think I tend to be pessimistic and find flaws easily!
As demand rises, nodes fill, but those VPSs can’t handle as many. So this will adjust rather than hit a cliff: fewer nodes on each VPS, fewer VPSs, higher earnings for the remaining nodes, more VPSs, more people feeling it is worthwhile running nodes on existing equipment, etc.
Fingers crossed, the network is designed so that the numbers will shift until the supply and demand are nearer to balance again.
I hope you are right!
Every vision meets with reality sooner or later, but I don’t think the percentage of VPS is so high.
VPSs are a global market: they are relatively cheap for people with expensive electricity and connections, while for people in other countries they are stupidly expensive.
I did some calculations for myself: with the old HW I have around and my current electricity price, the electricity alone makes it more than 2x as expensive for me to run nodes at home compared to renting a VPS.
If I buy new, energy-efficient HW, I will beat VPS costs after roughly 5 years.
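Roughly how that break-even works out (every number below is a made-up placeholder, not my actual prices; plug in your own):

```rust
// Hypothetical home-vs-VPS break-even sketch; every figure is a placeholder.
fn main() {
    let hw_cost_eur = 400.0;      // new, energy-efficient box (assumed)
    let home_power_w = 15.0;      // average draw of that box (assumed)
    let eur_per_kwh = 0.30;       // local electricity price (assumed)
    let vps_eur_per_month = 10.0; // comparable VPS rent (assumed)

    let home_eur_per_month = home_power_w / 1000.0 * 24.0 * 30.0 * eur_per_kwh;
    let saving_per_month = vps_eur_per_month - home_eur_per_month;
    println!(
        "home electricity: {:.2} EUR/month, break-even after {:.1} years",
        home_eur_per_month,
        hw_cost_eur / saving_per_month / 12.0
    );
}
```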
I need to find somebody who has a house with fiber internet and solar on the roof.
I have a question about “resetting” nodes: I understand that it’s cleaner and easier to just start over with a clean slate when using safenode-manager or launchpad, erasing all data and generating a new peer-id.
However, the term “reset” is very confusing, because: is it really necessary to delete everything if you can just re-use the old node’s data and run it with a new version of safenode? As far as I know, we are not re-starting the network?
Is there some memory about old nodes’ past that we need to get rid of for some reason? If becoming shunned too many times were permanent, I would understand.
Because of my stupidly low 80/20 ADSL, I’ve been stopping 2 to 4 of the 10 nodes I have running for the evening, most evenings, so Netflix can be used. (Next step is to script this.)
It’s the same 4 I’ve been abusing like this. They still get PUTs and GETs. I don’t see any difference in their behaviour. They’ve not earnt anything but then nothing at home has for the last few days.
If, when uploads start in earnest, they don’t earn after a while when others do, I’ll be worried.
We are restarting the network with zero data
Keeping the data doesn’t work even in a running system (at this time) because the node comes up in a new region of xor space and is no longer responsible for chunks it kept. So it will never be asked to give them because no one will look for those chunks from the node. The chunks will stay inactive.
But on top of that at this time the node software AFAIK does not take stock of the chunks it has when starting for the first time. That is still to come.
For security, when a node restarts after a period of not running, it should restart with a new xor ID / peer ID, so the nodes that shunned it are not going to be shunning it. PLUS it should offer up the chunks it held in case the network does not have them (you know, after a massive outage).
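To illustrate the xor-distance point, here’s a toy sketch (not the real safenode code; real addresses are 256-bit, a u64 just keeps it short):

```rust
// Toy illustration of why a node that restarts with a new (random) ID is
// almost never among the closest nodes to the chunks it used to hold.
// Real Kademlia/xor addresses are 256-bit; u64 keeps the sketch short.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

fn main() {
    let chunk_addr: u64 = 0x4a49_1e6d_a0f0_e9e5;  // pretend chunk address
    let old_node_id: u64 = 0x4a49_1e00_0000_0001; // was close to the chunk
    let new_node_id: u64 = 0xd3c2_b1a0_9f8e_7d6c; // random new ID after reset

    println!("old id distance: {:#018x}", xor_distance(chunk_addr, old_node_id));
    println!("new id distance: {:#018x}", xor_distance(chunk_addr, new_node_id));
    // The new distance is astronomically larger, so other nodes will never
    // point anyone at this node for that chunk; the copy it kept sits idle.
}
```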
Thank you for the explanation.
Answers to these questions would really help me understand this better:
Is it so that all the “state” of my node, including its peer-id, is only stored in the memories of the other running nodes in its close group? Or is there any state (other than a bunch of records) stored on my hard drive?
What determines which version of the network I am connecting to when I restart an existing safenode? The bootstrap nodes that a specific safenode version contacts via the bootstrap node list?
No, it is stored in your node.
But the shunning is done by other nodes within the neighbourhood of that xor address. So it is those nodes that keep their own table of shunned node xor addresses, all keeping their own status & state.
So if your node returns with its original xor address, then most nodes in the neighbourhood (about 50 or so nodes) will not ever contact your node. It is basically useless. Your node was shunned when it stopped responding to the other nodes, whether they were asking something of it or doing a health check (“r u ok”).
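Conceptually, each node just keeps its own little local blacklist, something like this (illustrative only, not the actual implementation):

```rust
// Illustrative only: each node keeps its own local view of which peers it
// has decided to shun; nothing is shared or stored network-wide.
use std::collections::HashSet;

#[derive(Default)]
struct LocalShunList {
    shunned: HashSet<String>, // peer IDs this node has stopped talking to
}

impl LocalShunList {
    fn shun(&mut self, peer_id: &str) {
        self.shunned.insert(peer_id.to_string());
    }
    fn is_shunned(&self, peer_id: &str) -> bool {
        self.shunned.contains(peer_id)
    }
}

fn main() {
    let mut list = LocalShunList::default();
    list.shun("unresponsive-peer-id"); // hypothetical peer ID
    assert!(list.is_shunned("unresponsive-peer-id"));
    // A node that comes back with a brand-new peer ID is not in this set,
    // so it starts with a clean slate as far as this node is concerned.
}
```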
And if it resets with a new xor address, then the chunks it held are no longer anywhere near that address, and no node will ever suggest your node as holding them (except in rare random cases). Thus those chunks are useless.
Then I said that one future plan is for a node that resumes after being off for a bit to reset its xor address and then offer up the chunks it has to the nodes close to each chunk’s xor address. This is what would be considered disaster recovery, where a HUGE portion of the network went offline, OR the rare random case where, after a smaller outage, a few chunks were only kept by the 5 nodes in that area.
There are keys and version numbers that determine which network you are in and which version of the protocols you run. Thus if you were in the “evil overlord’s network” and tried to connect to “the good guys’ network”, the good guys’ network would not communicate with you, since the network keys are different.
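Conceptually the gate is something like this (a sketch with made-up field names, not the actual handshake code):

```rust
// Sketch of the idea: peers on a different network (different key) or an
// incompatible protocol version are simply refused. Field names are made up.
#[derive(PartialEq)]
struct NetworkIdentity {
    network_key: [u8; 32], // derived from the network's root key (assumed)
    protocol_version: u32,
}

fn will_communicate(ours: &NetworkIdentity, theirs: &NetworkIdentity) -> bool {
    ours.network_key == theirs.network_key
        && ours.protocol_version == theirs.protocol_version
}

fn main() {
    let good_guys = NetworkIdentity { network_key: [1; 32], protocol_version: 7 };
    let evil_overlord = NetworkIdentity { network_key: [2; 32], protocol_version: 7 };

    // Same protocol version, but the keys differ, so no conversation happens.
    assert!(!will_communicate(&good_guys, &evil_overlord));
    println!("evil overlord's network is ignored");
}
```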