New network going up now, testing 4MB chunks. Time to reset your nodes

If the Autonomi Network community considers a starter 'hand-me-down' system running MS Windows with 8GB of RAM and a 256GB SATA SSD to be the minimum high-volume fringe safenode operator case, and that is the type of 'student' hobbyist they want to attract to get the network node count up really fast,

then imo the 64GB safenode size is likely way too large,

given that, having run this configuration, I have noticed as I progressively dropped node count,

first from 10 nodes,

then to 8 nodes at the 2GB size, and

now to 3 nodes running this '32GB' configuration,

that the statistical likelihood of this current 32GB configuration of 3 nodes, consuming 105GB (42% of the system's formatted 246GB of usable space), earning any nanos drops dramatically.

My back-of-envelope calculation says such a small 3-node operator's expected earnings fall short, by a factor of four or more, of the nanos they would need to spend on their own uploads.
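For anyone who wants to sanity-check that claim, here is a minimal back-of-envelope sketch in Python. Every constant in it (the per-node reward rate, the upload price) is an illustrative assumption, not a measured network value; substitute your own observations.

# Back-of-envelope: can a 3-node 'student' operator earn enough nanos
# to fund their own uploads? All constants are ASSUMPTIONS for
# illustration; plug in observed values from your own nodes.
NODES = 3                      # small fringe operator
NANOS_PER_NODE_PER_DAY = 10    # hypothetical average reward rate
DAYS = 30                      # earning window
UPLOAD_GB_WANTED = 5           # data the operator wants to upload
NANOS_PER_GB_UPLOAD = 5_000    # hypothetical upload price

earned = NODES * NANOS_PER_NODE_PER_DAY * DAYS
needed = UPLOAD_GB_WANTED * NANOS_PER_GB_UPLOAD
print(f"earned in {DAYS} days: {earned} nanos")
print(f"needed for {UPLOAD_GB_WANTED} GB of uploads: {needed} nanos")
print(f"shortfall factor: {needed / earned:.1f}x")

With these made-up numbers the operator falls roughly 28x short; the point is the shape of the calculation, not the constants.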

Which means it really is not a viable use-case configuration at this point.

For me this is disappointing for Autonomi, because it means the project is straying from Maidsafe's original promise that you could "use just any old system lying around to run a few nodes".

Thoughts, anyone, on whether the community should support this type of 'fringe' operator use case?

Keep in mind, I must keep 20% of the 246GB formatted space free (49.2GB) so the flash operates properly under peak concurrent use, which for the 'student' node operator would mean running a Brave browser with 4-12 tabs open, maybe a game and email in the background, while the three safenodes are operating at their max too… worst case. :wink:
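For concreteness, the disk budget behind those figures works out as follows (the 20% reserve is the poster's SSD over-provisioning rule of thumb, not a network requirement); a quick Python check:

# Disk budget for the 256GB 'student' machine described above.
FORMATTED_GB = 246.0        # usable space after formatting
RESERVE_FRACTION = 0.20     # keep 20% free for SSD wear levelling

reserve = FORMATTED_GB * RESERVE_FRACTION        # 49.2 GB
node_usage = 105.0                               # 3 nodes, as observed above
remaining = FORMATTED_GB - reserve - node_usage  # left for OS and apps

print(f"reserve: {reserve:.1f} GB")                    # 49.2 GB
print(f"node share: {node_usage / FORMATTED_GB:.1%}")  # 42.7%
print(f"left for everything else: {remaining:.1f} GB") # 91.8 GB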

3 Likes

Here we go again…

I noticed a big increase in network utilisation in the house about 30 mins ago, so I checked the page.

However, the 2 files that were put up for testing still download, and really quickly, so maybe not all is lost.

ubuntu@ip-172-30-1-42:~/downloads$ safe files download "BegBlag.mp3" 304b74b76536e89910262ade48020a4ab2724bdaf353f3b42de625fee2477228
Logging to directory: "/home/ubuntu/.local/share/safe/client/logs/log_2024-10-13_14-54-41"
safe client built with git version: 412b998 / stable / 412b998 / 2024-10-07
Instantiating a SAFE client...
Connecting to the network with 25 peers
🔗 Connected to the Network
Downloading "BegBlag.mp3" from 304b74b76536e89910262ade48020a4ab2724bdaf353f3b42de625fee2477228 with batch-size 16
Saved "BegBlag.mp3" at /home/ubuntu/downloads/BegBlag.mp3
File downloaded in 2 seconds 552 milliseconds
Completed with Ok(()) of execute "Files(Download { file_name: Some(\"BegBlag.mp3\"), file_addr: Some(\"304b74b76536e89910262ade48020a4ab2724bdaf353f3b42de625fee2477228\"), show_holders: false, batch_size: 16, retry_strategy: Quick })"
ubuntu@ip-172-30-1-42:~/downloads$ time safe files download "AnarchyInTheSouthside.mp3" 1c280f04cf0d0321dcc7de2f9893dead87daf26f4f597d366a8d2b142438144a
Logging to directory: "/home/ubuntu/.local/share/safe/client/logs/log_2024-10-13_14-54-48"
safe client built with git version: 412b998 / stable / 412b998 / 2024-10-07
Instantiating a SAFE client...
Connecting to the network with 25 peers
🔗 Connected to the Network
Downloading "AnarchyInTheSouthside.mp3" from 1c280f04cf0d0321dcc7de2f9893dead87daf26f4f597d366a8d2b142438144a with batch-size 16
Saved "AnarchyInTheSouthside.mp3" at /home/ubuntu/downloads/AnarchyInTheSouthside.mp3
File downloaded in 8 seconds 680 milliseconds
Completed with Ok(()) of execute "Files(Download { file_name: Some(\"AnarchyInTheSouthside.mp3\"), file_addr: Some(\"1c280f04cf0d0321dcc7de2f9893dead87daf26f4f597d366a8d2b142438144a\"), show_holders: false, batch_size: 16, retry_strategy: Quick })"

real	0m9.208s
user	0m7.757s
sys	0m3.044s
4 Likes

My concern is that it will end up like BTC, where mining was supposed to be decentralized among common people. It is now heavily centralized.

How do we prevent this from happening with Autonomi?

If we have a mass of centralized providers and they are told to shut down, for whatever reason, it could be the end of the network.

3 Likes

Relevant perhaps is,

“Shoot for the moon. Even if you miss, you’ll land among the stars.”

If we all achieved exactly what we dreamed up, that would be amazing, but that's not often the case.

3 Likes

It's completely different given how specialist Bitcoin mining became, whereas CPUs, RAM, and SSDs are all available anywhere for roughly the same cost.

If an amazing Internet connection becomes a necessity for running nodes it'd be a bit more restricted, but still hugely decentralised vs Bitcoin mining, where you need 1) early access to specialist hardware, 2) very cheap electricity, and 3) regulators who don't mind you doing it, all leading to significant centralisation of mining.

It's a very different situation with Autonomi, but still worth working to make sure as many people as possible can continue to contribute to the network without compromising its performance.

2 Likes

Yeah, I have been crapping on the inability of DPoS projects to actually become dPoS since 2019. In all of those systems, it's the operator or pool with the biggest stake that wins every time.

In essence, the Autonomi Network DAO algorithm, and how it works to reward nodes, is key.

The DAO algo of AN needs to encourage POSITIVE entropy to avoid negative entropy (ending up with a few BIG node operators), the latter being a decrease in the number of possible microstates as the system becomes more rigidly confined (= a few big node operators).

That means the DAO encouraging and embracing the fringe use case of the 'student' node operator running a few nodes on a crappy hand-me-down machine, and NOT penalizing such nodes with shunning for being too slow or too few, so that the algorithm actually pays out a little more to these fringe small node operators, forever, to create a balance against the big nodes.

What I am suggesting is in many ways similar to a secondary voting/handicap system, like the US Electoral College vote.

In order to do that, every node's 'proof of resources' and each operator's node count need to be factored in by the DAO, given the network state's need for more or less capacity,

and small operators' nodes need to carry less weight (little jockey, slower horse, still a chance to win) than a big operator's (little jockey, FAST horse, lots of horses and jockeys, greater statistical chance of winning something),

which improves the small operator's few-node chances of earning nanos/rewards/tokens relative to the big nodes.

Potentially, the smaller nodes could, for example, be registered as 'home' networks, which would be one way of categorizing their XOR-configured address, coupled with the number of nodes the operator is running (while still maintaining privacy):

say anything under 12 nodes, as an example, counts as home-operated nodes.

Given the network state, that is, whether at any given moment the DAO needs more or less capacity,

this type of handicapping of big nodes makes sense,

but only IF the Autonomi Network community and MAIDSAFE really want

a truly distributed, private Layer 1 network to flourish,

used by everyone.
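As a thought experiment, here's a minimal sketch of what such a handicap weighting could look like, in Python. The under-12-node 'home' threshold comes from the post above; the boost factor, the function names, and the idea of weighting rewards this way at all are illustrative assumptions, not anything in the current reward algorithm.

# Hypothetical handicap weighting for reward distribution.
# The 'home' threshold (<12 nodes) comes from the post above; the
# boost factor is an arbitrary illustrative choice.
HOME_NODE_LIMIT = 12   # operators under this count as 'home' operators
HOME_BOOST = 1.5       # assumed reward-weight multiplier for home ops

def reward_weight(operator_node_count: int) -> float:
    """Weight applied to each of an operator's nodes when sharing rewards."""
    if operator_node_count < HOME_NODE_LIMIT:
        return HOME_BOOST
    return 1.0

def share_rewards(pool: float, operators: dict[str, int]) -> dict[str, float]:
    """Split a reward pool across operators by weighted node count."""
    weighted = {op: n * reward_weight(n) for op, n in operators.items()}
    total = sum(weighted.values())
    return {op: pool * w / total for op, w in weighted.items()}

# Example: one big operator with 1000 nodes vs. a 3-node student.
print(share_rewards(1000.0, {"big": 1000, "student": 3}))
# The student still earns little in absolute terms, but roughly 1.5x
# more per node than without the handicap.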

Imo, based on my own testing of that fringe 'student' use case of a small operator running a few nodes over the past 5 months,

in the Autonomi Network DAO algorithm's current form,

Autonomi will trend toward the negative entropic state.

That is, a network state that becomes more rigid, really quickly, with a few big operator node pools running thousands of safenodes to earn almost all the rewards, each big operator trying to out-duel the others with higher node counts and better responses during peak reward periods (the network getting full) to gain more rewards.

3 Likes

@DavidMc0

If it is profitable to run nodes in data centers and outperform home nodes, we will have conglomerates buying up servers and bandwidth into the future, which will centralize the network.

@rreive

I fully agree. I'll tell you right now, people are not going to run nodes from home if they have little or no chance of earning. We are already seeing those with access to vast resources vacuuming up the majority of tokens. It goes against S.A.F.E. It is worrying and disappointing.

2 Likes

Where I live there will never be an 'amazing Internet connection':

“If an amazing Internet connection becomes a necessity for running nodes it’d be a bit more restricted”

Which is also the case in most of the developing world.

Having a meritorious system of rewards for different types of contributions is a good design goal, which is quite different from a meritocracy based on one scoring system (like academia).

Autonomi is the former, but has been tending toward the latter these last few months of the beta.

Which gets me thinking: maybe it's time for a pause after launch to have the Maidsafe devs install a feedback loop (fixing what Cosmos attempted) to re-engage the Autonomi Network community, so the community can better guide and help the Maidsafe team become even more effective?

A meritorious collective of many different contributor types providing feedback in a regular, timely fashion is much better than a pluralistic programming and node-operator mob 'scatter-gunning' the current forum constantly, non? :wink:

Otherwise, I fear the current SOTU is going to burn a lot of people out quickly.

2 Likes

Why doesn’t launchpad prune and restart heavily shunned nodes?

3 Likes

Good idea. It would be better to put that action in the hands of the node operator through the node launchpad, as you suggest, provided they can adjust the DAO algo to support it, and that the local action is fed into the network with randomized time-interval delays between node launch confirmations.

2 Likes

Because it's not anms (Aatonnomicc's Node Manager Script).

That should be a good starting point for pruning and restarting heavily-shunned nodes.
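In the same spirit, here's a rough sketch of the loop such a script might implement. The metrics port, the shunned_count metric name, and the systemctl restart are all stand-in assumptions for illustration; check what anms and your node version actually expose before relying on any of it.

# Sketch: restart nodes whose shun count exceeds a threshold.
# ASSUMPTIONS: each node serves Prometheus-style text metrics on
# localhost, including a hypothetical counter named 'shunned_count';
# 'systemctl restart' stands in for the real node-manager command.
import re
import subprocess
import urllib.request

SHUN_THRESHOLD = 50  # arbitrary illustrative cut-off

def shun_count(metrics_port: int) -> int:
    url = f"http://127.0.0.1:{metrics_port}/metrics"
    text = urllib.request.urlopen(url, timeout=5).read().decode()
    m = re.search(r"^shunned_count\s+(\d+)", text, re.MULTILINE)
    return int(m.group(1)) if m else 0

for service, port in {"safenode1": 13001, "safenode2": 13002}.items():
    if shun_count(port) > SHUN_THRESHOLD:
        # Prune the node's data dir first if you want a fresh start.
        subprocess.run(["systemctl", "restart", service], check=True)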

3 Likes

I’m going to take a look at this. Thank you for the heads up.

2 Likes

Do shunned nodes eventually stop communicating with the network due to their shunned status?

1 Like

This is what the traffic on the router port that the Rpi4 is on looked like.

Green line is In on the Router so Out on the Pi.
Blue line is Out on the Router so In on the Pi.

It was 5 nodes to start with. The 3 peaks around 1000 to 1200 today were me starting another 3 nodes.

So the 2 periods of silliness are clearly visible.

I thought I would be fine starting the other 3, but then things went bad again and the 80/20 ADSL was constantly at its maximum upload of 20Mb/s. So while I got away with it while things were stable, if it were a working day I'd have to kill some nodes to do meetings.

1 Like

True, but hundreds of millions of people in many countries with good Internet using proprietary hardware still lends itself to far greater decentralisation than the requirements for Bitcoin mining do… but I very much hope 'from home' operations anywhere with half-decent Internet will be able to play a part and be rewarded.

This is key. It doesn’t really matter if datacentres outperform home nodes, as long as home nodes still get a share of the earnings in line with their contribution.

I guess it's best for decentralisation if storage cost becomes the limiting factor cost-wise in terms of earning… if it's bandwidth, it may well give datacentres a bigger advantage.

6 Likes

This is the target for sure.

1 Like

As a data point for the devs.

Background: I woke to see the network had collapsed, with around half my nodes having no routing table peers and reporting the network as around 1 to 5 nodes in size (information from the node's own /metrics endpoint).

It seems these nodes got isolated from the network because too many of their routing table peers died and each node came to consider too many of its neighbours bad. The RT peers 'dying' could of course be them seeing my node as bad and refusing to talk. But considering the network again shrank to around 1/10th of its size, it's also reasonable to think some nodes lost all their RT peers in the collapse.

The report is more about how each of those nodes was incapable of recovering from that situation, not even by going back to the initial contacts. I am assuming the initial contacts are Maidsafe's DO nodes, which should still have been fine. Of course, it could be that they thought my node was bad too and shunned it.
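For anyone who wants to check their own nodes for this condition, a quick probe of the /metrics endpoint mentioned above is enough. The port and the exact metric names vary by version, so treat both as assumptions and eyeball the raw output first:

# Dump peer/routing-table related lines from a node's metrics endpoint.
# Assumes a Prometheus-style text endpoint on this port; adjust the
# port and the keyword filter for your own setup.
import urllib.request

METRICS_URL = "http://127.0.0.1:13001/metrics"  # assumed port

text = urllib.request.urlopen(METRICS_URL, timeout=5).read().decode()
for line in text.splitlines():
    if not line.startswith("#") and ("peer" in line or "routing" in line):
        print(line)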

3 Likes

Okay… aside from the strange effect that I see 6 chunks downloaded for a 6MB mp3 file when I download it…

here’s a repeated download of this 6MB file:

safe files download "Patosh-RoughNight.mp3" 157632cf709643b9c0962632b838a7dd4cfb74b73a5af6dbaaedb81bb32042e3

Everything between 5s and 19s is possible… always at a speed of 3MB/s… no matter how long… how does that make any sense?! (And it's still a 6MB file… it should be 2.[something] seconds at 3MB/s… 5s means 15MB transferred… and that was the best case, which I only saw once and didn't manage to screenshot in time…) Is that connected to communication issues happening during those longish TXs of large chunks, and re-transmissions(?)… at 19s it's 57MB transferred for a 6MB file…
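To make that arithmetic explicit: at an observed steady 3MB/s, the wall-clock times imply far more bytes moved than the file contains. A quick check of the implied overhead factor:

# Implied transfer overhead for a 6MB file at an observed 3MB/s.
FILE_MB = 6.0
RATE_MB_S = 3.0

for elapsed_s in (2.0, 5.0, 19.0):
    transferred = RATE_MB_S * elapsed_s
    print(f"{elapsed_s:>4}s -> {transferred:>5.1f}MB moved, "
          f"{transferred / FILE_MB:.1f}x the file size")
# 2s is the ideal case (~6MB); 5s implies 15MB (2.5x); 19s implies
# 57MB (9.5x), consistent with heavy re-transmission of chunks.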

6 Likes