Announcement: Latest Release Feb 27, 2025

Can you explain this a bit more? Just to help the wider audience (and me of course :smiley: :smiley: )
So can you clear up any misgivings here please?

replication

  • Nodes replicate data from their closest nodes (i.e. the 5 closest nodes, the close group).
  • To accept replicated data as valid, we check that the data comes from a majority of the close nodes

Quotes

  • We ask the CLOSE nodes + 2 (7 nodes) for a quote (THIS PART SEEMS TO FAIL OFTEN)
  • When the client (who does not involve a relay) receives 5 of these, he selects the cost to pay
  • The client provides the quotes and his decision as proof
  • Then he sends the data to the 5 nodes he got a quote from
  • Those 5 nodes then try to persuade all CLOSE nodes to accept the data (considering the majority of close nodes may not have been involved in the quote); rough sketch of the whole flow below
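
To check I’ve understood the shape of it, here’s a rough sketch of that flow in code. Purely illustrative: the 5/7/5 numbers come from the description above, but every type, function and value is my own guess, not the actual API.

```rust
// Illustrative sketch only: quote -> pay -> store flow as described above.

const CLOSE_GROUP: usize = 5;                 // "the close group of 5"
const QUOTE_FANOUT: usize = CLOSE_GROUP + 2;  // ask CLOSE + 2 = 7 nodes
const QUOTES_NEEDED: usize = 5;               // proceed once 5 quotes arrive

#[derive(Clone, Debug)]
struct Quote {
    node_id: u64,
    cost: u64,
}

/// Client side: given the quotes that actually arrived, pick the cost to pay.
/// Returns the chosen cost plus the quotes kept as proof of the decision.
fn select_cost(mut quotes: Vec<Quote>) -> Option<(u64, Vec<Quote>)> {
    if quotes.len() < QUOTES_NEEDED {
        return None; // the step that "seems to fail often": too few replies
    }
    quotes.truncate(QUOTES_NEEDED); // first 5 replies are enough
    quotes.sort_by_key(|q| q.cost);
    let cost = quotes[QUOTES_NEEDED / 2].cost; // middle quote, not the cheapest
    Some((cost, quotes))
}

fn main() {
    println!("asking {} nodes for quotes...", QUOTE_FANOUT);

    // Pretend 5 of the 7 nodes replied in time with these costs.
    let replies = vec![
        Quote { node_id: 11, cost: 40 },
        Quote { node_id: 12, cost: 35 },
        Quote { node_id: 13, cost: 60 },
        Quote { node_id: 14, cost: 45 },
        Quote { node_id: 15, cost: 50 },
    ];

    match select_cost(replies) {
        Some((cost, quoted)) => {
            let ids: Vec<u64> = quoted.iter().map(|q| q.node_id).collect();
            println!("paying {cost}, then sending the data to nodes {ids:?}");
            // Those nodes then replicate to the rest of the close group, which
            // accepts the data once it comes from a majority of close nodes.
        }
        None => println!("fewer than {} quotes received, retry", QUOTES_NEEDED),
    }
}
```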

Is the above correct as to our current approach?

11 Likes

yes, it is

8 Likes

This has some merit, but ironically it is caused by the ant process itself.

With the previous ant v0.3.7 I ran `ant file cost` (4GB) and the process immediately consumed 120% of CPU, then settled at a steady 100% (Mac M4).

Maybe the client program itself has issues, e.g. memory leaks etc., separate from any design issues.

Is it Apple Silicon native, or does it rely on Rosetta?

5 Likes

This is 100% true. Here is how the update process goes for me with 500 nodes on a machine with 96 cores: for 2 hours it can’t refresh the node registry and it loads the whole machine to 98%, which I’m sure the team knows, because even @Shu starts his thousands of nodes with a custom script.



4 Likes

To be fair, that description is discussing a client-side issue.

2 Likes

That says a lot… we’re running under real-world conditions out here.

5 Likes

One thing I have trouble understanding is: what do we gain by having such things as a pricing curve and paying the middle one of 3 or 5 quotes? This seems a bit complicated to me.

Compared to each node setting their price to whatever and the client then choosing the cheapest?

3 Likes

This has long been my suspicion.

The focus needs to be on home-user uploads, with commodity gear, not data centres. Many folks on the forum have reiterated this since the testnets.

Personally, I wouldn’t be surprised if the core problem relates to QUIC trying to send too much data, as discussed in other threads.

I’m repeating a big upload on a box wired to my MikroTik router to see if it has more luck. Wi-Fi seems stable now and I can see the box is busy doing its thing.

8 Likes

If a specific node sets its fee to zero and continuously receives data, that data will still be replicated and spread across the entire network. To maintain perpetual data retention, it is not feasible to allow such an approach.

4 Likes

So all the uploads done through the rewards period have been via Digital Ocean droplets… and it works perfectly.

2 Likes

Not really. DO was extensively used, but we have also built a lot of tools for that, to emulate UPnP nodes, private nodes (behind NAT that cannot be connected to) and a few others. Most testnets have been a mix.

So given that, it’s still true that the test environment can only partially simulate the real network. We all know what humans will do, and the current network shows that. So good, bad, indifferent etc.

An area I am keen on, though, is treating the network now as a single product. So not having devs work on parts X, Y and Z with closed testing and no visibility of side effects (i.e. fix uploads and kill downloads etc.). It’s not a simple thing, but it has to be done. So we are looking deeper at this part now and we will have some team reconfiguration to allow everyone to work in more cohesive and “product centred” approaches, and away from the targeted and specific way we have had to work to get here. Lots of balls in the air, but we cannot drop any of them.

A target area for us will be uploads, and that will include payments as well. As that happens we will stress test the API (and especially the docs) to confirm all of that is OK. There is another large infrastructure project to improve testing/releases and our ability to run up the testnets and scaling tests that we currently do, and we can see huge improvements there as well.

So a lot is going to happen, and the shift in focus from R&D to product-based continues to bed in. It’s quite a chunk of work, but this will be an interesting time where all the devs get together much more on issues, and it’s no longer a single dev to a single issue. We will improve on that a lot, and that work is already being started, planned and agreed upon. It will be a huge change for the better for everyone.

16 Likes

We (the Old Gits) have long felt that the testing the team does on VMs is skewed and not reflective of real world conditions. Been saying this for a loooong time now.

6 Likes

I believe that is to stop malicious nodes from setting the lowest price and grabbing all the traffic.
Cheapest isn’t always best (except in Lidl).
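
A toy comparison of the two schemes, assuming the client pays the median of the 5 quotes it holds (my own sketch of the idea, not how the network actually implements it):

```rust
// One node quotes 0 hoping to undercut everyone. With "pay the cheapest"
// it wins the traffic at zero cost; with "pay the median of 5" a single
// low-ball quote barely moves the price and cannot capture the uploads.

fn median(mut quotes: Vec<u64>) -> u64 {
    quotes.sort_unstable();
    quotes[quotes.len() / 2] // middle element of an odd-length list
}

fn main() {
    let honest = vec![35, 40, 45, 50, 60];
    let with_attacker = vec![0, 40, 45, 50, 60]; // attacker undercuts with 0

    println!("cheapest-wins, honest quotes: pay {}", honest.iter().min().unwrap());
    println!("cheapest-wins, with attacker: pay {}", with_attacker.iter().min().unwrap());
    println!("median-of-5, honest quotes:   pay {}", median(honest));
    println!("median-of-5, with attacker:   pay {}", median(with_attacker));
}
```

With cheapest-wins the zero quote takes every payment; with median-of-5 the single low-ball barely shifts the price paid.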

5 Likes

That’s actually a good point :+1:

But how many of those malicious nodes would be needed to make a difference? Data cannot be stored anywhere other than the close group anyway.

Below is some more of my thinking.

I mean, we already have an incentive to always try and run more nodes, as that increases the probability of catching a payment. This means that the network is not likely to ever get full in terms of records per node.

If folks shut down nodes, that already means more rewards for others. So it’s self-balancing there.

And since we don’t enforce the actual HD space, it is likely that the margin of HD space available for nodes to use will always be about the same, no matter how much data is stored. Whether the actual HD space gets full or not is a choice of the humans running the nodes.

Based on recent experience, the competition for rewards will be so fierce that the payment is not going to be big compared to the cost of running a node. I don’t think an increase in the price per chunk will change this.

So, I think we will effectively reach the same incentivization no matter which way we do it, with a pricing curve or a free market.

4 Likes

I remember you being happy about the fact that R&D had ended around four years ago. From my perspective, I naturally welcome the shift towards a more product-focused approach and appreciate the team’s efforts.

At the same time, I support you—someone who has been more dedicated than anyone—continuing to be creative and ambitious on top of Autonomi, even pursuing goals that may seem unattainable. :wink:

3 Likes

In my country, we can buy a small mini PC for node operation for $160. It can run 70–100 nodes while consuming only 15W of electricity. Since almost everyone has a 100Mbps internet connection, running a node requires only a small investment and is quite easy. This gives individual users an advantage.
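
A quick back-of-envelope on the electricity side; the $0.15/kWh price is purely my own assumption, your local rate will differ:

```rust
// Rough running-cost estimate for the mini PC example above.
// ASSUMPTION: $0.15 per kWh. The 15 W draw and ~100 nodes come from the post.

fn main() {
    let power_kw = 15.0 / 1000.0;    // 15 W steady draw
    let hours_per_month = 24.0 * 30.0;
    let price_per_kwh = 0.15;        // assumed electricity price, USD

    let kwh = power_kw * hours_per_month; // ~10.8 kWh per month
    let cost = kwh * price_per_kwh;       // ~$1.62 per month
    println!("~{kwh:.1} kWh/month => ~${cost:.2}/month in electricity");
    println!("spread over 100 nodes: ~${:.3}/month per node", cost / 100.0);
}
```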

If you’re okay with using a little more electricity, you can run nodes on your existing computer without any additional cost. Why do you think this is not competitive?

4 Likes

Good to have the skipper back on board.

3 Likes

I think there’s a misunderstanding somewhere. I don’t think I said it is not competitive, and I don’t think it is not competitive.

What made you think so?

1 Like

Oh, I had a misunderstanding. Sorry :slight_smile:

2 Likes

This is always true until we launch, so we did :smiley: :smiley:

10 Likes