Update 27th March, 2025

We had an excellent session on Discord last night in which our newest team member, QA (quality assurance) specialist Victor (@vphongph), introduced himself.

“My job is about breaking things so it’s fun!” he said, which was a little worrying. Then he clarified: “In my case QA is about ensuring the product works and won’t fail.” Phew.

We answered your submitted questions (plus some extras since @rusty.spork clicked the start button an hour early - that’s a lot of dead air to fill :sweat_smile:. Now we know what @JimCollinson eats for lunch :chipmunk:). You’ll find a summary of the non-lunch-related Q&A below.

Earlier today there was a Spaces session on X in which @bux chatted with a number of decentralisation luminaries about the intersection of decentralised storage and AI.

By the way, next week there will be no dev update as some folks are away, but you can expect some announcements about new releases.

Discord Stages Q&A

Here is a summary of the Q&A session on Wednesday. Thanks for all your questions.

Can we have an update on the status of uploads/downloads as many people are still having issues?

Issues around upload are still under investigation. We have proposed fixes in place that work in test networks, but we need to ensure these work at the current scale of the network. We are expecting to make a release in the next week to address these. So keep an eye out for the updates to your nodes in the coming week, and help us all by upgrading promptly so we can get rolling!

What is the status of APIs?

Work is ongoing to ensure the Python API has parity with the Rust API. This is mostly done. The Node.js API is not complete yet, but it's close. We also have some new tools and functionality to help with data types and building on the network, which we are staging for the upcoming release.

When will the developer [alpha] network be live? Will community members be able to add nodes to it? If so, how will payments work for uploads? Will it be Arb or Sepolia?

The developer network will go live ready for the Impossible Futures POC builds on 22nd April.

This is a MaidSafe hosted network, but if people would like to contribute resources for free to support developers, that would be lovely, although it is not necessary, and it will not result in any type of bonus or payment for those providing it. It will be utilising Sepolia.

What is going on with DAVE?

He’s not currently functioning as he needs networking to be changed over. This isn’t a lot of work, just a few rungs down on the priority list. But we’ll be looking to address it within the next fortnight.

There has been lots of talk and debate on emissions. Will we reduce them further or adjust the white paper?

Emissions are an important part of the network's economic design and its sustainability over the 12-year period and onward from there. Big operators are providing resources for the network. We don't want to do anything drastic which might have negative consequences. So no changes planned at this stage.

Have you thought about setting an emission threshold to payout to something like $10 or even $100?

Short answer is we just don’t need to. The gas fees for doing these emissions are tiny. It’s simple for us, and better for you, so we’ll stick with the way it’s functioning.

It has been discussed that older nodes are potentially earning more ANT than new ones - can we touch on that?

In many ways this is a desirable function of the network. We want the network to reward participants for acting as good citizens. We no longer have node age explicitly, but many of the principles remain the same with Kad. Nodes that stick around, and don’t get shunned, are woven into the network more, and can therefore expect on average to be earning more reliably. That’s the short and simple answer, but it’s more nuanced than that.

Impossible Futures

So far we’ve had in excess of 30 applications to build on the upcoming alpha network in the Impossible Futures Challenge, which is great news.

Our next milestone is the 22nd of April, and work is underway on our microsite that will support and showcase builders, and let community and backers evaluate and vote.

To support this venture and promote the network we have launched a new video podcast series where we talk to disruptors in their field. You can see the first one here in which @JimCollinson speaks to Edmund Sutcliffe, a regenerative farmer. We believe the network will be useful across many different sectors, often in quite unexpected ways, and we’re keen to broaden the conversation to as many thinkers and disruptors as we possibly can. Please do try to catch the podcast.

General progress

@anselme and @vphongph have been doing some research into the original Kademlia, libp2p and the current ant-networking to find overlaps, differences and where we can make ours better. They built an experimental bare libp2p test client without using ant-networking, only using raw libp2p and Kad, and managed to get connected to both Autonomi local and production networks, getting closest peers and chunks, seeing buckets fill up, as well as connected nodes. Serious code simplification incoming! :scissors:
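For readers unfamiliar with Kademlia, the closest-peer lookups the test client performed are driven by XOR distance between node and key IDs. The following is a minimal, self-contained sketch of that selection rule - illustrative only, not ant-networking or libp2p code, and real Kademlia IDs are 256-bit rather than the 4-byte stand-ins used here:

```rust
/// XOR distance between two IDs, the metric Kademlia routing is built on.
/// (Illustrative 4-byte IDs; real implementations use 256-bit keys.)
fn xor_distance(a: &[u8; 4], b: &[u8; 4]) -> u32 {
    u32::from_be_bytes(*a) ^ u32::from_be_bytes(*b)
}

/// Return the k peers closest to `target` in XOR-distance order,
/// as a "get closest peers" query would.
fn closest_peers(target: [u8; 4], mut peers: Vec<[u8; 4]>, k: usize) -> Vec<[u8; 4]> {
    peers.sort_by_key(|p| xor_distance(p, &target));
    peers.truncate(k);
    peers
}
```

The same metric decides which k-bucket a peer lands in, which is why "seeing buckets fill up" is a good health signal for the test client.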

@qi_ma tested his PR 2856, which features an enhanced routing table refresh scheme to detect churning more quickly and address the issues with detecting node versions. Qi says it is "much more swift and accurate on drop-out detection." He helped with the design to handle churn while upscaling, and raised a PR with @shu to help measure the effectiveness of various refresh schemes via our ELK dashboard. Qi and @dirvine have also been in conversation with the libp2p team, after identifying an issue with the routing table refresh implementation. We have a workaround for now, but it will be good to have that built into the code by the libp2p team. See also David's discussion here.

@chriso has been testing @qi_ma’s PR 2856 which introduces a liveness check to routing table refresh. This is designed to balance the dual goals of maintaining an accurate and updated network view while minimising resource overhead - so far to good effect.
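The core idea of a liveness check during refresh can be sketched as follows. This is a hypothetical illustration of the general technique, not PR 2856's actual code; `Peer` and the `probe` closure are stand-ins for real ant-networking types and network round-trips:

```rust
// Hypothetical illustration: a routing-table refresh pass that probes each
// known peer and evicts the ones that don't respond, so churn is detected at
// refresh time rather than on the next failed request.
struct Peer {
    id: u32,
    alive: bool, // in reality this would be learned by probing, not stored
}

/// One refresh pass: keep only peers that answer the probe.
/// Returns the number of peers dropped, so churn can be measured per pass.
fn refresh_with_liveness(table: &mut Vec<Peer>, probe: impl Fn(&Peer) -> bool) -> usize {
    let before = table.len();
    table.retain(|p| probe(p));
    before - table.len()
}
```

The resource trade-off mentioned above comes from how often this pass runs and how many peers it probes each time: probing everything every pass gives the freshest view at the highest cost.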

He also had a chat with the community about the upcoming testing alpha network for builders. Great to see some of you offering to provide nodes without payment at this time :folded_hands:. There’s absolutely no obligation for anyone to do this, obviously, and it’s a credit to the community that you’re willing to help out. We will be contributing a few thousand nodes ourselves.

Chris also provided a workaround for setting logging levels - crude, but at least partially effective.

Ermine focused on optimising the antctl status command and clearing up some leftover work in the RPC removal PR.

Lajos worked on the Impossible Futures contracts and started setting up some things for the blockchain integration into the NFT claims frontend.

@mick.vandijke investigated an issue where chunks are unexpectedly big. The error arose because we had assumed that chunks produced by the self_encryption crate would never exceed the size specified in the MAX_CHUNK_SIZE variable (how foolish, right?). In reality, that maximum only limits the raw, uncompressed, unencrypted chunks. Before encrypting a chunk we first try to compress it using Brotli, and in the worst case compression actually makes it bigger. Encryption using AES with PKCS7 then always adds 1 to 16 bytes of padding to the chunk contents, so a chunk that was already at the maximum size could end up at max size + 16 bytes.
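The padding arithmetic behind that worst case can be sketched as follows. This is an illustration, not the self_encryption crate's code, and the MAX_CHUNK_SIZE value here is an assumed example, not the crate's actual constant (Brotli's own worst-case expansion is also ignored):

```rust
// PKCS7 always pads the plaintext up to the next full AES block boundary,
// adding between 1 and 16 bytes - even when the input is already block-aligned.
const AES_BLOCK: usize = 16;
const MAX_CHUNK_SIZE: usize = 4 * 1024 * 1024; // assumed example value

/// Size of a payload after PKCS7 padding.
fn pkcs7_padded_len(len: usize) -> usize {
    len + (AES_BLOCK - len % AES_BLOCK)
}
```

Because MAX_CHUNK_SIZE is a multiple of 16, a chunk already at the maximum gets a full extra block of padding, landing at MAX_CHUNK_SIZE + 16.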

@roland fixed all the bugs in the infrastructure downloader script, a major boon to our testing.

And @shu worked on uploader and downloader dashboard integration, providing a high level summary of: uploader & downloader verifiers; timestamp of last type of unique error per service type; record activity per day/hour; number of services running by service type; successful and non-successful uploader & downloader verifier attempts; and much much more.

53 Likes

Thanks so much to the entire Autonomi team for all of your hard work! :flexed_biceps: :flexed_biceps: :flexed_biceps:

23 Likes

Great update, lots going on, we will reach our goal! Thanks all!

20 Likes

Firstly, as always, a genuine thank you to all involved here, whether name-checked or not.
Excellent to see progress on the APIs and the start of the Impossible Futures programme. Hopefully this will get picked up by more disruptors like Ed Sutcliffe in other unexpected sectors as well as the ones we all thought of ages ago.

Now for the bad news. I believe the decision to keep emissions at the present absurdly high level is flawed and may even be FATALLY flawed. This decision ONLY benefits the big whales and makes an absolute mockery of the principle that node-runners are rewarded for providing resources - CPU cycles, bandwidth and STORAGE.
The present whales are NOT providing storage in any meaningful way. In fact I doubt that ANYONE other than those running Launchpad is providing even 1/10th of the mandated 35GB per node.

@neo likened it to a stack of 2D A4 sheets rather than what is required: cubes of storage. Read his post and subsequent discussion here: Emission Update and Kicking off Impossible Future Program - #27 by neo

Long and short, we are pissing away ANTs and ETH gas to whales who are actively hampering the network with their mega millions of useless nodes, while those who follow the rules and allocate 35GB/node as Launchpad mandates are actively discriminated against.
This is in direct conflict with the original aims of the project - to utilise ALREADY EXISTING SPARE CAPACITY - and with what we long-termers signed up for all these years ago.
Right now I and some others feel BETRAYED by this pandering to those with deep pockets.

The refusal to consider reducing emissions is particularly poor and warrants reconsideration and a much more detailed response than the arrogance of

[quote="maidsafe, post:1, topic:41462"]
So no changes planned at this stage.
[/quote]

It is precisely at this stage that changes MUST be made while they still can.
Who benefits from these huge emissions?
Certainly not the small user who can dedicate, say, a 1 or 2 TB HDD to the network. This is exactly 180 degrees from what @dirvine proposed all these years ago.

Likewise the refusal to alter the payout schedule.
Who NEEDS payouts twice a day?
Certainly not the small user. Whales with cashflow considerations love it to bits though.
Only paying out when the credits accrued exceed some threshold - $100 has been mentioned - would save pissing away our resources on ETH for gas. The arrogant dismissal of that suggestion is simply an insult to our intelligence.

I can understand why a large number of nodes (even though they are almost entirely USELESS) may have some benefits for marketing. But those claims stand up to no scrutiny whatsoever. It looks wonderful on the 2nd slide of a fancy presentation. Shame that when we get further into the meat of any presentation, the truth that the network right now is simply UNFIT FOR PURPOSE becomes glaringly apparent, and once again a triumph of marketing over engineering brings a promising project to a premature end and tears all round.

There's more, lots more, but this is already far too long. Let's see what responses this brings…

16 Likes

What he said. :backhand_index_pointing_up:

7 Likes

Knowing the exact cause of price fluctuations would imply you can predict a complex system with precision — and that’s impossible. The community may be convinced that emissions are the reason for the price drop, but that’s merely a hypothesis, not a correlation coefficient of one. Focusing on the unpredictable “price” makes it difficult to have productive conversations.

It’s very encouraging to see the team becoming stronger and more focused. :slight_smile:

8 Likes

Perhaps consider that we are not seeing the full picture.

One hypothesis: while it doesn't make sense from our vantage point, it may make perfect sense if the large players are partners who joined in good faith and are now waiting it out like all of us.

Whatever it is, it makes sense to the team for reasons that we do not see.

14 Likes

Can I get a bit more clarification on this? Namely:

  • Is the PR 2856 with the enhanced routing table refresh scheme part of that release?
  • Is the current latest release supposed to help with uploads at all, even if we get all the nodes upgraded?
  • When the next release comes, will it still take one more release until the next one is the latest eligible for random rewards? If so, could it be hurried a bit?
4 Likes

Some criticize emissions for causing an artificially large network in node quantity. An argument is that there aren’t enough uploads to warrant the node quantity, so it’s a waste.

But some problems can only be exposed, debugged, and solved when the network is at scale. So isn’t it better to inflate the network early, identify and fix those issues, then when uploads come in earnest the network is stable, reliable, and ready for them?

The alternative is to fix scale related issues when the network organically grows to that size, when stability and reliability are most needed.

I think emissions are doing their job, but then again I don’t oversubscribe (underprovision) machines so I don’t see the lavish payouts others apparently see.

5 Likes

Bit shouty but he nailed the concerns and the level of annoyance.

Like Josh, I can only assume there is some information we are not privy to or some next level thinking and analysis has been done.

If it’s just hope, trust, being focussed on other things or thinking things will just work out then it might not end well.

Meanwhile, there is less incentive for people to bring in 100 nodes. Even if they do decide to, it's less feasible with the Launchpad because of its insistence on 35GB per node that won't be used for a long time, and maybe never. And there's apparently no incentive to bring in any fewer nodes, because of the way quotes are used to determine the emissions. Those things are a big fat fail.

6 Likes

Well then, within the limits of “commercial confidentiality”, is it not long past time that the management team shared these reasons with us?

@Dimitar reports that despite his years and years of stellar efforts, his set of node-runners have largely given up. And who can blame them?

The unseen, as yet unknown, "big boys" are notionally about to invest heavily.
Tell me, how does the size of this investment compare to the true cost of the years of effort put in willingly for free by all the old team - not all of whom are still around to see what has become of their work over the years?

Well they will wait for a while, cos as we see above, Joe the Bulgarian Punter and his 1-2TB HDD have said "Sod this for a game of soldiers" and are effing off elsewhere. They aren't downloading, and they certainly are not uploading, cos despite the claims above that

[quote=“maidsafe, post:1, topic:41462”]
The gas fees for doing these emissions are tiny
[/quote]

they sure add up for those spending their own money to buy ETH. That is of course when uploads actually work, which I still think is seldom, cos I haven't tried for a couple of days…
Let's see the "big boys" paying to attempt uploads…

5 Likes

Personal opinion below:

If the amount of tokens distributed by emissions per day isn't changing for the foreseeable future, and network size keeps growing, the expectation that one is profitable or earning a decent amount is just flat out wrong. You will earn some tokens at a certain frequency; it simply may not be to your satisfaction, especially if you don't have a large number of nodes (participation % in the network).

I am personally OKAY in knowing that I contribute a fraction of the 50M nodes out there, and therefore will be receiving only a few tokens… my expectations aren’t sky high here…

No one can control who the whales are, how many there are, and when they choose to participate and when they don't. Let the ones participating earn what they can earn; they are all playing by the same set of rules as is (the code that is enforced and implemented within antnode).

Once the network has more data, if you still can’t provide a healthy node, you are out of the game, simple as that…

It feels like folks are upset because they aren't earning enough… and it's easy to blame the mega whale, yet there is no concrete evidence that there exist 1 or 2 or N mega whales that own a large % of the network… but it's not the whale's fault… or the network's fault that it's at 50M network size… most folks never even expected to hit such a network size at such an early phase post-TGE…

In general, I am all for the upcoming tweaks and changes, with multiple PRs en route dealing with antnode and ant at multiple layers of the stack, and overall simplification and improvements… but as far as I know that's not aimed at stopping these mythical whales…

In my mindset, if you are giving healthy nodes and playing by the rules, you deserve to earn a decent amount of tokens. However, what is healthy now as a node may not be healthy in the future, if you oversubscribe etc.

In essence, say you change the rules so there are fewer whales: well, there will be new types of whales that just outdo your individual # of nodes purely in sheer size, due to their ability to draw upon tons of resources, and now these individuals or groups become the target of being labelled mega whales… it's never-ending.

11 Likes

Sometimes it's necessary to raise your voice a bit. Remaining silent and "polite" all too often means your legitimate concerns are ignored.
OR your PoV does not get the scrutiny it deserves and your mistakes/misconceptions are not corrected…

6 Likes

This is repeatedly the assumption - that this is about money for the community - but it patently isn’t. People are mostly raising concerns based on their perception of what is good or bad for the network, app developers, users and the project.

Some are no doubt primarily concerned about their earnings, but it isn’t the main reason for most who speak up and certainly not what keeps @Southside animated.

21 Likes

If the team wants an inflated number of nodes, why does Launchpad impose a 35 GB cap? I don't use Launchpad, and I could overprovision (but don't, because my understanding up until now is that it's cheating).

7 Likes

All I meant is that everyone is playing by the same rules, yet we are heavily criticizing the emissions strategy here, for many reasons (some personal and some not).

Regardless, the network paper set out an emissions model as far as I know, and it's being carried out… short term it may not seem fair to a certain audience or group… but let it play out, is my personal opinion.

8 Likes

How so, @Shu? The folk against this are campaigning for reduced earnings.

I for one am more excited about dev net than main net.

It's absolutely not about money, and honestly it makes no sense when you consider that the people voicing concern have spent years volunteering endless hours, weeks and months to this project, only to be told that suddenly it's all about money.

9 Likes

Please don’t take one single statement out of my entire post and tie it to the above line.

In general, I have no issues with whales; there will always be the next whale, the next target, the next label that some audience won't feel good about… it doesn't end.

6 Likes

It’s not that one line from you, it appears to be the sentiment from Maidsafe.

2 Likes