We had an excellent session on Discord last night in which our newest team member, QA (quality assurance) specialist Victor (@vphongph), introduced himself.
“My job is about breaking things so it’s fun!” he said, which was a little worrying. Then he clarified: “In my case QA is about ensuring the product works and won’t fail.” Phew.
We answered your submitted questions (plus some extras, since @rusty.spork clicked the start button an hour early - that’s a lot of dead air to fill. Now we know what @JimCollinson eats for lunch). You’ll find a summary of the non-lunch-related Q&A below.
Earlier today there was a Spaces session on X in which @bux chatted with a number of decentralisation luminaries about the intersection of decentralised storage and AI.
By the way, next week there will be no dev update as some folks are away, but you can expect some announcements about new releases.
Discord Stages Q&A
Here is a summary of the Q&A session on Wednesday. Thanks for all your questions.
Can we have an update on the status of uploads/downloads as many people are still having issues?
Issues around upload are still under investigation. We have proposed fixes in place that work in test networks, but we need to ensure these work at the current scale of the network. We are expecting to make a release in the next week to address these. So keep an eye out for the updates to your nodes in the coming week, and help us all by upgrading promptly so we can get rolling!
What is the status of APIs?
Work is ongoing to ensure the Python API has parity with the Rust API. This is mostly done. The Node.js API is not complete yet, but it’s close. We also have some new tools and functionality to help with data types and building on the network, which we are staging for the upcoming release.
When will the developer [alpha] network be live? Will community members be able to add nodes to it? If so, how will payments work for uploads? Will it be Arb or Sepolia?
The developer network will go live on 22nd April, ready for the Impossible Futures POC builds.
This is a MaidSafe-hosted network, but if people would like to contribute resources for free to support developers, that would be lovely. It is not necessary, though, and it will not result in any type of bonus or payment for those providing it. It will be utilising Sepolia.
What is going on with DAVE?
He’s not currently functioning as he needs networking to be changed over. This isn’t a lot of work, just a few rungs down on the priority list. But we’ll be looking to address it within the next fortnight.
There has been lots of talk and debate on emissions. Will we reduce them further or adjust the white paper?
Emissions are an important part of the network’s economic design and its sustainability over the 12-year period and onward from there. Big operators are providing resources for the network, and we don’t want to do anything drastic that might have negative consequences. So no changes are planned at this stage.
Have you thought about setting an emission threshold to payout to something like $10 or even $100?
Short answer is we just don’t need to. The gas fees for doing these emissions are tiny. It’s simple for us, and better for you, so we’ll stick with the way it’s functioning.
It has been discussed that older nodes are potentially earning more ANT than new ones. Can we touch on that?
In many ways this is a desirable function of the network. We want the network to reward participants for acting as good citizens. We no longer have node age explicitly, but many of the principles remain the same with Kad. Nodes that stick around, and don’t get shunned, are woven into the network more, and can therefore expect on average to earn more reliably. That’s the short and simple answer, but it’s more nuanced than that.
Impossible Futures
So far we’ve had in excess of 30 applications to build on the upcoming alpha network in the Impossible Futures Challenge, which is great news.
Our next milestone is the 22nd of April, and work is underway on our microsite that will support and showcase builders, and let the community and backers evaluate and vote.
To support this venture and promote the network we have launched a new video podcast series where we talk to disruptors in their field. You can see the first one here in which @JimCollinson speaks to Edmund Sutcliffe, a regenerative farmer. We believe the network will be useful across many different sectors, often in quite unexpected ways, and we’re keen to broaden the conversation to as many thinkers and disruptors as we possibly can. Please do try to catch the podcast.
General progress
@anselme and @vphongph have been doing some research into the original Kademlia, libp2p and the current ant-networking to find overlaps, differences and where we can make ours better. They built an experimental bare test client without using ant-networking, only raw libp2p and Kad, and managed to connect to both Autonomi local and production networks, getting closest peers and chunks, and seeing buckets fill up as well as connected nodes. Serious code simplification incoming!
@qi_ma tested his PR 2856, which features an enhanced routing table refresh scheme to detect churning more quickly and address the issues with detecting node versions. Qi says it is “much more swift and accurate on drop out detection.” He helped with the design to handle churn while upscaling and raised a PR with @shu to help measure the effectiveness of various refresh schemes via our ELK dashboard. Qi and @dirvine have also been in conversation with the libp2p team, after identifying an issue with the routing table refresh implementation. We have a workaround for now, but it will be good to have that built into the code by the libp2p team. See also David’s discussion here.
@chriso has been testing @qi_ma’s PR 2856 which introduces a liveness check to routing table refresh. This is designed to balance the dual goals of maintaining an accurate and updated network view while minimising resource overhead - so far to good effect.
He also had a chat with the community about the upcoming testing alpha network for builders. Great to see some of you offering to provide nodes without payment at this time. There’s absolutely no obligation for anyone to do this, obviously, and it’s a credit to the community that you’re willing to help out. We will be contributing a few thousand nodes ourselves.
Chris also provided a workaround on setting logging levels, crude but at least partially effective.
Ermine focused on optimising the antctl status command and clearing up some leftover work in the RPC removal PR.
Lajos worked on the Impossible Futures contracts and started setting up some things for the blockchain integration into the NFT claims frontend.
@mick.vandijke investigated an issue where chunks are unexpectedly big. The reason for the error is that we had assumed chunks produced by the self_encryption crate would never exceed the maximum specified in the MAX_CHUNK_SIZE variable (how foolish, right?). In reality, that max chunk size only limits the raw, uncompressed, unencrypted chunk contents. Encryption using AES with PKCS7 always adds 1 to 16 bytes of padding to the chunk contents, so a chunk that was already at max size could end up at max size + 16 bytes. On top of that, before we encrypt a chunk we try to compress it using Brotli, and in the worst case compression actually makes chunks bigger.
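For the curious, here’s a back-of-the-envelope illustration of both effects. This is a Python sketch, not the actual Rust code: the MAX_CHUNK_SIZE value is illustrative, the PKCS7 arithmetic is done by hand, and zlib stands in for Brotli (any general-purpose compressor adds framing overhead on incompressible input).

```python
import os
import zlib

MAX_CHUNK_SIZE = 1024 * 1024  # illustrative limit, not the crate's actual value
AES_BLOCK = 16                # AES block size in bytes

def pkcs7_padded_len(plain_len: int, block: int = AES_BLOCK) -> int:
    """PKCS7 always pads: when plain_len is a multiple of block,
    a whole extra block is appended, so pad is in 1..=block, never 0."""
    pad = block - (plain_len % block)
    return plain_len + pad

# A chunk already at the raw maximum grows by a full block after padding.
assert pkcs7_padded_len(MAX_CHUNK_SIZE) == MAX_CHUNK_SIZE + 16

# And compressing already-incompressible (e.g. random) data makes it
# bigger, because the compressor still emits headers and checksums.
raw = os.urandom(4096)
compressed = zlib.compress(raw)
assert len(compressed) > len(raw)
```

The padding case is the deterministic one: any plaintext whose length is an exact multiple of the AES block size gains a full 16-byte block, which is exactly how a max-size chunk lands at max size + 16.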
@roland fixed all the bugs in the infrastructure downloader script, a major boon to our testing.
And @shu worked on uploader and downloader dashboard integration, providing a high level summary of: uploader & downloader verifiers; timestamp of last type of unique error per service type; record activity per day/hour; number of services running by service type; successful and non-successful uploader & downloader verifier attempts; and much much more.