What you need to know for the update

The time has come, everyone: the next phase of the Beta Rewards program is here! Here’s everything you need to know, including:

Steps to take now
What we’re testing this week
How rewards work in Phase Two

For Node Launchpad Users:
Download the latest version of the Node Launchpad (v0.3.17).
Open it, and press O to access the Options screen.
Then, press Ctrl+R, type reset, and hit Enter. This will stop your nodes, reset them, and upgrade their software with the latest network updates.
Reboot your device, and reopen the Node Launchpad.
If you haven’t added your Discord Username yet, press Ctrl+B to do so.
Press Ctrl+G to restart your nodes. Keep in mind that nodes are larger this time and will require more of your computer’s resources. We recommend starting with one node and gradually adding more; you can spot-check per-node usage with the sketch after these steps.
Once your nodes are running, use the /rank command in the Autonomi Discord to check your progress.
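Since resource headroom is the main constraint here, a quick way to spot-check what each node is consuming (a rough Linux sketch; it assumes the node processes are named safenode, which may differ on your setup):

$ ps -C safenode -o pid,rss,pcpu,args --sort=-rss

(RSS is resident memory in KB per node process; add nodes only while memory and CPU stay comfortably within your machine’s limits)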

For CLI Tool Users:
If you’re using the CLI tool (Node Manager), you’ll need to reset your nodes as well. The full reset command guide is available on our documentation site.
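For reference, the reset flow with the Node Manager generally looks something like the sketch below. Treat this as an illustration only: the exact subcommand names and flags here are my assumption, so follow the documentation site for the authoritative steps.

$ safenode-manager stop            # stop all running nodes
$ safenode-manager reset           # remove old node data and services
$ safenode-manager add --count 1   # start small: re-add a single node
$ safenode-manager start

(scale up by adding more nodes once the first one is stable)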

Please note that this time, each node requires 32 GB of storage, plus the usual 3 GB for logs, so ensure your device has enough resources before starting multiple nodes.
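As a rough capacity check, budget about 35 GB per node (32 GB of data plus 3 GB of logs) against your free disk space:

$ df -BG --output=avail .   # free space on this volume, in GB (GNU df)
$ echo $(( 500 / 35 ))      # e.g. 500 GB free supports at most 14 full nodes
14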

Phase Two Rewards
Phase Two has a simpler, more streamlined rewards structure:

Open to all participants.
A single, weekly leaderboard (no more waves).
Rewards are allocated based on how many nanos your nodes earn.
No more boosts, invites, or referrals—it’s all about your node earnings.
500,000 tokens are up for grabs over the next four weeks.

What We’re Testing
Over the next two weeks, we’re focused on optimizing the Node and Chunk sizes of the Network. The goal is to maximize capacity while maintaining performance. Here’s how it works:

Week 1: We’ll be testing larger node sizes with the same chunk size you’re familiar with. You’ll notice a significant increase in storage requirements, from 2 GB to 32 GB, plus 3 GB for logs. Because of this, you might run fewer nodes due to higher demands on bandwidth and memory, so start small and scale up gradually (see the quick calculation after this list).
Week 2: After analyzing the results from Week 1 and our extensive ongoing lab tests, we’ll implement the presumed optimal node and chunk sizes in Week 2’s public test, proving the solution at scale.
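To put the Week 1 numbers in perspective, the per-node storage ceiling grows 16-fold while chunks stay the same size:

$ echo $(( 32 / 2 ))   # 32 GB per node vs. the previous 2 GB
16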

Following these tests, we’ll begin integrating the ERC20 token into the network, giving you a chance to get hands-on with the new EVM data payment system.

Good luck everyone, and have fun building the unstoppable web!

12 Likes

What a day, you guys rock!

$ git log --oneline --since="1 day ago" --all | wc -l
46

(that means 46 commits to the repository in the last 24 hours)

5 Likes

At this point this does not seem to be the case. I’m not 100% sure, but it seems that the CPU and memory usage per node is no greater than with the previous testnet.

I think the higher usage of resources applies only to HD space, or kicks in later when nodes are fuller.

Still, at the moment the Launchpad does not allow me to start more than 8 nodes, and even that uses only 280 GB of the 434 GB available. When I try to start more, the screen just hangs on “Starting nodes”, but no more actually get started. CPU, memory, etc. are well within bounds though. :thinking:

1 Like

CPU/memory/bandwidth usage could be higher per node, because there will probably be fewer nodes overall, and I assume the network upload rate will be similar to Phase 1.

1 Like

Nope.

1 Like

You can have 100 nodes and at the same time be racing towards 100000 :slight_smile:

Edit: OK, I went on Discord and saw 60k stats. Impressive. I suspect it’s highly overprovisioned.

2 Likes

@Josh, what are the actual numbers and source for them?

1 Like

ntracking currently shows 100k. I don’t remember @riddim’s site, do you? It would be good to compare.

Jim noted 60k a while back.

1 Like

Anyway, unless uploads increase something like 16-fold, I don’t think the larger max capacity changes anything any time soon.

Personally, HD space was never a limiting factor for me; the router thingies were. Now I can run just as many nodes as before, at least for a while.

If and when the troubles start, I think there will need to be a lot more data in the network, and that data being used too: downloading as well, not just uploading. At that point there are going to be many of us in the same mess, and the network will crash and burn. Which is a good result to have, as a test.

3 Likes

https://network-size.autonomi.space/

A bit above 100k - still rising

3 Likes

At this point this does not seem to be the case. I’m not 100% sure, but it seems that the CPU and memory usage per node is no greater than with the previous testnet.

I think he means that as the nodes become fuller they will do more work sending records to clients and other nodes. Up to 2 GB worth of records there would be no difference, but with 32 GB nodes there is clearly the potential for more to be served to clients and replicated to other nodes as they join, or indeed to absorb records when other nodes leave.

1 Like

RAM usage seems down on the last network, though your point about RAM required when the nodes are fuller is valid.

Frankly, it’s too early to say with any certainty.

2 Likes

Oh, they are larger. The max record count went from 4,096 to 131,072, so yes, they are very much larger.
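For scale, taking those numbers at face value, the per-node record ceiling grew 32-fold:

$ echo $(( 131072 / 4096 ))   # 32x more records per node than before
32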

As always, they only use as much disk as they need. As more data is stored, the nodes will grow in the amount of disk space they require. The 32 GB is the expected maximum size, but nodes could go over this depending on the situation.

At this time it seems there are over 100K nodes, not much fewer than the maximum we saw before.

1 Like