Exciting times indeed! Hopefully most are experiencing the expected rewards; however, I am not in that group.
I have run nodes during the development phase and Post “launch”:
Node Launchpad Versions:
2024.10.26; 0.4.1
2025.01.05; 0.4.5
2025.01.08; 0.5.2
2025.01.12; 0.5.3
2025.02.11; 0.5.4
I have yet to receive any Atto or ANT rewards that I can identify. I did follow the instructions and have a wallet in MM set up and linked. I have completed basic maintenance and troubleshooting, such as updating and resetting nodes. So I am either missing something, or something might be a tad off? Especially if the rewards are intended to reach even the “small” players…
I have run anywhere from 1 to 15 nodes at a time on Windows.
Yes, rewards ARE a problem for the small/home user right now. It’s being worked on,
and hopefully there will be some policy changes to help by moving the incentive from running huge numbers of nodes to actually putting some chunks/data onto those nodes.
Sorry, I am new here. I am facing the same issue. I am running 300 nodes on 3 SBCs. The reward balances are all ‘-’ or 0.
The last reward was yesterday morning, when I received 0.38051 ANT, but nothing at all since then. I’ve reset all nodes and restarted, and nothing seems right…
The random node rewards have dropped off due to the high number of nodes. I think 0.38 ANT is the minimum. I’ve been running 50 nodes for a week and got up to 1.9 ANT in a 12-hour window. Most recently I haven’t received anything for a few days. Once uploads catch up to the network size, we should be getting a small trickle of ANT much more consistently.
Please note that in recent days the number of nodes has increased exponentially (https://network-size.autonomi.space/), which has affected the level of rewards for users with a small number of nodes. The problem has already been identified, as Southside wrote, so we hope that this rapid growth of the network will be chalked up as another learning experience and that a balanced redistribution of rewards will be restored. Thank you for joining us.
Also, I would like to point out that when I run 300 nodes on my home network, latency grows really high, like 700 ms+.
Is that because my router is not strong enough?
My latency from my router to the ISP’s gateway is 0.5 ms with an RTT standard deviation of 0.2 ms. It hasn’t been impacted, but it’s quite a beefy router. Currently running around 8,000 nodes.
Not sure what endpoint you are checking for the latency, but 700ms is quite high.
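If you want a simple way to see where the latency is coming from, here is a minimal sketch, assuming a Linux box with `ip` and `ping` available; it measures round-trip time to the default gateway, which is usually the first hop to check when the whole home network feels laggy:

```shell
# Find the default gateway from the routing table (assumes the `ip` tool
# from iproute2; on other systems the command differs).
gw=$(ip route show default 2>/dev/null | awk '/^default/ {print $3; exit}')

if [ -n "$gw" ]; then
    # -c 5: five probes, -q: summary only, -W 1: one-second per-probe timeout
    ping -c 5 -q -W 1 "$gw"
else
    echo "no default route found"
fi
```

A sub-millisecond RTT to the gateway with high RTT beyond it points at the ISP or the remote endpoint; a high RTT even to the gateway points at the router (or an overloaded NAT table, as discussed below in the thread).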
I am already on fibre, but running 300 nodes my home network feels laggy and sometimes unresponsive 😩 maybe it’s because of the eero router, and I should replace it.
Still no rewards so far, so I’ve reduced my running nodes to 150 ☹️
Yes, early on we were seeing a lot of that due to routers not having a large enough NAT table to handle the number of connections.
A lot of work has gone into the node to improve the connections, but still, the higher the number of nodes, the more NAT entries there will be.
So yes, your symptoms were a good indicator of too many nodes for the router to handle, with the NAT table usually being the problem.
Reduce the node count and see how it is going a day or so later. At the moment, external nodes will still be trying to contact the old nodes, so there will still be a little load on the router as it starts rejecting them, or unnecessarily passing them on if you have port forwarding.
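If your router runs Linux (or you are doing NAT on a Linux box), you can check the connection-tracking table directly. A minimal sketch, assuming the standard netfilter sysctl paths; consumer routers like eero don't expose a shell, so treat this as illustrative:

```shell
# Current number of tracked connections vs. the table's hard limit.
# When count approaches max, new connections get dropped and the whole
# network feels laggy or unresponsive.
count_file=/proc/sys/net/netfilter/nf_conntrack_count
max_file=/proc/sys/net/netfilter/nf_conntrack_max

if [ -r "$count_file" ] && [ -r "$max_file" ]; then
    count=$(cat "$count_file")
    max=$(cat "$max_file")
    echo "conntrack entries: $count / $max"
else
    echo "conntrack counters not exposed on this kernel"
fi
```

Each node holds many peer connections, so 300 nodes can easily fill a table sized for a handful of ordinary devices.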
Is this planned or just your idea? I would think supply/demand would play a role here – if people are setting up far more nodes than the demand to store data requires, people would be paid less. Seems like a pretty major issue otherwise, no?
It’s my idea, though I’m hardly alone in it; some like it, others don’t and think we should leave it alone to see how things play out.
If we can quickly get a better idea of how much of OUR funds are going into a small number of pockets, then I’m sure the community will come down on one side or the other.
Right now there seems to be little incentive to run a small number of nodes, and that HAS to change, otherwise the original vision of many millions of folks running nodes on their otherwise under-utilised home PCs is under serious threat.
I’m expecting some kind of statement in the next day or two from @Bux to clarify. It is always possible that the whale(s) are “friendly” and nothing is being said officially in case we are accused of a pre-mine. We’ll see.
It might be a bug in the metrics that is under-reporting the number of records stored.
If anyone feels up to it, manually look in record_store, count the number of entries for each node, and compare with what Launchpad, Formicaio, anm, or antctl is telling you.
I’m working on a script that will do that, but the various tools store them at slightly different paths:
For Formicaio it’s $FORMICAIO_BASE_DIR/formicaio_data/node_data/$Node_Id/record_store
For anm it is, I think, /var/antctl/services/antnodeXXX/record_store/
Launchpad and antctl use $HOME/.local/share/autonomi/node/12D3KooWblahblah/record_store/
Please correct me if I’m wrong.
I’ll soon find out anyway
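A quick sketch for checking which of those layouts exists on your machine; the two fixed paths are the ones from the post above, and FORMICAIO_BASE_DIR is only set if you have exported it yourself:

```shell
# Probe each known node-data location and report the ones that exist.
matches=0
for p in \
    "${FORMICAIO_BASE_DIR:-/nonexistent}/formicaio_data/node_data" \
    "/var/antctl/services" \
    "$HOME/.local/share/autonomi/node"
do
    if [ -d "$p" ]; then
        echo "found node data under: $p"
        matches=$((matches + 1))
    fi
done

if [ "$matches" -eq 0 ]; then
    echo "no known node-data layout found"
fi
```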
Instead of faffing with paths, I just got DeepSeek to write a script that searches the entire filesystem for any dir named record_store and counts the files in it. It’s inefficient and slow, but it confirms what I thought: the metrics are only counting from the second chunk in any record_store dir.
#!/bin/bash
# Temporary files for accumulating totals (the while loop runs in a
# subshell because of the pipe, so plain variables would not survive it)
tmp_nodes=$(mktemp)
tmp_records=$(mktemp)
echo 0 > "$tmp_nodes"
echo 0 > "$tmp_records"

# Find all record_store directories starting from root
find / -type d -name "record_store" -print0 2>/dev/null | while IFS= read -r -d '' dir; do
    # Update total nodes count
    current_nodes=$(<"$tmp_nodes")
    echo $((current_nodes + 1)) > "$tmp_nodes"

    # Count files in this record_store
    count=$(find "$dir" -maxdepth 1 -type f -printf '.' 2>/dev/null | wc -c)

    # Update records total if files exist
    if [ "$count" -gt 0 ]; then
        current_records=$(<"$tmp_records")
        echo $((current_records + count)) > "$tmp_records"
        echo "${dir}: ${count}"
    fi
done

# Read final totals
total_nodes=$(<"$tmp_nodes")
total_records=$(<"$tmp_records")

# Cleanup temp files
rm "$tmp_nodes" "$tmp_records"

# Display results
echo "Total number of nodes: $total_nodes"
echo "Total number of records stored: $total_records"
Use at your own risk etc etc - Any problems, blame these damned ChiComms responsible for DeepSeek
I got output that ended like this
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/616d37676470/record_store: 1
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/363152526178/record_store: 1
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/364b4a657956/record_store: 2
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/375142436545/record_store: 1
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/316f4f48417a/record_store: 1
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/496657546e67/record_store: 1
/home/willie/projects/maidsafe/formicaio/formicaio_data/node_data/6b70556d5171/record_store: 1
Total number of nodes: 238
Total number of records stored: 60
60 records stored is a lot more believable than the 3 that Formicaio is pulling from the metrics crate.
Indeed, but it’s late and I am crap with the find command.
If you KNOW where all your record_store dirs are, then of course they can be put into the line beginning with find /.
It’s different for each of the four ways we have of wrangling nodes.
Anyhow, it seems there is quite a discrepancy between what that reports and what the metrics are telling us, and we are storing more records than we think. It isn’t doing much for earnings, but the network is not quite as moribund as it seemed. Which is nice.
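For anyone who does know where their record_store dirs live, here is a targeted sketch that avoids scanning the whole filesystem; BASE is an assumption (the Launchpad/antctl default mentioned earlier in the thread), so swap it for your own node-data path:

```shell
# Count records per record_store under a known base directory,
# then print the same totals as the full-filesystem script.
BASE="$HOME/.local/share/autonomi/node"   # assumption: adjust for your tool

total=0
nodes=0
for store in "$BASE"/*/record_store; do
    [ -d "$store" ] || continue      # skip if the glob matched nothing
    nodes=$((nodes + 1))
    n=$(find "$store" -maxdepth 1 -type f | wc -l)
    echo "$store: $n"
    total=$((total + n))
done

echo "Total number of nodes: $nodes"
echo "Total number of records stored: $total"
```

Because the loop stays in the current shell (no pipe into `while`), the totals can live in plain variables with no temp files needed.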
While it dilutes any holdings we have, it seems it’s coming from the emissions pool, which is a pool of 20% of tokens to be released over 12 years. It’s not coming from the token holders’ pool, so it’s not really ‘our funds’, but pre-allocated funds.
This pool is a lot smaller than it was previously planned to be, and the distribution schedule has been tweaked so that less is released in year 1 than under previous plans, so the situation is a lot better than it would have been before the recent Whitepaper V2 changes were made.
It’s also true that they’re being used as the team had always said they would be. As someone who has regularly tried to argue against the usefulness of emissions, it seemed like I was pushing against the tide: a lot of the community supports the emissions plans and feels they will be beneficial, even though nobody could articulate why they’re needed or beneficial in sound economic reasoning.
Yes, rewards would need to be much more granular to benefit small node runners, but more payments = more tx fees.
Unfortunately it seems that instead of achieving the vision of making use of spare resources, emissions give an incentive to make wasteful use of otherwise useful resources (e.g. hosted servers) by providing loads of nodes that aren’t needed to fulfil demand, at the expense of token holders.
I hope there are clarifications from the team, but to be fair it seems they’re just carrying out what was planned, with the impact of emissions being exacerbated by the inability to stress/fill nodes due to upload issues (which the team is working hard on), UX issues (which community devs are working on), and ETH fee costs (which highlight the need for Native ASAP).