I’ve had 40 nodes running on a cloud provider for a week and not earned any nanos.
Tbh I just turned them on and left them… poor things!
Feels like a FW issue, but ‘ufw status’ shows the ports I specified during setup are set to Allow. This is what I ran:
sudo ufw default deny incoming && sudo ufw default allow outgoing && sudo ufw allow ssh && sudo ufw allow 45000:45039/udp
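For reference, this is roughly how I’m double-checking on the VM itself (the safenode process name and the 45000-45039 range are just from my setup, adjust if yours differ):

# show the active ufw rules, including the default policies
sudo ufw status verbose

# confirm something is actually listening on the UDP ports I opened
sudo ss -ulpn | grep -E ':450[0-3][0-9]'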
Not desperate for the nanos but would like to know what’s wrong with the config.
There are plenty of errors and warnings in the attached log, but I can’t tell what the issue is beyond what looks like a problem during the handshake between nodes.
Safenode1 log below, plus the Vdash graphic coz it’s pretty.
My VPS provider has their own firewall set outside the VPS and I had to allow ports I needed through their management web interface. Maybe you have something similar.
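If you want a rough check from outside the VPS too, nmap can probe the UDP range from another machine. UDP scans are fuzzy (open|filtered usually just means no response), so treat the result as a hint rather than proof:

# UDP probe of the node port range from a different host (replace the placeholder IP)
sudo nmap -sU -p 45000-45039 <your-vps-ip>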
I think I remember from a testnet a while ago that a node would work for PUTs and GETs but not for giving quotes or earning if the FW rules were blocking connectivity to other nodes. It was something specific to quotes or getting payments. I might be remembering it wrong though.
But I think there might be something wrong with the setup, because I see you are specifying specific ports to use and yet the node isn’t running as a relay. My reason for thinking this is that I don’t see any ‘relay server event’ entries in your logs.
You can see if the node is going to try to run as a relay by looking for this line in the very first log file:
[2024-07-09T09:48:05.774988Z INFO sn_networking::relay_manager] Setting relay client mode to false
If it says ‘false’ like that then it’s going to try to be a relay.
If it says ‘true’ it’s just going to be a client and use other nodes which are relays.
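To save scrolling through the files by hand, you can grep for it. The path below is only my guess at the default location for hand-started nodes, so point it at wherever your safenode1 logs actually live:

# print the first relay-mode line from each node's earliest log file
grep -m1 "Setting relay client mode" ~/.local/share/safe/node/*/logs/safenode.log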
What was the command you used to start the node? I’m wondering if you specified ports with --node-port or just used --home-network.
If these have been going for a week then they need to be taken down (reset) now anyhow.
Then update through safeup
Then start new nodes.
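Roughly the sequence I mean, sketched for hand-started nodes. The data path is an assumption and the final start line is a placeholder, so substitute your actual data directory and the flags you originally used (--node-port, --home-network or whatever they were):

# stop the old node processes (assumes they were started by hand, not as services)
pkill safenode

# clear out the old node data so the new nodes start fresh (path is an assumption)
rm -rf ~/.local/share/safe/node

# update the safenode binary via safeup
safeup node

# start new nodes with the same port flags as before
safenode <the flags from your original start command>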
But it definitely looks like a firewall is stopping clients from contacting your nodes. Either there’s a mistake in your FW rules, or the VPS supplier has their own firewall you have to change too, like @peca said.
A node that is putting and getting chunks for the network but is unable to earn should warn the user about it. That should be reported as a bug and fixed.
The node doesn’t know if it should have earned in a certain time frame.
The node doesn’t know whether there are one billion nodes and only 1 TB per day being uploaded, so most nodes receive nothing for weeks, or whether there is 1 EB being uploaded per day.
Basically there is no metric to use to see if the node should have earned something in the last day or week.
The plan, or dev suggestion, is that eventually there will be a health function for each node, so that it can test whether connections from unsolicited sources work or not.
Frankly, the health function should have been in place before starting this beta. So many people have been left in the dark for weeks. The ARM issue has been resolved now, but what about people unknowingly running behind a misconfigured NAT?
It’s not like I haven’t tried to get this raised in priority, or at least to get a way for helpers to “ping” the quoting function of a node during the beta, to help those who ask whether they have their nodes set up right. The NAT detection isn’t suitable for this either.
To be fair, the devs are doing their best and their priority is to make the network bug-free, so such a tool is not high enough on that list.
I am busy with a couple of other things myself and so have not gotten into the Rust to modify the client to do this “ping” function.
We are all busy now, and for the core developers especially it’s no fun to work under the pressure of the beta schedule and large numbers of new people flowing in. This is why the basics should have been in place before opening the floodgates.
New people will be a lot less understanding and patient than those of us who have passed the 18-year training here. And first impressions matter.
Anyways, I hope the health check feature will arrive soon.
I put an ‘allow all’ on the cloud VM’s inbound FW (the host still has the UDP port-range FW). That looked to have solved my issue, but still no nanos despite what I understand to be data being stored (from vdash).
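For anyone following along, the temporary ‘allow all’ on the VM is just this (debugging only, I’ll tighten it back to the specific UDP range once things are confirmed working):

# switch the default inbound policy to allow and reload the rules
sudo ufw default allow incoming
sudo ufw reload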