Haha, okay, Arbitrum One seems to be very fast =D Then it’s just about fees. Cool!
Contrary opinion perhaps.
The narrative has always been that we will be cheap because of using all these abundant spare resources.
But it is a premium service: secure, private, permanent, censorship-free, and your data is not sold to anyone who wants it, etc.
A premium service can therefore reasonably charge a premium price.
Maybe nobody wants to upload 83 pics of what their cat did today but there is a bunch of valuable data that needs a safe place.
My observations of the crypto world are that the less actual usability the better for the price.
A stupid meme doing nothing? Perfect, 1000% up! A store of value that no one uses for payment (Bitcoin)? Even better, 1 million % up! A real product with measurable use (Storj)? Oh sorry, same price 7 years in a row.
A premium product with the maximum possible decentralization with a high cost of fees stopping adoption but with a story of how there will be a native token in the future - only the sky is the limit!
It hasn’t exactly been going according to plan on the execution front at home… lol… partially because it didn’t help that I was trying to do all of this at odd hours of the night, as I felt the work had to get started; the clock is ticking… TGE is arriving!
I finally managed to roughly load-balance all the hosts across the 2 different circuit breakers. In the course of all this, I had to do a non-graceful power-off of some machines. Turns out, the superblock on an LVM path on an OS boot volume got hosed… and I couldn’t get it to repair. I had to move the data disks from that host to another node, and that triggered another rebuild/rebalancing process. I managed to safely move the configuration files for the different LXCs and VMs off that host to another host in the cluster, and removed the host itself from the cluster configuration files until a future rebuild of that host.
Where does this leave me? While the cluster was completely down for 2 hours or so, I managed to upgrade all of the 2x1Gbps links to 4x1Gbps, as I realized I didn’t have enough spare SFP+ NICs for 2x10Gbps, so I opted to double the minimum bandwidth across the cluster. I do have all the fiber optic cabling at home and sufficient spare ports on the switches, but will order more SFP+ NICs for the individual hosts a bit later.
Then I realized just now that I forgot to retrofit 1 remaining host from 2x1Gbps to 4x1Gbps, darn! Somehow I didn’t even notice the server tucked away on the lower shelves of the rack in the middle of the night. So now I’ve got to get that retrofitted, as the rebalancing throughput hasn’t yet increased over 200MB/s. It’s processing at 200MB/s with an ETA of 5 days (cutting it super close to TGE). Until then, I am in a waiting stage before resuming all antnodes.
The good news is that this location should be able to hit closer to 7,500 to 10,000+ nodes after all is said and done.
I have an HP ProLiant N54L that I used as a NAS a few years ago but have since unplugged. I could add some RAM to upgrade it.
So it could be upgraded to something like this:
- AMD Turion™ II Neo N54L, 2 cores @ 2.2 GHz
- 2*8GB RAM
- 2*2TB HDD
I’m afraid my CPU will be the bottleneck. How many nodes do you think it could handle?
My observations are that if you take the CPU score from this site and divide it by 20, you get roughly the number of nodes at around 40-50% CPU load.
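As a worked example of that rule of thumb (the divide-by-20 rule is just this thread’s observation, not an official formula, and the score below is a hypothetical value, not a measured benchmark):

```shell
#!/bin/sh
# Rule of thumb from above: nodes ≈ CPU benchmark score / 20,
# landing at roughly 40-50% CPU load at that node count.
# The score is a hypothetical example, not a real measurement.
score=1020
nodes=$((score / 20))
echo "estimated nodes at ~40-50% CPU load: $nodes"
```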
Yes, personal documents like wills, for instance, are a good use of Autonomi: one leaves the keys to the uploaded, paid-for-once will to the attorney settling the estate when one ‘leaves their meat sack’, so nobody can alter one’s self-dated and signed will (which is a real problem out there, and has been forever).
That’s a premium service.
Lots of examples out there.
Well, ‘leakage’ as you describe it is really a choice, i.e. one might need fiat to pay bills, buy a gift for someone, or buy a coffee with a Bitpay card, so ‘one leaks’. That utility actually makes the value of the ANT ERC-20 token rise, triggering additional interest in buying/converting ANT, because it’s useful.
I think ANT will be used for other forms of exchange of both goods and services within the ANT community of node operators and uploaders, which will build up liquidity to support such internal exchanges, with a rise in ANT token value. E.g. wallet-to-wallet ANT transfers at a face-to-face meet at a cafe or restaurant via mobile, for whatever one wants to exchange that ANT for (covering the bill, acquiring an item or foodstuffs, whatever).
The other aspect is that I see no banks in the middle of any of the above exchanges.
I’m sure the CPU will be the showstopper. I have an N54L (that I use for FreeNAS) and I know the CPU is not exactly great. About 50 nodes, though? I’m sure you’ll get to 25.
The spikes will occur most often when a node has to do something, like send or receive a chunk. I doubt relay nodes will make a difference over port forwarding.
Also, we have (or will have) more direct communications with relay nodes, in that the relay node is only used to establish the connection and then is not involved. That process may involve some network comms, but nothing noticeable in CPU usage. Once the connection is established, it’s no different from comms with routing-table nodes, and so unnoticeable.
I’d say it will be extremely difficult to watch CPU usage and see a difference. It’ll be overshadowed by the very random chunk store/retrieval CPU spikes.
Always use port forwarding and UPnP if you can; home-network mode should be considered a last resort, as the network needs port-forwarding nodes to act as relays.
I asked ChatGPT to make me a script for stopping and removing nodes at an interval, because you need to add an argument for every service.
You are supposed to be able to repeat the service-name option multiple times in the one command line
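For what it’s worth, a minimal dry-run sketch of such a loop. The antnode1…antnodeN service names, and the antctl stop/remove subcommands with a --service-name flag, are assumptions based on this thread, so check `antctl --help` on your install. With RUN=echo the script only prints the commands; set RUN= (empty) to execute them.

```shell
#!/bin/sh
# Dry-run batch stop/remove loop for node services.
# RUN=echo prints each command instead of running it; set RUN= to execute.
# Service names and antctl flags are assumptions -- verify with `antctl --help`.
RUN=echo
FIRST=1
LAST=3
PAUSE=0   # seconds to wait between services; raise this for a real run

i=$FIRST
while [ "$i" -le "$LAST" ]; do
    $RUN antctl stop --service-name "antnode$i"
    $RUN antctl remove --service-name "antnode$i"
    sleep "$PAUSE"
    i=$((i + 1))
done
```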
I’m trying to run some nodes.
Computer setup:
- AMD Turion™ II Neo N54L, 2 cores @ 2.2 GHz
- 1*4GB RAM
- 2*4TB HDD
- Internet: fiber 1Gb/s up and down
- Ubuntu server with Prometheus/Grafana for monitoring
Nodes:
- 40 nodes started.
- After the initializing period (the ideal for my config is a 130-second interval), I get these figures:
- CPU Busy: ~35/40%
- Sys load: ~40/50%
- RAM used: ~70% of 4GB
- SWAP used: 65%
- HDD used: 24%
- Nodes have between ~2 and ~150 peers, with an average around ~30 ==> Is that normal?? I used to have more like ~250 peers per node on my previous VPS. Maybe because of my ISP router’s limitations?
Another question: since the binaries went from safexxx to antxxx, I can’t get vdash working. It doesn’t show peer statistics with the number of attos, records and so on. Is that normal?
Did you ramp up gently, with for example 5 nodes to start, then another 5 if things are running fine, etc., to get to 40? That is a good idea.
Did you update vdash to the latest version with cargo install vdash?
Yes indeed, that is the way I’m doing it: 10 nodes at a time, with a 130-second interval between nodes.
And for vdash, yes I have the latest version. I’ll retry to install it from scratch.
The number of attos will never be correct. 3 nodes are paid for each chunk uploaded, but only one of them will show it in the logs (& /metrics), and Launchpad suffers the same fate.
Records only show up when a quote is requested from the node. This can take a day at the current size of the network, but one or more of the 40 should show records within hours.
Make sure you are using the 3.5 version of the node. Antup will get it, and antctl will download it automatically, so no worries there. It’s only if you run a custom script that you will need to use antup (antup node) to get the latest version.
Peers shown in vdash are more like current connections, not routing-table peers, unless that was changed at some stage.
Is anyone able to run a large number of nodes (6K+) on the same machine without containers?
I’m trying and failing to do so on Ubuntu Server 24.04: after ~5400 nodes, I can’t start any new ones (I tried with the anm script from NTracking, and with Formicaio).
For information, these are the logs I get from Formicaio for nodes that can’t start:
Killed process for node 674270314b51: ExitStatus(unix_wait_status(25856))
Process with PID 1553573 exited (node id: 674270314b51) with code: Some(101)
Failed to spawn new node: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }
Failed to create node instance 6363/10000 as part of a batch: error running server function: Failed to create a new node: Resource temporarily unavailable (os error 11)
I have plenty of free RAM, disk, and CPU (though CPU usage relative to the number of nodes is quite high compared to my laptops), and my router is more than fine.
Also, I’m far from hitting the max number of processes, and I have some headroom on the number of threads (I noticed each antnode process uses 259 threads on my server, which is far more than the 19 I see on my laptops; is there any way to reduce that?).
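That “Resource temporarily unavailable” (EAGAIN, os error 11) on spawn with free RAM/disk/CPU usually points at a task or thread limit rather than the node software; at 259 threads per process, ~5400 nodes is already around 1.4 million threads. A Linux-specific sketch of the limits worth checking (on systemd hosts, the user-slice TasksMax is often the one that bites first):

```shell
#!/bin/sh
# EAGAIN (os error 11) on process/thread creation usually means a task
# limit was hit, not memory. The usual suspects:
nproc_limit=$(ulimit -u)                        # per-user process/thread limit
threads_max=$(cat /proc/sys/kernel/threads-max) # system-wide thread cap
pid_max=$(cat /proc/sys/kernel/pid_max)         # PID space; each thread consumes a PID
map_count=$(cat /proc/sys/vm/max_map_count)     # mmap areas; thread stacks count here too
echo "ulimit -u=$nproc_limit threads-max=$threads_max pid_max=$pid_max max_map_count=$map_count"
# On systemd hosts, the per-user slice cap often bites before the kernel ones:
systemctl show "user-$(id -u).slice" -p TasksMax 2>/dev/null
```

If TasksMax or threads-max is below what ~5400 nodes need, raising it (e.g. via sysctl and a systemd drop-in) is worth trying before moving to containers.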
I’ll switch to Proxmox with LXC containers if I can’t get it working, but I’d rather avoid having to maintain multiple VMs.
@d3su ANM and NTracking were never designed with multiple thousands of nodes in mind.
Over 1k nodes, NTracking will start messing up its display: each time a node has its metrics port queried there is a 1-second sleep, so over 1k nodes it will start to overrun the time frame, which at present is 20 minutes.
ANM was recently upgraded to theoretically handle up to 9999 nodes. I’d be interested to hear how it went if you tried to run more than 1k nodes?
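The overrun mentioned above is simple arithmetic: with a 1-second sleep per node’s metrics query, a full sweep of N nodes takes at least N seconds, while a 20-minute frame is only 1,200 seconds:

```shell
#!/bin/sh
# Each metrics query sleeps 1 second, so a sweep of N nodes takes >= N seconds.
NODES=5400
FRAME=$((20 * 60))          # display frame: 20 minutes in seconds
SWEEP=$NODES                # seconds, ignoring the query time itself
OVERRUN=$((SWEEP - FRAME))
echo "sweep ${SWEEP}s vs frame ${FRAME}s -> overrun by ${OVERRUN}s"
```

So at ~5400 nodes a sweep takes well over an hour, which explains the display falling behind.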
Yes, I’ve seen the commit; I tried it after that.
The nodes started OK up to ~5000 nodes, and after that the new ones were not able to start.
I was not able to report anything with NTracking even at the start (telegraf seemed to be running OK; InfluxDB and Grafana are on another host, and the dashboard is displaying all the stats from my other laptops running nodes through antctl just fine).
But otherwise, the biggest problem with the script for a large number of nodes is the minimum delay of 1 minute (I tried with 0.5, and it didn’t work).
That’s why I tried the Python bindings, but they are not working for starting nodes at the moment.
So I switched to Formicaio, which is great for starting a large number of nodes, but I’m also limited to ~5000 nodes.