Yes, that $750 is just the mining fees, I believe. Node runners will surely want their cut on top of that, or they will be running nodes for free. The native token should pass that whole $750 to node runners and/or to uploaders. That’s a big economic benefit.
That’s going to be a major issue. Didn’t those password manager folks have petabytes of data to upload?
I am running around 5,500 nodes at the moment, and it’s doing about 200 MB/s of disk I/O reads in steady state (the host isn’t maxed out on CPU). The NAT session table is around 500K entries.
I am wondering if I am getting bottlenecked by the LAG NICs on some of the hosts, as some are at 2x1Gbps, others at 4x1Gbps, and some at 1x10Gbps.
I will probably spend this weekend upgrading all the hosts to 2x10Gbps… and see whether the baseline read MB/s falls or rises.
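For anyone wanting to sanity-check the NIC math, here’s a rough back-of-the-envelope conversion (assuming ideal LAG hashing and ignoring protocol overhead) showing why ~200 MB/s of reads sits right at the ceiling of a 2x1Gbps bond:

```python
# Theoretical LAG bond throughput in MB/s (1 Gbps = 125 MB/s).
# Assumes perfect traffic distribution across links and no protocol
# overhead, so real-world numbers will land somewhat lower.
def lag_capacity_mb_s(links: int, gbps_per_link: float) -> float:
    return links * gbps_per_link * 125

for links, gbps in [(2, 1), (4, 1), (1, 10), (2, 10)]:
    print(f"{links}x{gbps}Gbps -> {lag_capacity_mb_s(links, gbps):.0f} MB/s")

# 2x1Gbps  -> 250 MB/s   (barely above ~200 MB/s of steady-state reads)
# 4x1Gbps  -> 500 MB/s
# 1x10Gbps -> 1250 MB/s
# 2x10Gbps -> 2500 MB/s
```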
In addition, the load isn’t balanced equally across the 2 circuit breakers I am using, so I need to power down machines and re-adjust. It’s currently at 75% on one circuit and 25% on the other. If I can get it closer to 50/50, then I can spin up more nodes.
Feel free to estimate the blockchain TX fees for uploading 1 PB of data to the network (and maybe do a rough estimate of how long it’ll take at e.g. 100 txs per second - might become an interesting calculation)
Okay - was too curious and did the time part. 1 PB - 3 nodes get paid per chunk - 100 txs per second -
1_000_000_000 MB / 4 MB per chunk * 3 txs per chunk / 100 txs/s / 3600 s/h / 24 h ≈ 86 days to get 1 PB into the network
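For anyone who wants to poke at the numbers, here’s that arithmetic as a small script - the 4 MB chunk size and 3 paid nodes per chunk are from the estimate above, and the per-tx fee at the end is purely a hypothetical placeholder, not a real gas quote:

```python
# Back-of-the-envelope: how long (and, under an assumed fee, how much)
# it takes to push 1 PB through the chain at 100 payment txs per second.
PB_IN_MB = 1_000_000_000   # 1 PB = 10^9 MB
CHUNK_MB = 4               # chunk size assumed above
TXS_PER_CHUNK = 3          # 3 nodes get paid per chunk
TPS = 100                  # assumed sustained txs per second

chunks = PB_IN_MB / CHUNK_MB          # 250,000,000 chunks
txs = chunks * TXS_PER_CHUNK          # 750,000,000 payment txs
days = txs / TPS / 3600 / 24
print(f"{txs:,.0f} txs ~= {days:.1f} days")   # ~86.8 days

# Hypothetical fee, purely illustrative (real L2 fees vary wildly):
FEE_PER_TX_USD = 0.01
print(f"~${txs * FEE_PER_TX_USD:,.0f} at ${FEE_PER_TX_USD:.2f}/tx")  # ~$7,500,000
```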
Maybe Arbitrum is faster… but I wouldn’t bet on it… I think the TON blockchain died a while back when it was at 150 txs per second…
The blockchain is the bottleneck (oh well, and ofc it would cost a fortune)
And that estimate isn’t considering the smart contract execution for price discovery/emissions… Not sure if that adds to the tx count or not…
OK, this is Linux, correct?
sounds like a decent plan!
Yes, Kubuntu. I removed the graphical environment from it and only use https://cockpit-project.org/ to manage it.
Haha - okay - Arbitrum One seems to be very fast =D then it’s just about fees - cool
Contrary opinion perhaps.
The narrative has always been that we will be cheap because of using all these abundant spare resources.
But it is a premium service: secure, private, permanent, censorship-free, your data is not sold to anyone who wants it, etc.
A premium service can therefore reasonably charge a premium price.
Maybe nobody wants to upload 83 pics of what their cat did today but there is a bunch of valuable data that needs a safe place.
The issue, at least for me, is that the majority, if not the entirety, of the price paid for uploads, whatever they are, should go into the network (node ops et al.) and not leak into other projects (blockchain or otherwise).
My observation of the crypto world is that the less actual usability, the better for the price.
A stupid meme doing nothing? Perfect, 1000% up! A store of value that no one uses for payment (Bitcoin)? Even better - 1 million % up! A real product with measurable use (Storj)? Oh sorry, same price 7 years in a row.
A premium product with the maximum possible decentralization, with high fees stopping adoption, but with a story of how there will be a native token in the future - the sky is the limit!
It hasn’t exactly been going according to plan on the execution front at home… lol… partially because it didn’t help that I was trying to do all of this in the odd hours of the night, as I felt the work had to get started - the clock is ticking, as TGE is arriving!
I finally managed to roughly load balance all the hosts across the 2 different circuit breakers. In the course of all this, I had to do a non-graceful power-off of some machines. Turns out the superblock on an LVM path on an OS boot volume got hosed… and I couldn’t get it to repair. I had to move the data disks from that host to another node, which triggered another rebuild/rebalancing process. I managed to safely move the configuration files for the different LXCs and VMs off that host to another host in the cluster, and to remove the host itself from the cluster configuration files until a future rebuild of that host.
Where does this leave me? While the cluster was completely down for 2 hours or so, I managed to upgrade all of the 2x1Gbps links to 4x1Gbps, as I realized I didn’t have enough spare SFP+ NICs for 2x10Gbps, so I opted to double the minimum bandwidth across the cluster. I do have all the fiber optic cable at home and sufficient spare ports on the switches, but will order more SFP+ NICs for the individual hosts at a later date.
Then I realized just now that I forgot to retrofit 1 remaining host from 2x1Gbps to 4x1Gbps, darn! Somehow I didn’t even notice the server tucked away on the lower shelves of the rack in the middle of the night. So now I’ve got to get that retrofitted, as the rebalancing throughput hasn’t yet increased above 200 MB/s. It’s processing at 200 MB/s with an ETA of 5 days (cutting it super close to TGE). Until then, I am in a waiting stage before resuming all antnodes.
The good news is that, after all is said and done, this location should be able to hit closer to 7,500 to 10,000+ nodes.
I have an HP ProLiant N54L that I used as a NAS a few years ago but have since unplugged. I could add some RAM to upgrade it.
So it could be upgraded to something like this:
- AMD Turion™ II Neo N54L, 2 cores @ 2.2 GHz
- 2×8 GB RAM
- 2×2 TB HDD
I’m afraid my CPU will be the bottleneck; how many nodes do you think it could handle?
My observations are that if you take the CPU score from this site and divide by 20, you get roughly the number of nodes at around 40-50% CPU load.
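As a quick illustration of that rule of thumb (the benchmark score below is just a placeholder - plug in your own CPU’s actual score from the site mentioned above):

```python
# nodes ~= cpu_score / 20, at roughly 40-50% CPU load per the
# observation above.
def estimate_nodes(cpu_score: float, divisor: float = 20.0) -> int:
    return int(cpu_score / divisor)

hypothetical_score = 1000  # placeholder, NOT a measured N54L benchmark
print(estimate_nodes(hypothetical_score))  # -> 50 nodes
```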
Yes, personal documents like wills, for instance, are a good use of Autonomi: one leaves the keys to the uploaded, paid-for-once will to the attorney settling the estate when one ‘leaves their meat sack’, so nobody can alter your self-dated and signed will (which is a real problem out there, and has been forever).
That’s a premium service.
Lots of examples out there.
Well, ‘leakage’ as you describe it is really a choice. I.e., one might need fiat to pay bills, buy a gift for someone, or buy a coffee with a BitPay card, so ‘one leaks’. That actually makes the ANT ERC-20 token rise in value because of that utility, triggering additional interest to buy/convert ANT, because it’s useful.
I think ANT will be used for other forms of exchange of both goods and services within the ANT community of node operators and uploaders,
which will build up liquidity to support such internal exchanges and raise the ANT token’s value. I.e., “here is an ANT wallet-to-wallet transfer at a face-to-face meet at a cafe or restaurant via mobile”, for whatever it is one wants to exchange that ANT for (covering the bill, acquiring an item, foodstuffs, whatever).
The other aspect is that I see no banks in the middle of any of the above exchanges.
All I’m trying to say is that I look forward to when Autonomi, and its native token, is fully self-supporting and self-contained, so that all of its economic activity is funneled into Autonomi, and none into blockchain projects like Ethereum.
I’m sure the CPU will be the showstopper. I have an N54L (that I use for FreeNAS) and I know the CPU is not exactly great. About 50 nodes though? I’m sure you’ll get to 25.