Will DAVE be a "NICE" offspring of HAL?
My own experience with the last version, running 3 nodes on MS Win11 2H2024 edition, was about 4-5 attos every 24 hours. On the new release I am only running 2 nodes, as this notebook is memory constrained (8 GB of RAM, of which Windows hogs 50%, plus a 10th-gen i5). I have only had the 2 nodes running for under 24 hours, but I observe the new release to be about 25% hungrier per antnode in CPU share, although memory consumption actually looks to be down a bit, maybe 10% per antnode (after they settle). I will share info in a couple of days to better answer your question.
Here you go, time/day is bottom right… looks to be about 2 attos for one antnode; the recently added antnode has not earned anything yet…
The equation to calculate the cost, based on the metrics, has been written to chain.
Both client and node can now query the smart contract, which uses the on-chain equation
to calculate the cost for them to utilise or verify.
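To make that concrete, here is a toy Python sketch of why putting the pricing equation on chain matters: client and node evaluate the same function with the same metrics, so they independently arrive at the same quote and can verify each other. The formula, coefficients and metric names below are entirely made up; the real equation lives in the deployed contract.

```python
# Toy illustration only: the real cost equation is defined in the
# on-chain smart contract, not here. Names and numbers are invented.

def quoted_cost(records_stored: int, max_records: int) -> int:
    """Stand-in for the contract's cost function: price rises as the node fills."""
    utilisation = records_stored / max_records
    base = 100  # hypothetical base price in attos
    return int(base * (1 + 10 * utilisation ** 2))

# Client and node query the same on-chain function with the same metrics,
# so both compute the same quote without trusting each other.
client_quote = quoted_cost(records_stored=512, max_records=2048)
node_quote = quoted_cost(records_stored=512, max_records=2048)
assert client_quote == node_quote
```

The point is simply that because the equation is public and shared, neither side has to trust the other's arithmetic.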
DAVE has been around before HAL and was its mentor
Was 1968 but titled 2001.
And the name "HAL" was a play on "IBM" by doing I-1=H, B-1=A and M-1=L?
HAL 9000 in his, its, final plea:
HAL: I know I've made some very poor decisions recently...
HAL: ...but I can give you my complete assurance...
HAL: ...that my work will be back to normal.
I've been running 24 nodes for nearly 3 days now and have not received a single data payment. I'm having trouble figuring out if this is just the new normal, or if I should give up and get rid of the nodes. During the leaderboard program I was averaging about one per day per node. I was expecting to have over 50 by now, not zero.
You should be getting something. I have 15 nodes running and have had 7 x 1 atto payments in the last 24 hours. Sounds like there is something wrong.
You are probably on the old network; one of my contacts had the same problem with 70 nodes, and a reset, deleting the folders and restarting, fixed it. What are you using for the nodes?
Check out the Dev Forum
One thing that doesn't seem to get talked about much here is network speed. What kind of download speeds are users going to get from their data stored on the Autonomi network when nodes are running on (possibly thousands of) home servers with not-so-fast upload speeds? CPU and RAM don't seem like a huge limitation compared to the broadband speeds of the nodes serving data, all of which will be very variable. Won't megacorp-inc node servers dominate the network eventually, since they will have the best speeds, and therefore make the network more centralised?
Sorry to ping you Qi, I'm really just wanting to identify the quote.
Still not a blockchain project?
There's a continual insidious slide to use blockchain as a solution to remaining issues in order to launch a network, instead of solving those issues.
This further damages the project wrt the fundamentals IMO by taking us further from the possibility of doing without blockchain. I hadn't foreseen this particular issue, but it is very much what I was concerned about when the September plan was revealed. We'll probably see more of this, but not necessarily or only in the network itself. Developers who come with blockchain skills and mindset will further dominate the Autonomi ecosystem and make it harder to retrace.
If it's ok for the network to depend on blockchain, we can hardly suggest apps should not, even when it conflicts with the fundamentals - which any dependence on blockchain does (by increasing the barriers to universal access).
Back in September my concern was that dependence on blockchain and centralised oracles, along with an increasing dominance of blockchain-style businesses, apps and developers, would take us further from universal access and towards the anathema which crypto and blockchain have become for many.
Leaving fundamental problems with the non-blockchain approach unsolved is like burning bridges back to the original goals.
When starting these I did delete my nodes that I had from the leaderboard days. For these nodes I even chose a different (larger) partition for their data dir. I used `evm-arbitrum-sepolia` at the end of the `antctl add` command (and verified that the rewards address is the same one I used during the leaderboard). I noticed that `antup ls` showed version 0.3.0 for autonomi and safenode, and 0.11.4 for safenode-manager, so I've run `antup update` and have restarted my nodes. I'd previously updated them with `antctl update`.
$ antup ls
+------------------+---------+-------------------------------------+
| Name | Version | Path |
+------------------+---------+-------------------------------------+
| autonomi | 0.3.1 | /home/sbosshardt/.local/bin/ant |
+------------------+---------+-------------------------------------+
| safenode | 0.3.1 | /home/sbosshardt/.local/bin/antnode |
+------------------+---------+-------------------------------------+
| safenode-manager | 0.11.5 | /home/sbosshardt/.local/bin/antctl |
+------------------+---------+-------------------------------------+
$ antctl upgrade
=================================
   Upgrade Antnode Services
=================================
Retrieving latest version of antnode...
Latest version is 0.3.1
Using cached antnode version 0.3.1...
Download completed: /home/sbosshardt/.local/share/autonomi/node/downloads/antnode
Refreshing the node registry...
✓ All nodes are at the latest version
$ ps -ef | grep safe
sbossha+ 5906 2922 1 Dec22 ? 00:16:14 /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/data/antnode1/antnode --rpc 127.0.0.1:43523 --root-dir /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/data/antnode1 --log-output-dest /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/logs/antnode1 --upnp --port 58424 --metrics-server-port 42973 --rewards-address 0xe5aC7E2bdC62312D7C4989bb8a2A79dAA22653F9 evm-arbitrum-sepolia
sbossha+ 6438 2922 1 Dec22 ? 00:16:04 /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/data/antnode2/antnode --rpc 127.0.0.1:39655 --root-dir /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/data/antnode2 --log-output-dest /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/logs/antnode2 --upnp --port 52512 --metrics-server-port 39383 --rewards-address 0xe5aC7E2bdC62312D7C4989bb8a2A79dAA22653F9 evm-arbitrum-sepolia
[...]
sbossha+ 8365 2922 1 Dec22 ? 00:15:06 /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/data/antnode24/antnode --rpc 127.0.0.1:37979 --root-dir /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/data/antnode24 --log-output-dest /mnt/ssd2t/overflow/home/sbosshardt/.local/share/safe/logs/antnode24 --upnp --port 45749 --metrics-server-port 34699 --rewards-address 0xe5aC7E2bdC62312D7C4989bb8a2A79dAA22653F9 evm-arbitrum-sepolia
My network link doesn't stay saturated. About 3-5 spikes per minute is typical. (Note: "Total Received" and "Total Sent" would be higher, but I rebooted yesterday after applying system updates.)
When I run `antctl status` from time to time, it typically shows most nodes having 2-3 dozen connections. A random 2-5 of the nodes have hundreds of connections (sometimes over 1000).
$ antctl status
=========================
   Antnode Services
=========================
Refreshing the node registry...
Service Name Peer ID Status Connected Peers
antnode1 12D3KooWSVEkL8BBp9ekwHw23CCQ9udum31xCQ8nTRED68bzGVJN RUNNING 732
antnode2 12D3KooWF2P8DLqCjNj7Er3i7QbavJr9opjjFsBnYvyogeaK1yp4 RUNNING 19
antnode3 12D3KooWMxkKnGwchwHKf6wnBozh5uUyB519aFYgZEyRpGbVyx9K RUNNING 14
antnode4 12D3KooWMCyJzZuK1cCpVUe2HEkueovSAMNqCs5cj7xMRzEgQKH5 RUNNING 33
[...]
The issue is not so much the speed of downloading a large file, but the lag time for getting the first chunks. You see, downloads will be in parallel. Just as browsers download multiple assets (e.g. images) in parallel, a file downloader will be requesting multiple chunks at once, and they will come down in parallel. In theory a TB file could be requested in a way that would choke even a 5GB download link, served from thousands of nodes with 1 MB/s upload speeds.
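As a toy illustration of that parallelism (a simulated chunk fetch with artificial latency; this is not the real Autonomi client API): each individual "node" is slow, but because many requests are in flight at once, the slow per-chunk latency doesn't serialise the download.

```python
import asyncio
import random

# Toy model of parallel chunk retrieval. fetch_chunk is a stand-in for
# requesting one chunk from one (slow) home node on the network.

async def fetch_chunk(index: int) -> bytes:
    # Simulate a slow home node: latency dominates, but awaiting it
    # does not block the other in-flight requests.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return bytes([index % 256]) * 4  # placeholder 4-byte chunk payload

async def download(num_chunks: int, max_in_flight: int = 64) -> bytes:
    sem = asyncio.Semaphore(max_in_flight)  # cap concurrent requests

    async def bounded(i: int) -> bytes:
        async with sem:
            return await fetch_chunk(i)

    # Chunks complete out of order, but gather returns them in index
    # order, so reassembly is trivial.
    chunks = await asyncio.gather(*(bounded(i) for i in range(num_chunks)))
    return b"".join(chunks)

if __name__ == "__main__":
    data = asyncio.run(download(100))
    print(len(data))  # prints 400: 100 chunks x 4 bytes each
```

With, say, 64 requests in flight, total time is roughly (chunks / 64) x per-chunk latency rather than chunks x latency, which is why slow individual uploaders can still add up to a fast aggregate download.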
My immediate thought, and the one that follows from it, is that this increases resistance to native, since the off-network dependencies keep increasing.
I don't see any mistake you made. I would reset safenode-manager and antctl, each one separately, then manually delete any folders that are left, restart the computer and try again…
I worry about the ability to retrace a bit too. I also think that, as long as we manage to stick to our guns (I'll be here to hold us accountable too), we just gain the added cross-platform functionality and, I guess, optionality.
I'll always endorse and support native Autonomi and its native token first and foremost, but I do see the added benefit of cross-platform support.
Ethereum is quite mature in many ways and we can't pretend there is zero value there. I think we have such laser focus on how much better the native token will be than the current state of the art that we choose to ignore it rather than engage, or at least wish we could.
I see the other point too though. Adapt or lose out on opportunity and risk becoming completely irrelevant.
Do you realize no project out there has come up with a true competitor to blockchain that scales indefinitely and is as fast/efficient but also 100% reliable? The DAG solution Autonomi worked on, in its current form, has too many flaws and design issues. It does not perform as well as systems that are specifically designed for coin transfer. I do not think it will ever come to fruition, honestly, and am actually happy the team went with an existing and proven solution.
This network is for DATA, not for payments (for now and for the foreseeable future, unless the team spends years of dev and does cutting-edge research).
You are happy because you are not committed to the fundamentals.
If we and the project are committed to the fundamentals, then the priority has to be to get there and not head off in a different direction that creates ever more barriers to that.
@Nigel the issue for me isn't whether or not ETH etc. has value, but whether on balance it gets us closer to, or moves us further away from, the fundamentals. My whole point since September is that the move creates a large barrier to those, and that this will increase over time for a large number of reasons related to that change.
#MeToo
Stand up for Native Rights - resist blockchain Imperialism
Thanks for the reply. Wouldn't this require thousands of copies of a file? And does Autonomi ensure there are enough copies of a file to give an optimal download speed? How can it know in advance what kind of download speeds a node is going to give 2 weeks or a year from now? For this to work, thousands of copies would have to be created for every data chunk to account for low upload speeds from unpredictable nodes. How many different copies are made of each complete file on the network? Is it a fixed number? Is it big enough?
Only your personal stuff.
Much of an LLM (AI) is common, the vast majority of it.
That could be d/l'd to a PC if someone uses it enough, or run off Autonomi if on another machine where they don't want to d/l the common stuff.
The personal stuff though has to remain private which requires a separate set of files that will be used in conjunction with the LLM common files. That too can be kept on the PC (and copy on Autonomi) or just on Autonomi.
That is my understanding of how the LLM (AI) data will be kept/used.
Also remember that an app can have its own method of storing data chunks. So rather than re-uploading the complete file on each change, there could be a layout with one or more chunks for each type of data that may not change much, and only the changed chunks are ever uploaded to Autonomi, not whole changed files. Sort of just a bunch of random chunks where only the chunks that changed are written anew to Autonomi.
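A rough sketch of that idea in Python, with a tiny made-up chunk size and no real Autonomi calls, just to show how an app could work out which chunks actually need re-uploading after an edit:

```python
import hashlib

# Toy sketch of app-level "only upload changed chunks". Chunk size is
# tiny for illustration; a real app would use much larger chunks.
CHUNK_SIZE = 4

def chunk_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and hash each one."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def changed_chunks(old: bytes, new: bytes) -> list[int]:
    """Indices of chunks that differ (or are new) and would need uploading."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

old = b"aaaabbbbccccdddd"
new = b"aaaaBBBBccccdddd"  # only the second chunk changed
print(changed_chunks(old, new))  # prints [1]
```

So after editing the file, the app would upload only chunk 1 rather than the whole file. (Fixed-size chunking breaks down if bytes are inserted mid-file and everything shifts; content-defined chunking handles that, but the principle is the same.)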