Please don’t - the information is interesting to see. I also run my own version, and I can only see around 2 million nodes… Maybe if you run all datacenter nodes, they see more of the network because they haven’t been blacklisted? There is something weird happening with home-hosted nodes vs VPS nodes, and with these random earnings as well. It would really help transparency if the algorithm used for these rewards were made available so we can scrutinize it; who knows, there might be a bug we can help identify.
Sorry, too late… but the data really is completely useless. I tried 3 different approaches for getting a network estimate too, and they all deliver very different results; nothing fits together.
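For what it’s worth, here is the kind of estimate I mean. This is a minimal sketch in Rust, assuming a Kademlia-style DHT where you can sample the k closest peers to a random address; the sample numbers are invented, and this is not necessarily what any of the tools discussed here actually do:

```rust
// A back-of-the-envelope population estimator for a Kademlia-style DHT.
// Each sample is (k, fraction of the normalised keyspace spanned by the
// k closest peers to one random target address). If k peers cover a
// fraction d of the keyspace, the population is roughly k / d; we average
// that over several random lookups. All numbers here are invented.
fn estimate_population(samples: &[(usize, f64)]) -> f64 {
    samples.iter().map(|&(k, d)| k as f64 / d).sum::<f64>() / samples.len() as f64
}

fn main() {
    // Three hypothetical lookups, k = 20 closest peers each.
    let samples = [(20, 9.5e-6), (20, 1.1e-5), (20, 8.0e-6)];
    println!("estimated network size: {:.0} nodes", estimate_population(&samples));
}
```

Small differences in how that keyspace fraction is measured swing the estimate a lot, which may be part of why the different approaches disagree so badly.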
Sorry, but you can’t upload ANYTHING reliably recently.
I managed a couple of trivial images last night.
Today I fail consistently with a simple 22k text file.
As useful as a chocolate teapot.
Marketing ran a long way in front of engineering, and now we have this utter farce of boasting about 24m nodes while the big node runners hoover up the emissions.
I am as supportive as anyone of the overall project, but right now there are major problems, and they seem to be getting ignored by @Bux and @JimCollinson; all we get is BaghdadBob trumpeting about 24m nodes and pathetic claims of some stupid number of petabytes of storage.
NONE of which is usable.
I fully appreciate that @chriso, @Josh and the rest of the devs are working hard to sort the uploads problem, and I thank them most sincerely for it and wish them immediate success.
BUT it is now time to admit that the “launch”, no matter how “soft”, has been a truly abysmal decision without the structured testing I and others have been calling for for a long while now.
Tomorrow’s update had better contain some actual substance, not marketing pish to further insult our intelligence.
```
worker@noderunner01:~$ ant file upload ./get-docker.sh
Logging to directory: "/home/worker/.local/share/autonomi/client/logs/log_2025-03-05_17-08-48"
🔗 Connected to the Network
Uploading data to network...
Encrypting file: "./get-docker.sh"..
Successfully encrypted file: "./get-docker.sh"
Paying for 3 chunks..
Error:
0: Failed to upload dir and archive
1: Failed to upload file
2: Error occurred during payment.
3: Cost error: MarketPriceError(ContractError(TransportError(ErrorResp(ErrorPayload { code: -32000, message: "execution reverted", data: None }))))
4: Market price error: ContractError(TransportError(ErrorResp(ErrorPayload { code: -32000, message: "execution reverted", data: None })))
5: server returned an error response: error code -32000: execution reverted

Location:
ant-cli/src/commands/file.rs:84

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

worker@noderunner01:~$ ll get-docker.sh
-rw-rw-r-- 1 worker worker 22592 Feb 28 18:13 get-docker.sh
```
This is from a Hetzner VPS - you can’t blame it on iffy home networking.
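In the meantime, one crude workaround is to just retry in a loop. Here is a rough sketch in Rust (not an official tool, just a wrapper around the CLI; it assumes `ant` is on PATH, and the attempt count and delays are arbitrary):

```rust
use std::process::Command;
use std::{thread, time::Duration};

/// Retry `ant file upload` with a growing delay, since failures like the
/// payment error above seem to come and go.
fn upload_with_retries(path: &str, attempts: u32) -> bool {
    for attempt in 1..=attempts {
        // Shell out to the ant CLI and check its exit status.
        let status = Command::new("ant")
            .args(["file", "upload", path])
            .status()
            .expect("failed to run ant");
        if status.success() {
            return true;
        }
        eprintln!("attempt {attempt}/{attempts} failed, waiting before retry...");
        thread::sleep(Duration::from_secs(10 * attempt as u64));
    }
    false
}

fn main() {
    if !upload_with_retries("./get-docker.sh", 5) {
        eprintln!("all upload attempts failed");
    }
}
```

It papers over the intermittent failures, but obviously it’s no substitute for a fix.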
It’s one of the reasons I have not looked harder at your work, or at @Traktion’s and @happybeing’s.
Until we can get reliable uploads/downloads, there is zero point.
I hate sounding like a broken record or the “I told you so” guy, but all this should have been stress-tested to death back in Oct-Dec last year, before ANY talk of launch, soft or otherwise.
I think it was 2-3 weeks ago. I tried some Ubuntu ISOs last week and got nowhere with those big files, but I uploaded an archive with an IMIM blog, including video and photos, after 3 or 4 attempts and a fair bit of patience.
I can give it another go this evening.
One thing, though - the latest version does tend to take down my wifi. The previous version did not. So live.2 seems worse for me. In fact, the above was with live.1, which dates it…
This is what I am saying, though: I am having almost no issues (I have had none all day so far). Yes, they are small files; I am just now trying a 34 MB file for the first time. Just maybe the issue is with the CLI.
It would help if the roadmap on the main site were updated; at the moment it falls off a cliff at the end of Feb 2025. It would be really helpful to see what’s coming next, even if it’s a copy-paste of the last 12 months… the lack of information doesn’t look good.
This is true. I certainly don’t want to interrupt those working at the codeface.
So I am not calling for any statement NOW.
But I AM looking for something other than moonbeams from @JimCollinson and/or @Bux tomorrow.
And I DON’T want to hear about millions of nodes or fanciful Petabytes.
Well, this is certainly true, but it’s a small team with LOTS on its plate, both the devs and the “management”. What do you want them to stop working on while the roadmap gets updated?
It would be nice, but IMHO it’s not critical right now. It will be soon, though.
It has been very hardware- and connection-dependent in the past, so that’s probably still true.
However, my uploads using the native Rust libraries also show issues. I had a photo upload fail with the usual error message. It can also take my wifi down…
Edit: btw, it’s a good wifi signal, at 200-300 Mb/s, through a fritz.box wifi-only router (it doesn’t do NAT there).
The issue is, it just really couldn’t have been tested without the push to launch that we had. We only became aware of all these issues at the larger network sizes that we can’t really afford to simulate on our own.
Believe me, as a team we are painfully aware that we need data on the network, but we can’t get it there yet. It’s our big focus now. We all have a vested interest in getting things fixed, and that’s what we’re going to push for.