Latest Release March 20, 2025

I have 13k or so of these lines on one machine:
./7344626b3465/logs/antnode.log:63234:[2025-03-22T00:28:21.759605Z DEBUG ant_node::node 899] Not enough candidates(1/50) to be checked against neighbours.

I’ve never seen that log message before so it grabbed my interest.

Need to find the grep flag that prints out some number of lines before and after the search term; that may tell me more.

1 Like

Try -C 3 for 3 lines of context
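For reference, a quick self-contained sketch of the context flags (the stand-in log file below is made up; on the real machine you'd point grep at the antnode.log files):

```shell
# Write a small stand-in log so the command is runnable anywhere.
printf 'line A\nline B\nNot enough candidates(1/50) to be checked against neighbours.\nline C\nline D\n' > /tmp/sample.log

# -C 3 prints 3 lines of context before and after each match;
# -B and -A set the before/after counts separately.
grep -C 3 'Not enough candidates' /tmp/sample.log
```

On the real logs, add -n for line numbers and -r to recurse through the log directories.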

2 Likes

NeoGPT beats ChatGPT any day :slight_smile:

4 Likes

Thanks. It generally appears in this sequence:


./514b54796350/logs/antnode.log-44920-[2025-03-21T18:55:02.574460Z DEBUG ant_node::node 389] Periodic storage challenge triggered
./514b54796350/logs/antnode.log:44921:[2025-03-21T18:55:02.596903Z DEBUG ant_node::node 899] Not enough candidates(1/50) to be checked against neighbours.
./514b54796350/logs/antnode.log-44922-[2025-03-21T18:55:12.570327Z INFO ant_node::log_markers 69] IntervalReplicationTriggered
./514b54796350/logs/antnode.log-44923-[2025-03-21T18:55:12.623435Z INFO ant_networking::driver 903] Set responsible range to Distance(148281681934825089055205350856814884719217590547536846655812482897382023)(Some(236))

Not enough candidates (1/50) sounds serious. Every time I see it, the n/50 count is very low, from 0 to 4.
Something's not right …
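If it helps, a hedged one-liner to tally how low those counts get across the logs (the stand-in file below is made up; the comment shows the real-path version, with the glob assumed from the grep output earlier):

```shell
# Stand-in data; on the real machine replace the file with ./*/logs/antnode.log*
printf 'x candidates(1/50) y\nz candidates(0/50) w\nq candidates(1/50) r\n' > /tmp/candidates.log

# -o prints only the matched text, -E enables extended regex; then tally each n.
grep -oE 'candidates\([0-9]+/50\)' /tmp/candidates.log | sort | uniq -c | sort -rn
```

That gives a frequency table per n value, so it's easy to see whether it ever gets anywhere near 50.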

How's it looking? :slight_smile:

3 Likes

I am not sure. I was checking the forum to see if folk had seen improvements or not, and it's not clear whether they did or we just moved on to other issues. Do you feel the upgrade improved the rust cli etc.? (In tests it did, but I have a theory about our tests.)

4 Likes

I am in the same boat as you, couldn’t really tell from the forum posts. Been concentrating on ui while waiting so was kinda “hoping” that it would surface one way or the other from other users without having to divert my attention elsewhere.

I’ll focus on playing with the cli for a bit for feedback. Won’t have time today though.

4 Likes

Latest version of the cli: no improvement from my side, tested before the weekend.

2 Likes

Quotes and payments were going fine, but whenever I tried putting a chunk, the chunks were not uploading successfully.

2 Likes

It’s an improvement for sure, but there are still issues.

I had hit a wall with the last version where I could barely download any files at all. I couldn’t load imim blog, it was a struggle to download begblag, etc.

The new version unblocks that mostly, although it is very slow at times and I still get timeouts on bigger files (e.g. 100MB video is a struggle).

It seems like, as the network versions have evolved, the old cli/library version got progressively worse. The latest redresses that somewhat, but it is still a struggle.

So, I’d say it is worth it if the old version is as bad for everyone else as it was for me - it became unusable.

6 Likes

Just curious, has the very recent discussion been about:

… or something before that? And is there some estimation, when we could see the actual fix in the live network?

1 Like

That's where I was/am at.

The node count keeps climbing despite emissions dropping and a bit of stick being deployed for versions.

I see from IPs that there are people seemingly running huge numbers of nodes at a data center I used, which certainly does not have machines with the resources to run what is being run on them.

I fear that some or many of our problems are complete muppets running nodes.

5 Likes

Client 0.3.10: I can't upload a small txt file. Is anyone else managing any uploads with this version?

I am running on an ARM Oracle VPS with no nodes running on the system.

Logging to directory: "/home/ubuntu/.local/share/autonomi/client/logs/log_2025-03-24_12-31-06"
🔗 Connected to the Network
Uploading data to network...
Encrypting file: "test.txt"..
Successfully encrypted file: "test.txt"
Paying for 4 chunks..
Uploading file: test.txt (4 chunks)..
Upload of 1 files completed in 1923.752375424s
Error uploading file test.txt: PutError(Network(FailedToVerifyChunkProof(NetworkAddress::RecordKey("56ec98f248956186c2e614d2e59aad7e416d6238b7c4bea0b36ec975ab5278dd" - 422962dcec572973a0716a9d383597caab418918bc3723cc84d913e805cbffab))))
Error: 
   0: Failed to upload file
   1: Failed to upload file
   2: A network error occurred.
   3: Failed to verify the ChunkProof with the provided quorum

Location:
   ant-cli/src/commands/file.rs:77

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
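FWIW, when a put dies on chunk-proof verification like this, retrying sometimes gets it through. A minimal POSIX retry wrapper, as a sketch: the `ant file upload test.txt` invocation in the comment is assumed from the log above, and the demo call uses `false` so the block is runnable anywhere:

```shell
# Retry a command up to N times; returns 0 on first success, 1 after N failures.
retry() {
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" && return 0
    echo "attempt $i of $n failed" >&2
    i=$((i + 1))
  done
  return 1
}

# Real use would look something like (assumed CLI invocation):
#   RUST_BACKTRACE=1 retry 3 ant file upload test.txt
# Demo with a command that always fails:
retry 2 false || echo "gave up"
```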

3 Likes

I think that's how it's going to be ad infinitum. We just need good measures to kick underperforming nodes out of the network. We cannot expect people to be any better.

And because I'm not any better myself, I cannot resist sharing the flashback I just got:

5 Likes

I was clearly very wrong :laughing:.

What do we do: try a network that has no rewards, ditch the bad behaviour, and rely on goodwill?

Might just work: no need for tokens at all, just people sharing spare resources for an unstoppable network. :partying_face:

5 Likes

To be fair, that was still a testnet, mostly run by people who could be expected to “behave”. (But I was interested in making some sort of prognosis of what's to come with an autonomous network and anonymous people.)

The Whale Protector has become the Killer Whale :frowning:



2 Likes

One of the clown whales is over at this service provider.

I am seeing single IP addresses showing up 375 times in a quote run for 400 random files. With my own IP showing up 64 times for 4k nodes, that means they are running around 23k nodes on a single machine, and there are plenty of them.

Looking at the menu of servers available at that provider, the largest RAM available on a machine is 128GB, so that should top out at around 3k nodes. I'm wondering whether they are using swap, or just how the hell it's possible to squeeze that many nodes into a box.
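The back-of-envelope arithmetic, using the figures above (64 appearances for my 4k nodes, 375 for theirs, and the ~3k-node cap on 128GB) works out like this:

```shell
# My 4000 nodes produced 64 quote appearances, so scale 375 appearances up:
echo "estimated nodes behind that IP: $((375 * 4000 / 64))"

# 128GB topping out at ~3k nodes implies a per-node footprint of roughly:
echo "approx MB per node: $((128 * 1024 / 3000))"
```

That comes out to roughly 23,437 nodes for the big IP, and around 43MB of RAM per node under the 3k-node assumption.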

There are others using this service provider as well.

@dirvine just tagging you in so you don't miss this.

6 Likes

Yeah, I used their dedicated servers. I ran 1,000 nodes on their 128GB machines at most, and I thought I was pushing the limits :rofl: I'll say it once more.. muppets running nodes.

That was prior to the new improvements, but simple math says that is not reasonable.

4 Likes

Let’s upgrade the stick :joy:

6 Likes