Performance testing upload/download using AntTP

In light of the recent deployment, I thought an update on this thread was due.

I’m seeing different characteristics with this new version vs the old one. Some of this may relate to the freshness of the change and the roll out not yet being complete (at about 95% at the time of writing).

  • On starting AntTP, the numbers are low on the first run. On the second run, they are much quicker. This suggests to me that routing is a little more fluid at the moment and it takes a wee while for the network to understand the best route.
  • From the second run on, the data throughput is about half of the previous best (for the standard, small-file test run) and the latency has much more variation from best to worst.
  • The best latency figures are repeatedly in the 200ms range, which is about twice as fast. Again, this is to completely download a ‘small’ file (100-300kb, IIRC) and isn’t just a ‘ping’, but it gives us an interesting perspective into performance. This shows there are some very fast nodes, relative to the old network.
  • Related to the above, the max of 17,380ms vs 2,690ms and the p95 of 5,510ms vs 1,320ms (this run vs the last test run above) appear to be where the time is lost - i.e. there are some very slow nodes dragging down the numbers.
  • K6 starts counting responses one by one, instead of in batches of 10, as the throughput starts to increase. I’ve not seen this before and I’m not sure what to make of it. It seems to coincide with more parallelism in the responses (i.e. less clustered/batched), with a steadier flow of data. I think this is a positive sign and shows less locking somewhere in the flow, but it’s hard to know.

It will be interesting to watch in the coming days to see if the initial runs increase in performance, along with the overall throughput. Given the lowest latencies have more than halved, this bodes well for smaller data performance.
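
For anyone wanting to track the same numbers between runs, k6 options along these lines could be added to the test script (just a sketch - the threshold values are arbitrary examples, not targets, and this block isn’t part of the run below):

// Sketch: optional options block for the test script. summaryTrendStats controls
// which stats k6 prints in the end-of-test summary; thresholds make k6 exit
// non-zero if a run regresses past the chosen limits.
export const options = {
  vus: 10,                 // same as -u 10 on the CLI
  iterations: 1000,        // same as -i 1000 on the CLI
  summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(95)', 'p(99)'],
  thresholds: {
    http_req_duration: ['p(95)<5000', 'max<10000'],  // example limits in ms
  },
};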

$ cat ~/dev/anttp/test/performance/src/localhost-autonomi-http.js; k6 run -u 10 -i 1000 ~/dev/anttp/test/performance/src/localhost-autonomi-http.js
import http from 'k6/http';

export default function () {
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_QdxdljdwBwR2QbAVr8scuw.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_dH5Ce6neTHIfEkAbmsr1BQ.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_pt48p45dQmR5PBW8np1l8Q.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_sWZ4OWGeQjWs6urcPwR6Yw.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_ZT6qplX5Yt8PMCUqxq1lFQ.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_SxkGLnSNsMtu0SDrsWW8Wg.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_bogEVpJvgx_gMHQoHMoSLg.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_LFEyRQMHmxRnZtJwMozW5w.jpeg', { timeout: '600s' });
}

         /\      Grafana   /‾‾/  
    /\  /  \     |\  __   /  /   
   /  \/    \    | |/ /  /   ‾‾\ 
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/ 

     execution: local
        script: /home/paul/dev/anttp/test/performance/src/localhost-autonomi-http.js
        output: -

     scenarios: (100.00%) 1 scenario, 10 max VUs, 10m30s max duration (incl. graceful stop):
              * default: 1000 iterations shared among 10 VUs (maxDuration: 10m0s, gracefulStop: 30s)


     data_received..................: 1.4 GB 2.3 MB/s
     data_sent......................: 567 kB 931 B/s
     dropped_iterations.............: 594    0.975878/s
     http_req_blocked...............: avg=7.5µs   min=1.32µs   med=4.64µs   max=705.85µs p(90)=9.83µs  p(95)=12.17µs
     http_req_connecting............: avg=966ns   min=0s       med=0s       max=441.69µs p(90)=0s      p(95)=0s     
     http_req_duration..............: avg=1.86s   min=216.58ms med=1.12s    max=17.38s   p(90)=3.58s   p(95)=5.51s  
       { expected_response:true }...: avg=1.86s   min=216.58ms med=1.12s    max=17.38s   p(90)=3.58s   p(95)=5.51s  
     http_req_failed................: 0.00%  0 out of 3248
     http_req_receiving.............: avg=1.15s   min=112.26ms med=531.59ms max=15.7s    p(90)=2.13s   p(95)=4.72s  
     http_req_sending...............: avg=23.23µs min=4.9µs    med=17.7µs   max=2.82ms   p(90)=37.11µs p(95)=45.97µs
     http_req_tls_handshaking.......: avg=0s      min=0s       med=0s       max=0s       p(90)=0s      p(95)=0s     
     http_req_waiting...............: avg=710.6ms min=72.04ms  med=386.57ms max=6.63s    p(90)=1.79s   p(95)=2.32s  
     http_reqs......................: 3248   5.336112/s
     iteration_duration.............: avg=14.91s  min=5.38s    med=12.61s   max=37.62s   p(90)=24.91s  p(95)=29.64s 
     iterations.....................: 406    0.667014/s
     vus............................: 5      min=5         max=10
     vus_max........................: 10     min=10        max=10


running (10m08.7s), 00/10 VUs, 406 complete and 0 interrupted iterations
default ✗ [==============>-----------------------] 10 VUs  10m08.7s/10m0s  0406/1000 shared iters
2 Likes

Very useful.

The recent changes were designed to improve uploads so I wonder how that has been affected. I’ve not tried myself yet but haven’t kept stats for comparison anyway.

3 Likes

I don’t have any upload tests yet and it may be tricky due to 1) cost 2) de-duplication. IIRC, @neo had a thread on upload performance though. Maybe that process could be followed again for this version? :thinking:

3 Likes

oh wow :exploding_head:

scratchpad performance seems to have increased

…i am seeing some slowies too - but not that often :open_mouth:

(screenshot: scratchpad write-to-read timings)

With sometimes just 2 seconds from write to the populated state being read (local, behind my router), this is super cool!

7 Likes

Nice!

Actually, scratchpads are a good upload metric we can track cheaply on main net. Maybe something could be fashioned through K6 to do the same with AntTP. I’ll take a look when I get time.
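
Something like this could be a starting point (purely a sketch - the /scratchpad route below is hypothetical, not a current AntTP endpoint as far as I know, and the payload is randomised per iteration so de-duplication doesn’t skew the timings):

import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // Random payload each time so de-duplication can't short-circuit the write.
  const payload = JSON.stringify({ ts: Date.now(), rnd: Math.random() });
  // Hypothetical route; the address would be passed in with `k6 run -e SCRATCHPAD_ADDRESS=...`
  const res = http.put(
    'http://localhost:18888/scratchpad/' + __ENV.SCRATCHPAD_ADDRESS,
    payload,
    { timeout: '600s' },
  );
  check(res, { 'update accepted': (r) => r.status === 200 });
}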

4 Likes

Fantastic!

3 Likes

Around 2 seconds to update a scratchpad sounds amazing.

To put it in perspective, how fast was it before this update?

Seems very promising!

2 Likes

Tbh I don’t know precisely - but I decided to start the WebRTC version of friends utilising an external handshake server due to roundtrip times (write start to successful read) of up to 2 minutes

The outlier now is 15s, which is still well within the WebRTC timeout (unless we hit 2 outliers within the same handshake attempt - but even then we get another chance at the next full minute). Right now I’m doing connection attempts every full minute; that will be changed to some negotiated offset from the full minute to distribute load, but in this initial phase the full minute makes testing simpler for me :slight_smile: So from what I’m seeing right now, Autonomi speed will no longer be a serious issue for friends, and the improvement is significant.
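
For what it’s worth, the ‘negotiated offset’ part could be as simple as hashing a shared pair identifier into a second-of-minute, so both sides retry at the same moment without everyone piling onto :00 - just a sketch of the idea (pairId, and how it’s derived, are up to the app):

// Sketch: derive a stable per-pair retry offset within each minute.
function retryOffsetSeconds(pairId) {
  let hash = 0;
  for (let i = 0; i < pairId.length; i++) {
    hash = (hash * 31 + pairId.charCodeAt(i)) >>> 0;  // simple string hash
  }
  return hash % 60;  // second-of-minute at which both peers attempt the handshake
}

// Milliseconds to wait until the next scheduled attempt for this pair.
function msUntilNextAttempt(pairId, now = Date.now()) {
  const offsetMs = retryOffsetSeconds(pairId) * 1000;
  const minuteStart = Math.floor(now / 60000) * 60000;
  const next = minuteStart + offsetMs;
  return next > now ? next - now : next + 60000 - now;
}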

6 Likes

I’ve been re-running the standard/small test a few times throughout the day and the network is definitely getting faster. We’re not far off the best runs I’ve had now:

$ cat ~/dev/anttp/test/performance/src/localhost-autonomi-http.js; k6 run -u 10 -i 1000 ~/dev/anttp/test/performance/src/localhost-autonomi-http.js
import http from 'k6/http';

export default function () {
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_QdxdljdwBwR2QbAVr8scuw.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_dH5Ce6neTHIfEkAbmsr1BQ.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_pt48p45dQmR5PBW8np1l8Q.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_sWZ4OWGeQjWs6urcPwR6Yw.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_ZT6qplX5Yt8PMCUqxq1lFQ.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_SxkGLnSNsMtu0SDrsWW8Wg.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_bogEVpJvgx_gMHQoHMoSLg.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_LFEyRQMHmxRnZtJwMozW5w.jpeg', { timeout: '600s' });
}

         /\      Grafana   /‾‾/  
    /\  /  \     |\  __   /  /   
   /  \/    \    | |/ /  /   ‾‾\ 
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/ 

     execution: local
        script: /home/paul/dev/anttp/test/performance/src/localhost-autonomi-http.js
        output: -

     scenarios: (100.00%) 1 scenario, 10 max VUs, 10m30s max duration (incl. graceful stop):
              * default: 1000 iterations shared among 10 VUs (maxDuration: 10m0s, gracefulStop: 30s)


     data_received..................: 2.7 GB 4.4 MB/s
     data_sent......................: 1.1 MB 1.8 kB/s
     dropped_iterations.............: 228    0.376005/s
     http_req_blocked...............: avg=6.43µs   min=1.24µs   med=4.76µs   max=813.8µs  p(90)=8.77µs   p(95)=10.53µs 
     http_req_connecting............: avg=394ns    min=0s       med=0s       max=512.51µs p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=978.97ms min=318.34ms med=659.5ms  max=11.58s   p(90)=1.62s    p(95)=2.52s   
       { expected_response:true }...: avg=978.97ms min=318.34ms med=659.5ms  max=11.58s   p(90)=1.62s    p(95)=2.52s   
     http_req_failed................: 0.00%  0 out of 6176
     http_req_receiving.............: avg=803.54ms min=213.77ms med=519.75ms max=11.2s    p(90)=1.35s    p(95)=2.22s   
     http_req_sending...............: avg=23.41µs  min=5.17µs   med=18.57µs  max=3.35ms   p(90)=34.26µs  p(95)=40.28µs 
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=175.4ms  min=69.59ms  med=108.84ms max=3.32s    p(90)=351.27ms p(95)=411.99ms
     http_reqs......................: 6176   10.185114/s
     iteration_duration.............: avg=7.83s    min=5.11s    med=6.94s    max=18.81s   p(90)=11.39s   p(95)=13.6s   
     iterations.....................: 772    1.273139/s
     vus............................: 1      min=1         max=10
     vus_max........................: 10     min=10        max=10


running (10m06.4s), 00/10 VUs, 772 complete and 0 interrupted iterations
default ✗ [============================>---------] 10 VUs  10m06.4s/10m0s  0772/1000 shared iters

The fastest latency has crept up a bit to around 300ms, but the average has come way down, and the outliers especially so. We’re back to sub 1-second average download times for those 100-300 KB files now.

I’ll keep monitoring this and will try with some bigger files and restarting AntTP to see how much impact this is having post-release too.

6 Likes

hmmmm - I think I celebrated too early … maybe the speed was mainly due to the large node runners still being in ramp-up? …?

I hardly manage to stay below 10 seconds again … it seems like when the network was younger (big runners still in the ramp-up phase, servers not yet burdened by insane amounts of nodes?) it was faster than it is now … I’ll keep watching this … but this is a bit of a shocker for me now …

6 Likes

Once they get there, load averages go through the roof, internet connections get saturated, and throw in some swap instead of RAM and it’s a killer :frowning:

Shame on them :angry:

2 Likes

I suspect this is due to overloaded hosts, running too many nodes for the resources they have.

A fresh network loads antnode into memory. As it is integrated into the routing table, it then responds quickly.

As the network ages, the nodes may get switched into virtual memory, especially as hosts squeeze more nodes onto the same boxes. This would introduce a delay.

I noticed yesterday that my dormant AntTP was much slower than the day before, getting around 200 or so iterations. After several more runs, it crept up each time to around 500-600 again, which was pretty close to the day before. It’s as if the nodes hosting the data had been ‘exercised’ and were ready to respond quickly again.

An alternative theory is that the data is cached on repeated attempts. I honestly don’t know what the state of caching is on the network right now. My basic understanding is that, at the moment, only churned data is cached, and only at nodes that previously held it. I don’t think there is any opportunistic caching by peers routing the request (which would be ideal for immutable data, at least).
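
A cheap way to probe that would be a k6 run that fetches the same immutable address twice within a single iteration and compares the two timings - just a sketch, reusing one of the addresses from the main test:

import http from 'k6/http';
import { Trend } from 'k6/metrics';

// One VU, one iteration: the first GET is genuinely cold, the second is an
// immediate repeat. A much faster repeat would hint at caching somewhere
// between AntTP and the nodes; similar timings would suggest there isn't much.
export const options = { vus: 1, iterations: 1 };

const firstFetch = new Trend('first_fetch_ms', true);
const repeatFetch = new Trend('repeat_fetch_ms', true);

const url = 'http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_QdxdljdwBwR2QbAVr8scuw.png';

export default function () {
  firstFetch.add(http.get(url, { timeout: '600s' }).timings.duration);
  repeatFetch.add(http.get(url, { timeout: '600s' }).timings.duration);
}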

Having said all that… I took the same dormant AntTP instance this morning, then fired the test at it, and got good results straight away (nearly 600 iterations):

$ cat ~/dev/anttp/test/performance/src/localhost-autonomi-http.js; k6 run -u 10 -i 1000 ~/dev/anttp/test/performance/src/localhost-autonomi-http.js
import http from 'k6/http';

export default function () {
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_QdxdljdwBwR2QbAVr8scuw.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_dH5Ce6neTHIfEkAbmsr1BQ.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_pt48p45dQmR5PBW8np1l8Q.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_sWZ4OWGeQjWs6urcPwR6Yw.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_ZT6qplX5Yt8PMCUqxq1lFQ.png', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_SxkGLnSNsMtu0SDrsWW8Wg.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_bogEVpJvgx_gMHQoHMoSLg.jpeg', { timeout: '600s' });
  http.get('http://localhost:18888/cec7a9eb2c644b9a5de58bbcdf2e893db9f0b2acd7fc563fc849e19d1f6bd872/1_LFEyRQMHmxRnZtJwMozW5w.jpeg', { timeout: '600s' });
}

         /\      Grafana   /‾‾/  
    /\  /  \     |\  __   /  /   
   /  \/    \    | |/ /  /   ‾‾\ 
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/ 

     execution: local
        script: /home/paul/dev/anttp/test/performance/src/localhost-autonomi-http.js
        output: -

     scenarios: (100.00%) 1 scenario, 10 max VUs, 10m30s max duration (incl. graceful stop):
              * default: 1000 iterations shared among 10 VUs (maxDuration: 10m0s, gracefulStop: 30s)


     data_received..................: 2.0 GB 3.3 MB/s
     data_sent......................: 807 kB 1.3 kB/s
     dropped_iterations.............: 422    0.6914/s
     http_req_blocked...............: avg=6.32µs   min=1.33µs   med=4.16µs   max=695.81µs p(90)=7.83µs   p(95)=10.3µs  
     http_req_connecting............: avg=545ns    min=0s       med=0s       max=296.34µs p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=1.31s    min=286.33ms med=862.69ms max=12.03s   p(90)=2.52s    p(95)=4s      
       { expected_response:true }...: avg=1.31s    min=286.33ms med=862.69ms max=12.03s   p(90)=2.52s    p(95)=4s      
     http_req_failed................: 0.00%  0 out of 4624
     http_req_receiving.............: avg=1.01s    min=179.69ms med=557.47ms max=11.81s   p(90)=2.11s    p(95)=3.52s   
     http_req_sending...............: avg=19.85µs  min=4.75µs   med=16.12µs  max=2.1ms    p(90)=30.06µs  p(95)=37.9µs  
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=301.81ms min=103.23ms med=220.06ms max=3.52s    p(90)=572.52ms p(95)=683.01ms
     http_reqs......................: 4624   7.57591/s
     iteration_duration.............: avg=10.54s   min=5.39s    med=9.1s     max=23.86s   p(90)=16.53s   p(95)=18.17s  
     iterations.....................: 578    0.946989/s
     vus............................: 5      min=5         max=10
     vus_max........................: 10     min=10        max=10


running (10m10.4s), 00/10 VUs, 578 complete and 0 interrupted iterations
default ✗ [====================>-----------------] 10 VUs  10m10.4s/10m0s  0578/1000 shared iters

So, I guess it is sometimes an issue. At least with this new network. FWIW, a second run was closer to 500 iterations, which was actually worse too! ha!

Obviously, this is all on my close-range wifi, at the end of a contended link, so caveats all apply. I don’t have a dedicated test rig or connection right now (although I could re-purpose a box when I get some time and wire it to my router - EDIT: I actually have a couple of powered-off node runners, as I was running out of bandwidth. One of those would do the job! ha!).

4 Likes

you’re testing with immutable chunks that get hosted on many more nodes than just the closest 5-7 or so (and they all hold the same state/correct result) …

i suspect your results are getting better due to excess copies (possibly in node caches) … I’m using mutable data that probably doesn’t replicate that much (or, if it does, often holds old states / needs longer to propagate the correct new state when nodes are slow) …

1 Like

Yes, for sure, mutable data is definitely a different beast. However, I’m not sure how much immutable data caching there actually is.

It would be great to hear some input from the team on what caching is in place. The last update I heard was from David, 3-4 months ago, IIRC. I would actually expect more advanced immutable caching to send repeated request performance through the roof, which I’m not seeing in the tests. However, I just don’t know that area of the code base yet.

So, I would have thought scratchpads would remain similar in speed (without extensive immutable caching), at least until they are heavily contended. That’s based on a limited understanding though, sadly.

EDIT: To add, the team did talk of removing slow nodes again in a recent update though. I’m sure we will see many performance improvements in the coming months.

4 Likes

those speed-tests up there are executed from a server doing absolutely nothing except for running that Jupyter notebook server … that’s ideal conditions … not executed from my local network …

I really hope so … I’m just not sure whether this endangers my plan to launch friends without a centralized handshake server before the judging phase … from what I see right now it does, and it means I just invested valuable time into something I can throw away again instantly …

1 Like

I hear you - a burden to bear as a top 3 IFer! ha!

4 Likes

Thanks for the report. We’re going to look into it.

6 Likes

Thank you very much

In colony I started noticing some different errors related to pointers and scratchpads. I hadn’t seen these before the last update. I’ve only had this batch of errors happen once, so maybe it was a one-off, but I had never seen them before. Sometimes I have to hit the pointer/scratchpad update several times for it to take, whereas before it always ‘just worked’. I’ll also note that updating these 2 mutable types takes on average twice as long as it did before the update. I notice these problems on main net; alpha and local networks are working fine for me:

2025-07-02T01:52:11.911989Z ERROR ThreadId(04) autonomi::client::data_types::pointer: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/autonomi-0.5.0/src/client/data_types/pointer.rs:288: Failed to update pointer at address aaa518a2cf8260f6bebc769c16b8147ea215adf569696497b7fc1f250823d89a49990187e83fc0f9ae1cf3d44afb7dce to the network: Put verification failed: Peers have conflicting entries for this record: {PeerId("12D3KooWSuTm6wx2myt5BVY6JGXyBZ7eVhYdbeLAub66iKBA5wTV"): Record { key: Key(b"\0\xd5\xa4s\xf9\x10\x9d\xfe\xbd\xdeo\xf6H\x0b_\xee\xf1bX\xa7X\xb4\"9\xe1\xf8;\xd3w\xf9\xc7\xb8"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 1, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 148, 53, 22, 204, 238, 54, 204, 203, 204, 142, 204, 162, 111, 204, 176, 57, 204, 224, 204, 205, 26, 204, 243, 91, 204, 167, 22, 204, 190, 98, 204, 245, 127, 204, 253, 204, 200, 204, 153, 77, 204, 234, 57, 204, 142, 2, 100, 204, 253, 48, 204, 199, 59, 204, 199, 204, 130, 204, 185, 204, 180, 33, 204, 164, 204, 198, 204, 185, 86, 40, 21, 204, 180, 31, 11, 204, 237, 13, 204, 201, 204, 187, 204, 181, 204, 146, 204, 172, 204, 142, 93, 204, 158, 57, 86, 25, 36, 204, 188, 72, 80, 96, 101, 26, 204, 225, 204, 239, 100, 40, 204, 176, 204, 166, 104, 19, 204, 199, 21, 204, 151, 204, 224, 204, 250, 204, 174, 79, 204, 234, 102, 83, 111, 46, 68, 204, 167, 204, 221, 8, 122, 204, 149, 204, 252], publisher: None, expires: None }, PeerId("12D3KooWLmfdTmmTtKDwcKey28sybA1Kh9SShwAjf84e8pnPsSZT"): Record { key: Key(b"\0\xd5\xa4s\xf9\x10\x9d\xfe\xbd\xdeo\xf6H\x0b_\xee\xf1bX\xa7X\xb4\"9\xe1\xf8;\xd3w\xf9\xc7\xb8"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 0, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 177, 204, 217, 204, 141, 204, 180, 204, 243, 36, 204, 136, 204, 150, 204, 251, 8, 204, 151, 204, 203, 204, 157, 204, 165, 64, 204, 159, 58, 1, 204, 198, 49, 122, 204, 253, 108, 58, 11, 126, 29, 83, 204, 164, 204, 189, 61, 65, 204, 150, 204, 166, 4, 93, 87, 204, 191, 106, 204, 193, 23, 102, 25, 204, 139, 204, 163, 33, 95, 204, 171, 15, 
204, 179, 204, 185, 204, 156, 114, 204, 134, 204, 214, 204, 221, 20, 77, 19, 204, 152, 56, 204, 156, 79, 204, 215, 7, 114, 126, 204, 171, 73, 88, 34, 204, 233, 38, 26, 14, 65, 204, 150, 204, 236, 118, 35, 53, 23, 22, 65, 204, 233, 120, 204, 233, 81, 90, 204, 232, 119, 9, 107, 31, 204, 131, 9], publisher: None, expires: None }}
2025-07-02T01:52:11.913580Z ERROR ThreadId(04) colonylib::pod: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/colonylib-0.4.3/src/pod.rs:1845: Error occurred: Pointer(PutError(Network { address: NetworkAddress::PointerAddress(aaa518a2cf8260f6bebc769c16b8147ea215adf569696497b7fc1f250823d89a49990187e83fc0f9ae1cf3d44afb7dce) - (acae9b20a0dc6d7da27aa34239d97ee231fd389b014eace164e0171e7fd28969), network_error: PutRecordVerification("Peers have conflicting entries for this record: {PeerId(\"12D3KooWSuTm6wx2myt5BVY6JGXyBZ7eVhYdbeLAub66iKBA5wTV\"): Record { key: Key(b\"\\0\\xd5\\xa4s\\xf9\\x10\\x9d\\xfe\\xbd\\xdeo\\xf6H\\x0b_\\xee\\xf1bX\\xa7X\\xb4\\\"9\\xe1\\xf8;\\xd3w\\xf9\\xc7\\xb8\"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 1, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 148, 53, 22, 204, 238, 54, 204, 203, 204, 142, 204, 162, 111, 204, 176, 57, 204, 224, 204, 205, 26, 204, 243, 91, 204, 167, 22, 204, 190, 98, 204, 245, 127, 204, 253, 204, 200, 204, 153, 77, 204, 234, 57, 204, 142, 2, 100, 204, 253, 48, 204, 199, 59, 204, 199, 204, 130, 204, 185, 204, 180, 33, 204, 164, 204, 198, 204, 185, 86, 40, 21, 204, 180, 31, 11, 204, 237, 13, 204, 201, 204, 187, 204, 181, 204, 146, 204, 172, 204, 142, 93, 204, 158, 57, 86, 25, 36, 204, 188, 72, 80, 96, 101, 26, 204, 225, 204, 239, 100, 40, 204, 176, 204, 166, 104, 19, 204, 199, 21, 204, 151, 204, 224, 204, 250, 204, 174, 79, 204, 234, 102, 83, 111, 46, 68, 204, 167, 204, 221, 8, 122, 204, 149, 204, 252], publisher: None, expires: None }, PeerId(\"12D3KooWLmfdTmmTtKDwcKey28sybA1Kh9SShwAjf84e8pnPsSZT\"): Record { key: Key(b\"\\0\\xd5\\xa4s\\xf9\\x10\\x9d\\xfe\\xbd\\xdeo\\xf6H\\x0b_\\xee\\xf1bX\\xa7X\\xb4\\\"9\\xe1\\xf8;\\xd3w\\xf9\\xc7\\xb8\"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 0, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 177, 204, 217, 204, 141, 204, 180, 204, 243, 36, 204, 136, 204, 150, 204, 251, 8, 204, 151, 204, 203, 204, 157, 204, 165, 64, 204, 159, 58, 1, 204, 198, 49, 122, 204, 253, 108, 58, 11, 126, 29, 83, 204, 164, 204, 
189, 61, 65, 204, 150, 204, 166, 4, 93, 87, 204, 191, 106, 204, 193, 23, 102, 25, 204, 139, 204, 163, 33, 95, 204, 171, 15, 204, 179, 204, 185, 204, 156, 114, 204, 134, 204, 214, 204, 221, 20, 77, 19, 204, 152, 56, 204, 156, 79, 204, 215, 7, 114, 126, 204, 171, 73, 88, 34, 204, 233, 38, 26, 14, 65, 204, 150, 204, 236, 118, 35, 53, 23, 22, 65, 204, 233, 120, 204, 233, 81, 90, 204, 232, 119, 9, 107, 31, 204, 131, 9], publisher: None, expires: None }}"), payment: None }))
2025-07-02T01:52:30.027683Z ERROR ThreadId(03) autonomi::client::data_types::scratchpad: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/autonomi-0.5.0/src/client/data_types/scratchpad.rs:104: Got multiple conflicting scratchpads for Key(b"\xea\xb9\x8eY\xdc\x8ew\x01\x0c\xf95\x96?\x07sb\x92A\x9b\x91\x03\xd7\xb1/K\xc2\xa6\xafR\xc5\xbdw") with the latest version, returning the first one
2025-07-02T01:53:28.834378Z ERROR ThreadId(04) autonomi::client::data_types::pointer: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/autonomi-0.5.0/src/client/data_types/pointer.rs:288: Failed to update pointer at address aaa518a2cf8260f6bebc769c16b8147ea215adf569696497b7fc1f250823d89a49990187e83fc0f9ae1cf3d44afb7dce to the network: Put verification failed: Peers have conflicting entries for this record: {PeerId("12D3KooWSuTm6wx2myt5BVY6JGXyBZ7eVhYdbeLAub66iKBA5wTV"): Record { key: Key(b"\0\xd5\xa4s\xf9\x10\x9d\xfe\xbd\xdeo\xf6H\x0b_\xee\xf1bX\xa7X\xb4\"9\xe1\xf8;\xd3w\xf9\xc7\xb8"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 2, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 150, 0, 98, 204, 200, 92, 20, 114, 22, 204, 166, 204, 183, 40, 204, 238, 49, 96, 41, 18, 59, 204, 169, 45, 70, 204, 150, 204, 246, 20, 106, 204, 190, 204, 211, 204, 222, 64, 114, 204, 136, 204, 154, 7, 204, 215, 36, 204, 162, 75, 75, 204, 199, 3, 204, 204, 204, 171, 115, 204, 252, 13, 109, 73, 25, 204, 154, 10, 21, 204, 149, 204, 252, 121, 204, 223, 96, 204, 207, 204, 242, 71, 204, 188, 8, 204, 208, 122, 99, 103, 28, 204, 138, 65, 204, 141, 59, 16, 13, 204, 222, 39, 204, 235, 72, 2, 204, 202, 51, 52, 204, 196, 204, 158, 204, 222, 103, 204, 162, 114, 204, 218, 204, 153, 102, 204, 158, 204, 235, 204, 252, 204, 152, 204, 148, 30, 204, 188, 204, 201], publisher: None, expires: None }, PeerId("12D3KooWLmfdTmmTtKDwcKey28sybA1Kh9SShwAjf84e8pnPsSZT"): Record { key: Key(b"\0\xd5\xa4s\xf9\x10\x9d\xfe\xbd\xdeo\xf6H\x0b_\xee\xf1bX\xa7X\xb4\"9\xe1\xf8;\xd3w\xf9\xc7\xb8"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 0, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 177, 204, 217, 204, 141, 204, 180, 204, 243, 36, 204, 136, 204, 150, 204, 251, 8, 204, 151, 204, 203, 204, 157, 204, 165, 64, 204, 159, 58, 1, 204, 198, 49, 122, 204, 253, 108, 58, 11, 126, 29, 83, 204, 164, 204, 189, 61, 65, 204, 150, 204, 166, 4, 93, 87, 204, 191, 106, 204, 193, 23, 102, 25, 204, 139, 204, 163, 33, 95, 204, 171, 15, 204, 179, 204, 185, 204, 156, 
114, 204, 134, 204, 214, 204, 221, 20, 77, 19, 204, 152, 56, 204, 156, 79, 204, 215, 7, 114, 126, 204, 171, 73, 88, 34, 204, 233, 38, 26, 14, 65, 204, 150, 204, 236, 118, 35, 53, 23, 22, 65, 204, 233, 120, 204, 233, 81, 90, 204, 232, 119, 9, 107, 31, 204, 131, 9], publisher: None, expires: None }}
2025-07-02T01:53:28.834501Z ERROR ThreadId(04) colonylib::pod: /cargo/registry/src/index.crates.io-1949cf8c6b5b557f/colonylib-0.4.3/src/pod.rs:1845: Error occurred: Pointer(PutError(Network { address: NetworkAddress::PointerAddress(aaa518a2cf8260f6bebc769c16b8147ea215adf569696497b7fc1f250823d89a49990187e83fc0f9ae1cf3d44afb7dce) - (acae9b20a0dc6d7da27aa34239d97ee231fd389b014eace164e0171e7fd28969), network_error: PutRecordVerification("Peers have conflicting entries for this record: {PeerId(\"12D3KooWSuTm6wx2myt5BVY6JGXyBZ7eVhYdbeLAub66iKBA5wTV\"): Record { key: Key(b\"\\0\\xd5\\xa4s\\xf9\\x10\\x9d\\xfe\\xbd\\xdeo\\xf6H\\x0b_\\xee\\xf1bX\\xa7X\\xb4\\\"9\\xe1\\xf8;\\xd3w\\xf9\\xc7\\xb8\"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 2, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 150, 0, 98, 204, 200, 92, 20, 114, 22, 204, 166, 204, 183, 40, 204, 238, 49, 96, 41, 18, 59, 204, 169, 45, 70, 204, 150, 204, 246, 20, 106, 204, 190, 204, 211, 204, 222, 64, 114, 204, 136, 204, 154, 7, 204, 215, 36, 204, 162, 75, 75, 204, 199, 3, 204, 204, 204, 171, 115, 204, 252, 13, 109, 73, 25, 204, 154, 10, 21, 204, 149, 204, 252, 121, 204, 223, 96, 204, 207, 204, 242, 71, 204, 188, 8, 204, 208, 122, 99, 103, 28, 204, 138, 65, 204, 141, 59, 16, 13, 204, 222, 39, 204, 235, 72, 2, 204, 202, 51, 52, 204, 196, 204, 158, 204, 222, 103, 204, 162, 114, 204, 218, 204, 153, 102, 204, 158, 204, 235, 204, 252, 204, 152, 204, 148, 30, 204, 188, 204, 201], publisher: None, expires: None }, PeerId(\"12D3KooWLmfdTmmTtKDwcKey28sybA1Kh9SShwAjf84e8pnPsSZT\"): Record { key: Key(b\"\\0\\xd5\\xa4s\\xf9\\x10\\x9d\\xfe\\xbd\\xdeo\\xf6H\\x0b_\\xee\\xf1bX\\xa7X\\xb4\\\"9\\xe1\\xf8;\\xd3w\\xf9\\xc7\\xb8\"), value: [145, 2, 148, 220, 0, 48, 204, 170, 204, 165, 24, 204, 162, 204, 207, 204, 130, 96, 204, 246, 204, 190, 204, 188, 118, 204, 156, 22, 204, 184, 20, 126, 204, 162, 21, 204, 173, 204, 245, 105, 105, 100, 204, 151, 204, 183, 204, 252, 31, 37, 8, 35, 204, 216, 204, 154, 73, 204, 153, 1, 204, 135, 204, 232, 63, 204, 192, 204, 249, 204, 174, 28, 204, 243, 204, 212, 74, 204, 251, 125, 204, 206, 0, 129, 177, 83, 99, 114, 97, 116, 99, 104, 112, 97, 100, 65, 100, 100, 114, 101, 115, 115, 220, 0, 48, 204, 178, 36, 204, 145, 24, 30, 204, 166, 54, 204, 143, 204, 229, 116, 29, 90, 19, 204, 255, 57, 204, 226, 204, 135, 29, 204, 161, 60, 204, 190, 204, 208, 204, 220, 204, 178, 204, 238, 119, 204, 195, 204, 165, 20, 103, 35, 204, 211, 204, 233, 87, 93, 204, 190, 60, 87, 204, 191, 124, 118, 204, 155, 48, 204, 200, 71, 204, 193, 204, 206, 7, 220, 0, 96, 204, 177, 204, 217, 204, 141, 204, 180, 204, 243, 36, 204, 136, 204, 150, 204, 251, 8, 204, 151, 204, 203, 204, 157, 204, 165, 64, 204, 159, 58, 1, 204, 198, 49, 122, 204, 253, 108, 58, 11, 126, 29, 83, 204, 164, 204, 189, 61, 65, 204, 150, 204, 
166, 4, 93, 87, 204, 191, 106, 204, 193, 23, 102, 25, 204, 139, 204, 163, 33, 95, 204, 171, 15, 204, 179, 204, 185, 204, 156, 114, 204, 134, 204, 214, 204, 221, 20, 77, 19, 204, 152, 56, 204, 156, 79, 204, 215, 7, 114, 126, 204, 171, 73, 88, 34, 204, 233, 38, 26, 14, 65, 204, 150, 204, 236, 118, 35, 53, 23, 22, 65, 204, 233, 120, 204, 233, 81, 90, 204, 232, 119, 9, 107, 31, 204, 131, 9], publisher: None, expires: None }}"), payment: None }))

2 Likes

I’m guessing they put some extra tests and messages in to see if my endless reports of pointer reliability have any basis.

If so, maybe I’m not an idiot after all. :rofl:

2 Likes