and then the upload without --no-verify failed as well…
willie@gagarin:~$ SN_LOG=all time safe --log-output-dest data-dir files upload ~/Videos/cooking/madhur/madhur.jaffrey\'s.flavours.of.india.episode.06.avi
Logging to directory: "/home/willie/.local/share/safe/client/logs"
Using SN_LOG=all
Built with git version: aba95fb / main / aba95fb
Instantiating a SAFE client...
🔗 Connected to the Network Loaded wallet from "/home/willie/.local/share/safe/client/wallet" with balance Token(99999948658)
Preparing (chunking) files at '/home/willie/Videos/cooking/madhur/madhur.jaffrey's.flavours.of.india.episode.06.avi'...
Making payment for 699 Chunks that belong to 1 file/s.
Error: Failed to send tokens due to Network Error Could not retrieve the record after storing it: 3d8c30a37508d68bc49d3bf75c902523df0eb3727b4b6dd26dcc6402fb8821bd.
Location:
sn_cli/src/subcommands/wallet.rs:305:26
Command exited with non-zero status 1
13.43user 4.44system 10:39.20elapsed 2%CPU (0avgtext+0avgdata 765120maxresident)k
697312inputs+2640outputs (2major+1099891minor)pagefaults 0swaps
I’ve just published vdash v0.8.6, which adds display of the chunk storage fee (available since ThePriceIsRightNet) and fixes a progressive memory leak.
Try this again with no nodes running on your internet connection and see if it occurs the same.
Maybe your nodes are participating in the upload process. David explained that the client finds the destination nodes by asking the nodes it is connected to for nodes closer to the destination. This process is repeated until the final 8 nodes are found.
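For anyone unfamiliar with that lookup, here is a toy sketch of the idea in Rust. This is my own illustration of an iterative closest-peer search, not code from `sn_networking` — the function and type names are invented, addresses are shrunk to `u64`, and each "query" is simulated against a full peer table instead of real network messages.

```rust
// Toy sketch of the iterative lookup David described: keep asking the peers
// we know about for peers closer (by XOR distance) to the target address,
// until the set of 8 closest peers stops changing. Illustrative only.

const CLOSE_GROUP_SIZE: usize = 8;

/// XOR distance between two addresses (real addresses are 256-bit; u64 here).
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// Simulates querying `peer`: it "replies" with every peer it knows that is
/// strictly closer to `target` than itself.
fn closer_peers(peer: u64, target: u64, table: &[u64]) -> Vec<u64> {
    let d = xor_distance(peer, target);
    table
        .iter()
        .copied()
        .filter(|p| xor_distance(*p, target) < d)
        .collect()
}

fn find_close_group(start: u64, target: u64, table: &[u64]) -> Vec<u64> {
    let mut known: Vec<u64> = vec![start];
    loop {
        // Ask every peer we currently know for closer peers, keep ourselves too.
        let mut discovered: Vec<u64> = known
            .iter()
            .flat_map(|p| closer_peers(*p, target, table))
            .collect();
        discovered.extend(known.iter().copied());
        // Keep only the CLOSE_GROUP_SIZE closest, dropping duplicates.
        discovered.sort_by_key(|p| xor_distance(*p, target));
        discovered.dedup();
        if discovered.len() > CLOSE_GROUP_SIZE {
            discovered.truncate(CLOSE_GROUP_SIZE);
        }
        if discovered == known {
            return known; // converged on the closest group we can reach
        }
        known = discovered;
    }
}

fn main() {
    let table: Vec<u64> = (0u64..200).map(|i| i * 7919).collect(); // fake ids
    let group = find_close_group(table[0], 123_456, &table);
    println!("close group: {:?}", group);
}
```

Each round can only tighten the candidate set, which is why the small-network effect described above happens: with few nodes, your own nodes are likely to be on (or near) the lookup path for your own uploads.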
The network is small, so it's likely one or more of your nodes will be queried by your own uploads. This might explain some of the additional traffic occurring when you upload. @joshuef Could this explain some of that additional traffic?
I only run the client and I see similar behaviour. Can’t say it’s the same as @Southside, but with verification the upload process is like molasses and generates a lot of download traffic.
Maybe it comes down to the number of messages sent and received while finding the destination nodes' addresses and getting prices, and then doing it all over again when finding the nodes to send the chunks to.
Failed to store all chunks of file '1MB_7' to all nodes in the close group: Network Error Could not retrieve the record after storing it
and the speed seems to have gone back to what it was after a very bad patch yesterday.
These are the times of the ‘Run’ starts. Each run is the upload of 100 1MB files:-
Run 1:  Tue Aug 15 06:19:53 UTC 2023
Run 2:  Tue Aug 15 07:01:54 UTC 2023
Run 3:  Tue Aug 15 07:41:30 UTC 2023
Run 4:  Tue Aug 15 08:14:52 UTC 2023
Run 5:  Tue Aug 15 08:46:21 UTC 2023
Run 6:  Tue Aug 15 09:25:55 UTC 2023
Run 7:  Tue Aug 15 10:19:26 UTC 2023
Run 8:  Tue Aug 15 11:32:35 UTC 2023
Run 9:  Tue Aug 15 12:21:01 UTC 2023
Run 10: Tue Aug 15 13:33:03 UTC 2023
Run 11: Tue Aug 15 14:01:41 UTC 2023
Run 12: Tue Aug 15 15:59:28 UTC 2023
Run 13: Tue Aug 15 16:27:19 UTC 2023
Run 14: Tue Aug 15 17:51:58 UTC 2023
Run 15: Tue Aug 15 18:19:11 UTC 2023
Run 16: Tue Aug 15 18:57:08 UTC 2023
Run 17: Tue Aug 15 20:49:53 UTC 2023
Run 18: Tue Aug 15 23:07:42 UTC 2023
Run 19: Wed Aug 16 01:51:29 UTC 2023
Run 20: Wed Aug 16 04:48:18 UTC 2023
Run 21: Wed Aug 16 08:01:30 UTC 2023
Run 22: Wed Aug 16 13:18:20 UTC 2023
Run 23: Wed Aug 16 17:57:52 UTC 2023
Run 24: Wed Aug 16 18:42:42 UTC 2023
Run 25: Thu Aug 17 00:03:03 UTC 2023
Run 26: Thu Aug 17 00:52:48 UTC 2023
Run 27: Thu Aug 17 01:22:10 UTC 2023
Run 28: Thu Aug 17 02:19:49 UTC 2023
Run 29: Thu Aug 17 03:06:33 UTC 2023
Run 30: Thu Aug 17 04:03:40 UTC 2023
Run 31: Thu Aug 17 04:49:26 UTC 2023
Run 32: Thu Aug 17 05:55:20 UTC 2023
Run 33: Thu Aug 17 06:27:33 UTC 2023
So around 21:00 UTC on 15/08 the speed nosedived and was terrible for most of yesterday, but it picked up around 18:00 UTC yesterday and has been back to how it was near the start ever since.
Certainly could. I didn’t realise there were nodes on that same machine, but that makes sense.
@Southside Looking at those logs, there’s a lot of chunks going up, which could be a lot of messaging. But there’s nothing untoward there jumping out at me at first glance.
Not in a place to do much tinkering, but it feels slower than the last test despite dropping to 8 peers.
Unfortunately I did not keep any of the timed PUT/GET data from the last net; hopefully when I get back this afternoon I'll find that I did not nuke it all.
Could it just be that the network is getting flooded with messages to get prices? It feels a bit faster than before, and it stands to reason that more people were trying it out earlier in the week.
Up to 16 Tokens per record now!
What are the increments here? Is the next one 32?
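The fees seen so far (2, 4, 8, 16) look like a simple doubling schedule. I haven't checked the pricing code, so this is only a guess, but under that assumption the increments would be powers of two:

```rust
// Assumed doubling schedule (a guess, not the actual safenode pricing algo):
// each step multiplies the store cost by two.
fn store_cost(step: u32) -> u64 {
    2u64.pow(step) // step 1 -> 2, step 2 -> 4, ..., step 4 -> 16, step 5 -> 32
}

fn main() {
    for step in 1..=5 {
        println!("step {step}: {} tokens", store_cost(step));
    }
}
```

If that guess holds, yes: the step after 16 would be 32.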
Added tracking of Starting vs Final Store Cost calculation on a per-PUT-request basis
Re-introduced duration (ms) for Request IDs (last timestamp minus start timestamp for the same Request Id), associated with sending a request to and receiving a response from peers
Observations:
Nearly 46 hours went by before PUT requests started flowing in (2023-08-16 14:27:30 UTC)
When PUT requests did get logged, there were roughly 300+ requests within 3 minutes (the spike on the graph above)
Why is the starting cost always 1, even when a node has 0 or hundreds of records stored in it?
Note: I haven’t read up on how the storage cost algo works yet (my apologies in advance)
FWIW, the final store cost seems to have started at 2, then rose to 4, 8 and 16, then finally dropped back down to 4
FWIW, before the large spike of PUT requests the average number of connected peers was ~287; afterwards it ballooned to ~424 and remained steady at that higher base level
Memory continues to rise on the safenode pid (expected for now)
Avg duration of a send/receive Request Id was ~2764 ms, though one Request Id hit exactly 30000 ms (possibly a timeout in the code, which might be expected)
Still To Do:
Parse the verification fee from messages such as the one below:
I am not clear on how this works.
Say we have a network of 200 nodes that are 70% full and charging 128 tokens per record.
20 new nodes come on board. Is the design such that the data gets quickly and evenly redistributed over the now-220-node network, or are some people winning the lottery and paying 4 tokens per record?
Is there a likelihood that some unlucky folk in a neighborhood that didn’t get new nodes are paying 128 while others pay 4?
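To make the worry concrete, here is a toy model of that scenario. Everything in it is invented: I'm assuming the fee is set per close group from its local fill level, and I've picked a made-up rule (fee doubles per 10% of fill) purely so the numbers land on the 128 and 4 from the question. The real pricing algorithm may work quite differently.

```rust
// Toy model (NOT the real algorithm): assume each close group charges a fee
// derived from its own fill level, doubling for every 10% full.
fn fee_for_fill(fill_pct: u32) -> u64 {
    2u64.pow(fill_pct / 10) // 70% -> 2^7 = 128, 20% -> 2^2 = 4
}

fn main() {
    // Neighborhood that got no new nodes stays 70% full...
    let crowded = fee_for_fill(70);
    // ...while one where new nodes absorbed data drops to 20% full.
    let relieved = fee_for_fill(20);
    println!("crowded neighborhood: {crowded} tokens, relieved: {relieved} tokens");
}
```

If fees really are purely local like this, then yes, nodes' addresses are random, so some neighborhoods could stay expensive until new nodes happen to join near them, which is exactly the lottery effect the question describes.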