I am bothered by how the discussion about setting up data flow on the network has gone. The points @neo has raised have a lot of support from various sources I have found.
The team says that neo’s suggestions have been tried, but the only public source regarding them is this PR from Feb 2025. It does not provide any information about the findings and leaves neo’s main points untouched. In November 2025, though, very similar changes were made in this PR, and they were noted to make a clear difference for the better.
What bothers me most is that none of the changes made (and reverted) come with any detailed arguments as to why they were done. There seems to be a lack of understanding of the issue at hand, or at least a lack of will to communicate anything in more detail than “works / does not work”.
According to my research, David and the team may very well be right that tweaking only one parameter would not make a big difference, if all else is left just as it was. But it also seems to me that changes to just one or two other parameters could make a big difference.
So I would really like to see how these parameters could and should be set, and test that on a live network. On the other hand, Autonomi 2.0 is just around the corner, so these changes to the 1.0 code would fall into a rather “academic” or mere “told you so!” domain.
But then again, I am truly curious about this matter, and we might see the same issues in 2.0 too, as is hinted at in this post. And for a couple of weeks, what else is there to do?
Would folks be up for trying this out in a Comnet? If so, how should we proceed? I could share all my findings in the next post in this topic, if there is interest in taking this forward.
V2.0 will be bringing a lot of changes, and the required changes may very well happen organically with the code/agent/AI/whatever that is being introduced to make the node more efficient.
I would wait and see. When all this has died down, I will mention my main concern, and maybe a secondary one, to see what has been done to make them less of an issue.
Those being:
buffer overflows in a router that has multiple nodes behind it, and
latency for setups with sufficient buffer space for most cases. Latency as in: when my router is buffering over 40 MB on a 40 Mbps link, it takes nigh on 6 to 8 seconds for a response to finally be sent to the requester.
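That 6-to-8-second figure follows from simple queuing arithmetic: a buffer drains at the link rate, so its worst-case added delay is the buffered bytes divided by the link speed. A minimal sketch, using the numbers from the post above:

```python
# Worst-case queuing delay added by a full router buffer:
# delay = buffered bytes / uplink rate.

def queuing_delay_s(buffer_bytes: int, link_mbps: float) -> float:
    """Seconds to drain `buffer_bytes` over a `link_mbps` uplink."""
    link_bytes_per_s = link_mbps * 1_000_000 / 8
    return buffer_bytes / link_bytes_per_s

# 40 MB queued on a 40 Mbps uplink: a fresh response waits ~8 s
# behind the queue before it even leaves the router.
print(queuing_delay_s(40_000_000, 40))  # -> 8.0
```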
I say this because there is more than one way to skin a cat. They could make it so the node itself breaks up large chunks and the receiver asks for, say, 8 data blocks instead of one 4 MB chunk (i.e. split the chunk into 8).
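As a rough illustration of that idea (function names are hypothetical; this is not the node’s actual code), splitting a 4 MiB chunk into 8 blocks the receiver can fetch one at a time might look like:

```python
# Hypothetical sketch: split one large chunk into N smaller blocks
# that can be requested individually, then reassembled.

def split_chunk(chunk: bytes, n_blocks: int = 8) -> list[bytes]:
    """Split `chunk` into `n_blocks` near-equal blocks."""
    size = -(-len(chunk) // n_blocks)  # ceiling division
    return [chunk[i:i + size] for i in range(0, len(chunk), size)]

def join_blocks(blocks: list[bytes]) -> bytes:
    """Reassemble blocks in order."""
    return b"".join(blocks)

chunk = bytes(4 * 1024 * 1024)       # a 4 MiB chunk
blocks = split_chunk(chunk)
print(len(blocks), len(blocks[0]))   # -> 8 524288
```

Each block is then small enough that a router buffer only ever has to absorb one block per request, rather than a whole chunk.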
I’ve already been assured that I will get the pointers needed to change the values when the time is right.
I’ll pitch in when the time is right. I agree with @neo: the best time to have a look at this will be after 2.0 has been launched and we have had a chance to have a poke at it.
I may still progress with this using just these QUIC settings. If I do, it may very well end up being a totally useless venture of a noob vibe coder with very little understanding.
I’ll only do it, if I find it fun & interesting enough. It would have little meaning for the future of the network, but might give a new ground on how I evaluate the work of the team. (Might also give me a reason to hide in the forest for a few months… )
If I miraculously manage to make a fork with a few changes to these parameters, would anyone care to review my changes to ascertain that they don’t cause trouble for anyone running them? And would anyone join my Comnet, if I make one?
There’s also a problem of measuring the effects, but… that’s for later.
Yea, this is not what I am talking about. For one, this is the receiver stream window size. That will not cause buffer usage in routers, since you are going from slow internet to 1 Gbps LAN.
But it would affect a downloader’s (node/client) experience.
As David says, though, just changing these sorts of values without some understanding of the interactions can really nuke the connections. Make it so slow that a carrier pigeon would be faster, or even cause everything to time out.
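The interaction being described can be made concrete with a bandwidth-delay product calculation (the window and RTT figures below are made-up illustrative numbers, not Autonomi’s actual settings): a flow-control window far above the path’s BDP lets the sender keep that much data in flight, and everything beyond the BDP sits in router buffers as queuing delay.

```python
# Illustrative only: how an oversized flow-control window turns into
# router queue and extra latency on a slow uplink.

def bdp_bytes(link_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes the path itself can hold."""
    return link_mbps * 1_000_000 / 8 * rtt_ms / 1000

link_mbps, rtt_ms = 40.0, 100.0
bdp = bdp_bytes(link_mbps, rtt_ms)   # 500_000 bytes fit "in the pipe"
window = 10_000_000                  # hypothetical 10 MB window

excess = window - bdp                # bytes parked in router buffers
extra_delay_s = excess / (link_mbps * 1_000_000 / 8)
print(bdp, round(extra_delay_s, 2))  # -> 500000.0 1.9
```

This is why tuning one window in isolation can backfire: the “right” value depends on link rate and RTT, both of which vary per node.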
Anyhow, we have time to wait; nothing is going to be done at this time, no matter what anyone feels. And further discussions seem to indicate that other measures are going to be put in place.
It’s interesting that old comnets had smaller chunk sizes and were much easier on my router.
I agree on waiting for the v2.0 dust to settle though. The debate has strained emotions. Let’s hope v2.0 is smooth sailing and we can all move on with enjoying smooth uploads.
David is keen that we get a comnet together (or several).
Until we get some familiarity with Autonomi 2.0, that’s unlikely to happen as easily as it used to.
This time around, unlike the free-wheeling fun we had in years gone by, the new Comnet(s) will only be useful if we have a bit of discipline and planning beforehand, and regular reporting in a standard format.
I’d suggest the best practical way forward right now is tiny steps: downloading GitHub - saorsa-labs/saorsa-node: Pure quantum-proof network node for the Saorsa decentralized network and getting some local familiarity with this fast-changing code. Once we (i.e. those most likely to participate in said Comnet) get half a clue, and an idea of how many nodes we could/should run on each box, then we can look at what it would take to set up our own Comnet(s):
What would be the minimum viable node size?
What would be the minimum viable Comnet size?
How much HDD space will we need to rent off Hetzner etc to get boot-strapped?
It’s done well, but surely 2.0 should be the starting point?
We cannot contribute anything more to 1.0, other than validating the node upgrade code, IIUC …
Yes, and I will formulate a test plan for it. The MikroTik routers allow setting the buffer size on any port or LAN IP address. They also allow setting the speed of outflow from the buffer, which would simulate different upload speeds even for those with Gbps connections. This is something Maidsafe cannot do on their datacentre servers.
The inflow to the buffer cannot be controlled, since it is UDP packets. In fact, this is a major contributor to the problems we are testing for.
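The mismatch described here, uncontrolled UDP inflow against a rate-limited outflow, can be sketched as a toy queue simulation (all rates and sizes are made-up examples, not measured values):

```python
# Toy simulation: a router buffer fed by UDP inflow and drained at a
# fixed uplink rate. Inflow above the drain rate accumulates until
# the buffer cap is hit, at which point packets are dropped.

def simulate(inflow_per_tick: list[int], drain_per_tick: int, cap: int):
    """Return per-tick buffer occupancy and total bytes dropped."""
    buf, dropped, occupancy = 0, 0, []
    for inflow in inflow_per_tick:
        buf += inflow                 # UDP arrives regardless of buffer state
        if buf > cap:                 # overflow: excess is dropped
            dropped += buf - cap
            buf = cap
        buf = max(0, buf - drain_per_tick)  # uplink drains at a fixed rate
        occupancy.append(buf)
    return occupancy, dropped

# 10 ticks of 1 MB/tick inflow into a 5 MB buffer drained at 0.5 MB/tick:
occ, dropped = simulate([1_000_000] * 10, 500_000, 5_000_000)
print(occ[-1], dropped)  # -> 4500000 500000
```

The buffer climbs by the inflow/outflow difference every tick until it hits the cap, and from then on every tick drops the excess, which is exactly the overflow scenario the test plan wants to reproduce.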
Also, I need a small script that can extract the fullness of the buffer and the traffic readings from the router, maybe once a second. Then people can run upload or download scripts on specific files so that multiple testnets can be compared against each other.
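One hypothetical shape for that collector: poll once per second and append timestamped readings as CSV rows that later comparisons can consume. The field names and the `read_router_stats` stub are assumptions; the real script would pull these values from the MikroTik API.

```python
# Hypothetical collector sketch: poll router stats once per second and
# append CSV rows (timestamp, buffer bytes, rx/tx bytes). The reader
# is stubbed; a real version would query the MikroTik API instead.
import csv
import io

def read_router_stats() -> dict:
    # Stub standing in for a MikroTik API query (values are made up).
    return {"buffer_bytes": 1_250_000, "rx_bytes": 9_000_000, "tx_bytes": 4_000_000}

def format_row(ts: float, stats: dict) -> list:
    return [round(ts, 1), stats["buffer_bytes"], stats["rx_bytes"], stats["tx_bytes"]]

def collect(seconds: int, out) -> None:
    """Write a header plus one stats row per second to `out`."""
    writer = csv.writer(out)
    writer.writerow(["ts", "buffer_bytes", "rx_bytes", "tx_bytes"])
    for i in range(seconds):
        writer.writerow(format_row(float(i), read_router_stats()))
        # time.sleep(1) here in the real once-per-second loop

buf = io.StringIO()
collect(3, buf)
print(buf.getvalue().splitlines()[0])  # -> ts,buffer_bytes,rx_bytes,tx_bytes
```

Keeping the output as plain CSV means runs from different testnets and different routers can be lined up side by side with any spreadsheet or plotting tool.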
I expect we would need at least 3 such routers in the testnet, but the more the better to improve the results.
Basically, we would fill up the nodes with specific data files, then start the testing scripts, recording upload times, download times, error counts, and the data extracted from the MikroTik routers. So there would be data coming in from all the people running nodes and those uploading/downloading. I would hope for at least 6 people running nodes, with at least 3 using MikroTik routers. I know the 5009 router became popular, and for good reason.
I need two things from the devs, and I have been assured of some help pointing to the relevant code. The two things being: where to change the QUIC parameters, and how to neuter the requirement to pay for uploads. We can do without the test-token complexity in the testing.
For the number of helpers, we need criteria so that we have a controlled test environment. Requirements will be at least 3 people with capable MikroTik routers, and I expect we’ll need at least 6 people running nodes. A further requirement would be the ability to run the tests for a while as we cycle through at least 10 test runs, probably more.
I’d rather contribute the small number of nodes I can run to the official 2.0 when it is available. And until then, to the remains of the 1.0 network, to give it as soft a landing as possible and get more data for the team. I feel the end is nigh for it, though.
I’d estimate at this time we are looking at at least 3 months before testing would be optimal, and of course we might get a better idea of whether it will even be needed or worthwhile.
I wonder how many people on the live net running nodes aren’t in the group here, wanting a comnet?
If we are running the majority of nodes, with the network nearing a conclusion, we could treat it like a comnet, but with an element of randomness (other node runners outside this group).
Ofc, if the changes might collapse the network altogether, maybe it is the wrong place to test. On the other hand, it is a perfectly representative network, complete with data, which is ideal for testing.
Surely the testing should be on Autonomi 2.0, and the latest code will change so much in the next few weeks that it’s hardly worth testing (other than for what can be learnt about upgrades)?