Do you know where this is in our code? We can test it in a dev testnet to see the difference.
It's in the libp2p libraries.
Search for this:
// Ensure that one stream is not consuming the whole connection.
max_stream_data: 10_000_000,
From the docs for Config in libp2p_quic - Rust:
`max_stream_data: u32`
Max unacknowledged data in bytes that may be sent on a single stream.
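For reference, a minimal sketch of how that default could be overridden in a test build, assuming the libp2p crate's quic feature and the public `max_stream_data` field quoted from the libp2p_quic docs above; the 128 KiB value is purely a hypothetical test setting, not a recommendation:

```rust
use libp2p::identity::Keypair;
use libp2p::quic; // assumes the "quic" feature of the libp2p crate

// Sketch only: build a QUIC config with a smaller per-stream window for testing.
fn quic_config_for_test(keypair: &Keypair) -> quic::Config {
    let mut config = quic::Config::new(keypair);
    // The snippet above sets this to 10_000_000 (~10 MB). 128 KiB here is a
    // hypothetical value for an ISP-router testnet, not a recommendation.
    config.max_stream_data = 128 * 1024;
    config
}
```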
I predict you will see no difference
The reason being that they use carrier-grade routers in DigitalOcean and connections in and out are the same speed, so there is no real buffering in the routers at all. It's going to affect ISP-supplied routers at home the most.
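To put rough numbers on that (assumed link speeds for illustration, not measurements): a full 10 MB per-stream window takes seconds to drain through a typical home upstream but only milliseconds through a data-centre link, so only the home router ends up holding a deep queue.

```rust
// How long a full per-stream window would take to drain at a given link speed.
// The 20 Mbps and 10 Gbps figures below are assumptions for illustration.
fn drain_secs(window_bytes: f64, link_mbps: f64) -> f64 {
    (window_bytes * 8.0) / (link_mbps * 1_000_000.0)
}

fn main() {
    let window = 10_000_000.0; // the 10 MB max_stream_data default quoted above
    println!("home 20 Mbps upstream: ~{:.1} s of queued data", drain_secs(window, 20.0));
    println!("DC 10 Gbps link:       ~{:.3} s of queued data", drain_secs(window, 10_000.0));
}
```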
On a very high level, community members should be allowed to design, plan, and run their own testnets so they can try out their own ideas simply and quickly.
Nobody is stopping us running our own testnets.
We can clone from GitHub and do what we like to the code. Persuading a critical mass of others to join may be more of a problem, but if the goals and timescale of the proposed testnets are well communicated, maybe not so much.
@Josh is your man for advice there.
To test varying QUIC window sizes we would need a reasonable (@neo fill in this embarrassing blank please) number of nodes behind bog-standard ISP routers, and some agreed test parameters so we know we are comparing apples to apples.
While the actual network is being subsidised there is absolutely no incentive to take nodes over to a community testnet.
And finding the 2k nodes for a viable network on standard home hardware would be a challenge at the best of times, as most of the hard-core testnetters are already running MikroTiks or other fancy routers.
Did I say it would be easy?
However until somebody takes the first step it will always be impossible.
I have to admit to an old but gold Draytek router and some nice fibre at home myself.
I can in theory run some nodes on ADSL through an ISP router, as we are forced to pay for ADSL while we still have a landline.
@neo, a couple of questions:
Would we see any difference if the network was say 50:50 VPS/home machines?
How quickly could we establish that this change could make a difference? 10 mins, 12 hours, 3 days?
Could we run a small test and, if results show any promise, look harder at how we could get a larger "from-ISPs-only" testnet up for a more detailed examination?
IF we do this, can we agree in advance to a properly structured series of tests, i.e. more than just BegBlag getting flung about? No disrespect to the Besoms.
Realistically, any number past 5% of your investment portfolio spent on Maidsafe would be outrageous.
I take pride in being outrageous.
And I don't have an investment "portfolio".
I tend to look down on those who do as being too lazy to do any actual work themselves.
Especially those pretentious enough to use the word "portfolio" to describe a collection of short- and long-term bets.
"Aye I'm just back from the bookies, hud tae check ma portfolio"
My money is in Maidsafe because I believe in it.
At the very least, if the proper place in the code is identified for the suggested change(s), we can confirm there isn't a regression or abnormal behaviour detected across a wide variety of metrics specifically because of this change (at least within the DO internal controlled environment).
If so, that itself is a good sign on the nodes we do control (i.e. it didn't make the current scenario worse, not that it was supposed to either).
I believe it's been on the team's radar to attempt a REF vs TEST run with just this change, but it hasn't been fully prioritized yet among all the existing backlog items.
Respectful request to fully prioritise it ASAP, please
You should see the list of priorities. Picking one is not so easy.
I never said it was easy
That's why we have you for the bits that aren't easy.
Can we consider a short period (2-3 days) of incentives and a testnet for only ISP router nodes to see if this QUIC window thing would make a difference?
Yeah - I know that's a whole shitload more work for you. And I also know there are probably a lot more aspects to doing this that I haven't thought of yet.
Man, I have some things I am pushing like mad, I mean really pushing, and some are vital to me, but the team are beyond hyper-focussed and almost impossible to move from their path. It bugs me and drives me mad that there are things I feel are vital, but I cannot argue: they are doing great things with stability. I am desperate to get the client API's Python bindings and so on. I feel they are incredibly vital for testing, but I almost don't stand a chance of moving them, so I work on the side where I can to help out.
So pushing them to stop for 1-2 days would be impossible unless there was a nuclear weapon going off; the team seem to be totally dedicated to exactly what they are doing. Stress levels are high as well, I think.
We are uncomfortably close to stupid people making that a reality.
Stress management is very important. This is a marathon not a sprint. There will be enough time to try everyone's ideas.
That's why calendar deadlines are a real liability. Autonomi tries to fit everything into an arbitrary schedule, and it causes a lot of stress.
Wouldn't it be ironic if Autonomi made the calendar deadline it set for itself, by compromising the quality of the finished product and burning out the whole team?
If it were stressed. No stress means everything looks fine; one or two chunks every so often is no sweat for almost any router out there, except a Flintstones router.
If we could monitor the retry rate of the large chunk transmissions then it should not take too long. With a network of mostly potato ISP routers, I'd expect that the bigger the chunk sizes being transferred, the quicker we'd see it. At 4MB, with most routers passing many chunks every couple of seconds through connections < 50Mbps, we should see them within minutes, and 10 minutes to confirm. At 128KB we should only see background retries, so it would take a few hours to confirm.
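Rough numbers behind those timescales (a sketch with the assumed figures from above, ignoring everything else on the line):

```rust
// Back-of-the-envelope on-the-wire time for a single chunk at an assumed link
// speed. The link speed and chunk sizes are the illustrative figures from the
// discussion above, not measurements.
fn chunk_secs(chunk_bytes: f64, link_mbps: f64) -> f64 {
    (chunk_bytes * 8.0) / (link_mbps * 1_000_000.0)
}

fn main() {
    // A 4 MB chunk at 50 Mbps occupies the link for roughly 0.67 s, so a few
    // concurrent chunks can easily overflow a small home-router buffer and
    // retries should show up within minutes.
    println!("4 MB  @ 50 Mbps ~ {:.2} s", chunk_secs(4.0 * 1_048_576.0, 50.0));
    // A 128 KB chunk takes ~0.02 s, so mostly background retries are expected.
    println!("128 KB @ 50 Mbps ~ {:.3} s", chunk_secs(128.0 * 1024.0, 50.0));
}
```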
Unlikely, due to not generating the churn and upload/download traffic needed to confirm it. And as @aatonnomicc says, the ones likely to help are the ones who have good routers. Also, if we have others joining, then who's to know if the problem is this or just the general "what settings do I need" type of issue.
Also I think David was more focused on the Tor question of the size of data block being transferred, rather than testing the router buffer issue.
Yes, very much agree. And I assumed libp2p has implemented this correctly, which is another reason why I predicted there would be no difference seen, i.e. no regression.
If there is no regression then it should be implemented anyhow, for good traffic control through all sorts of unknown routers out there. I say it should based on experience over the decades and my own communications programming a while back. It keeps the traffic flow control at a level that is kind to most, if not all, post-2000 routers. We saw this with the 1/2MB max chunk size: that size did not cause enough trouble to rise above the noise of the other issues tweaked since those betanets.
Nowhere near as close as in my younger days.
That is not pretentious, it is the definition of "portfolio".
Nah, a portfolio is the big thing you had to make out of cardboard in art on the last day of school to take home all your paintings.
Don't try to tell me otherwise, I made at least 7 of the buggers. And my mum kept them all. So there.
How wise is it to start pressuring the network with a community initiative when the team is doing things step by step? Such an approach would likely be counterproductive in nearly every aspect I can think of.