Autonomi client libraries as a shared service

Hi all,

Today I realized my internet connection has been broken for two weeks because I had something like 20 AntTp instances running. They were left over from intense development of my browser launcher and had been hanging for days. Anyway, this means that each instance runs its own client. Since a client connects to many nodes, it consumes resources, which brings me to another problem. Currently there are AntTp, Dweb, Jams… all separate apps. If we want a fluent experience, we can’t ask users to run so many different apps. Just imagine running a few such apps and your local router dying.

It would be better if we started thinking about a layered architecture. Right now AntTP is a service that acts as a web server, but it also does some logic on top of that. So does dweb. But we need both to work in parallel in the same browser, and ideally both should use the same Autonomi client instance, shared between them. Of course this should be optional, still letting users run one client instance per app.
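
Roughly what I have in mind, as a sketch only (placeholder types, not the real autonomi crate API):

```rust
use std::sync::Arc;

// Placeholder for the real Autonomi client; in practice this would wrap the
// `autonomi` crate's client type together with its connections and routing state.
struct AutonomiClient;

impl AutonomiClient {
    fn connect() -> Self {
        // One place where peers are discovered and connections are opened.
        AutonomiClient
    }

    fn fetch(&self, xor_address: &str) -> Vec<u8> {
        // Hypothetical read; the real call would be async and fallible.
        println!("fetching {xor_address}");
        Vec::new()
    }
}

// Each web server (an anttp-like one, a dweb-like one, ...) holds a handle to
// the same client instead of spinning up its own.
struct AnttpLikeService { client: Arc<AutonomiClient> }
struct DwebLikeService  { client: Arc<AutonomiClient> }

fn main() {
    let shared = Arc::new(AutonomiClient::connect());
    let anttp = AnttpLikeService { client: Arc::clone(&shared) };
    let dweb  = DwebLikeService  { client: Arc::clone(&shared) };

    // Both services issue reads through the one client, so node connections
    // and routing state are shared rather than duplicated per app.
    let _page  = anttp.client.fetch("a1b2c3");
    let _music = dweb.client.fetch("d4e5f6");
}
```

The only point is that both services share one set of node connections and one routing table instead of each opening their own.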

I don’t have any solutions yet. I created a web browser launcher with SOCKS5, so it can run anttp, dweb and TOR together, and in theory any number of other web servers, and route between them by domain name. But this still brings many complications for the router, because of the many Autonomi clients running.

I think we should have some minimal routing proxy that everyone uses. It would route to the whole jungle of various web servers (anttp, dweb, …), and all of those web servers should be able to use a shared local Autonomi client service. Of course nothing prevents an app from using its own client instance. But if I, as an Autonomi user, am forced to run many different services on my machine, they should not crash my router.
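
For example, the proxy’s routing decision could be as simple as this (the domain suffixes and ports are made up here, just to show the dispatch idea):

```rust
// A minimal sketch of the "one routing proxy in front of everything" idea.
// A real proxy would read these mappings from config and actually forward
// the HTTP/SOCKS traffic instead of just printing the decision.
fn backend_for(host: &str) -> Option<&'static str> {
    if host.ends_with(".anttp") {
        Some("127.0.0.1:18888")      // local anttp web server (assumed port)
    } else if host.ends_with(".dweb") {
        Some("127.0.0.1:5537")       // local dweb server (assumed port)
    } else if host.ends_with(".onion") {
        Some("127.0.0.1:9050")       // Tor SOCKS proxy (its usual default port)
    } else {
        None                         // fall through to the normal internet
    }
}

fn main() {
    for host in ["mysite.anttp", "blog.dweb", "example.onion", "example.com"] {
        match backend_for(host) {
            Some(addr) => println!("{host} -> {addr}"),
            None => println!("{host} -> direct"),
        }
    }
}
```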

Was this topic discussed before?

3 Likes

I accept that at a time like this, when the gates to community development have only recently been opened, this is normal and to be expected.

Yes of course, in time (how much?) there will be collaboration, confluence and rationalisation of these apps - or app components as I tend to think of them.

IMHO the best way to speed up that rationalisation is for

EVERYONE TO ACTIVELY TEST AS MANY OF THE NEW APPS AS POSSIBLE.

That way we get maximum eyes on the situation, we see what works well, what doesn’t, what does one thing very well but is weaker in other areas etc etc etc.
We need real-life testing in depth of ALL the available projects, otherwise the outcome will be that the consensus on the way forward will be formed by a rather small group.
IMHO that is a BadThing.

Much has been made of “marketing” recently.
The marketing that matters right now is getting those who can to test, AND encouraging those who think it’s too difficult, that others are better placed to decide, that it’s too much work etc etc, that their voice and experiences matter and that it’s really not that difficult to get tore in and try out these new apps.
Sometimes some of us are too close to the trees to see the forest and we need fresh eyes to come in and look at the big picture.
So @Herodotos is correct to raise this.

YOU can help by running Atlas, Friends, colonylib and all the others. Tell us what works, what doesn’t, what could be done better and, if necessary, “WTF were you thinking about?”

5 Likes

I think one issue is that the client and node routing tables are tailored to the xorNames they need to talk to. Basically a single-purpose system. Uploads only need a limited number of endpoints to be contacted at any one time, set by the batch size, which defaults to 1; much more than 8 is not a good day. This basically makes it a sequential series of (almost) single-purpose routing tables. And of course a node is singular, since it’s tied to its peerID → netaddress (xorname) → xor address.
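
As a rough illustration of that sequential-batches behaviour (toy code, not the actual client internals; `upload_chunk` and the chunk names are invented):

```rust
use std::thread;

// Illustration only: uploads proceed as a sequential series of small batches,
// so at any moment the client only needs routes to `batch_size` destinations.
// `upload_chunk` stands in for the real (async, networked) call.
fn upload_chunk(name: &str) {
    println!("uploading chunk {name}");
}

fn main() {
    let chunk_names = ["c0", "c1", "c2", "c3", "c4", "c5"];
    let batch_size = 2; // the real client defaults to 1, per the post above

    for batch in chunk_names.chunks(batch_size) {
        // Within a batch the chunks go out in parallel...
        let handles: Vec<_> = batch
            .iter()
            .map(|&name| thread::spawn(move || upload_chunk(name)))
            .collect();
        for handle in handles {
            handle.join().unwrap();
        }
        // ...but the next batch only starts once this one finishes, which
        // keeps the set of endpoints (and connections) needed at once small.
    }
}
```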

Now if many apps use the one client, then it is trying to turn an essentially sequential process into many paths all at once.

My thought is that this requires a redesign of the flow of the client software. But it also means that having multiple endpoints (maybe hundreds compared to a handful) will require a routing table for each (maybe partly shared) and thus a higher connection count.

I am sure there are ways to reduce the routing issue of requiring so many connections for each endpoint.

1 Like

Thanks for the explanation. This complicates it a lot. But still, as far as I understand, only uploads are sequential. Downloads are parallel, correct? In that case, in the real-life scenario of browsing websites, listening to music, etc., it rarely happens that multiple such services want to upload at once, and when they do, it is usually some background process.

I am just thinking out loud, trying to find out how real this problem is, and whether it is a problem at all.

If the goal is to create an alternative to the internet, then we need to expect that there will be tens of apps and services running on a single computer. Even if it is 5 on average, in a household/school/company where a single router handles many such computers, that router will be in trouble. I accidentally overloaded my router for the whole office with 10-15 clients running.

I know it is not a real problem now, but if we are about to build the libraries which will be the foundation for all future Autonomi client software, I think it would be wise to think about these bottlenecks now rather than later.

It’s kind of sequential-parallel, AFAIK. That is, the routing table grows to accommodate the required connections.

Guess the point I was trying to convey is that each endpoint the client tries to reach requires a certain number of connections. So if you have 10 apps using one client and each has a batch size of 4, then there are 40 endpoints, and essentially some common connections plus 40 sets of other connections.

Even to do that would of course need changes to the code, I’d say. And as far as I can see you might save 20 to 50% of the connections compared to using the 10 clients (one per app).
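
Back-of-envelope only, with guessed numbers for the per-endpoint connection count and the overlap, just to show where an estimate like that could come from:

```rust
// Rough arithmetic; `conns_per_endpoint` and `shared_fraction` are guesses
// for illustration, not measured values from the real client.
fn main() {
    let apps = 10;
    let batch_size = 4;
    let endpoints = apps * batch_size;   // 40 concurrent endpoints
    let conns_per_endpoint = 5;          // assumed lookup + transfer connections
    let shared_fraction = 0.3;           // assumed overlap when one client serves all apps

    // One client per app: nothing is shared between the apps' routing tables.
    let separate_clients = endpoints * conns_per_endpoint;

    // One shared client: some of the lookup connections are common, which is
    // where a "20 to 50%" style saving estimate comes from.
    let shared_client =
        (separate_clients as f64 * (1.0 - shared_fraction)).round() as i32;

    println!("endpoints: {endpoints}");
    println!("10 separate clients: ~{separate_clients} connections");
    println!("1 shared client:     ~{shared_client} connections");
}
```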

Please explain the details to me. Are you talking about endpoints that the client is trying to download a certain chunk from at a given moment? Or long-term endpoints for communication, which stay connected even if there are no download/upload calls? I don’t see a big problem in the actual parallel activity; the problem I see is the waiting state, where the clients are doing nothing and waiting for activity, yet they were still able to overload the router. At least my router problems were solved when I killed those hanging AntTp services.

So if I want a record from a particular XorName (→ xor address), then the client has to request a path to the nodes holding the record (the closest nodes). To do that, the route has to be determined by sending out requests to nodes that are closer and closer. Each one causes a connection. The number of nodes queried is determined by the nodes in the routing table, starting with the closest, so it is not necessarily O(log n) because part of the way is already there.

Now when you want another record, the process has to be done again. Connections are made to do this, and this may change the routing table as well, as it reconfigures.
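
A toy version of that walk, just to illustrate why each record costs a few fresh connections (the ids and the fake `query` are invented; the real Kademlia-style logic is more involved):

```rust
// Each step queries the closest peer seen so far (one connection per query)
// and learns about peers even closer to the target xor address. The
// "network" here is just a local Vec, so it converges very quickly.

fn distance(a: u64, b: u64) -> u64 {
    a ^ b // XOR metric
}

// Pretend that querying `peer` returns the peers it knows about. Real nodes
// would return only the closest entries from their own routing table.
fn query(peer: u64, network: &[u64]) -> Vec<u64> {
    let _ = peer;
    network.to_vec()
}

fn main() {
    let network: Vec<u64> = vec![0x1100, 0x3fa0, 0x70aa, 0xa010, 0xa0f0, 0xa0fe];
    let target: u64 = 0xa0ff; // xor address of the record we want

    // Start from the closest peer already in our routing table, so part of
    // the way is "already there" before the first query.
    let mut best = 0x70aa_u64;
    let mut connections = 0;

    loop {
        connections += 1; // one connection per queried peer
        let closer = query(best, &network)
            .into_iter()
            .min_by_key(|&p| distance(p, target))
            .unwrap();
        if distance(closer, target) < distance(best, target) {
            best = closer; // keep walking toward the target
        } else {
            break; // no closer peer found: `best` should hold the record
        }
    }

    println!("closest peer: {best:#x}, connections used: {connections}");
}
```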

So while one instance of the client APIs could in theory handle all the requests from different apps, it is still creating connections to obtain those records.

I do think this would result in significantly fewer connections on the one device with (for example) 10 apps doing reads/writes.

Also consider that it would be a significant saving in connections. The code would need to be modified to do it though (I think; I have not explored it).

1 Like

Thank you for the explanation!

So this explains the problem of many connections when doing a download/upload. Those connections are short-lived, and once the chunk is downloaded they are closed, right? So this is a problem only if all the apps are active at once.

But how does the client library behave when there is no activity? Does it keep many connections open? It has to maintain knowledge about the active state of the network, so the question is how many open connections such a waiting client has.

2 Likes

Not sure of the answer to how long connections are kept open, but I gather this was a focus for improvements recently.

1 Like

I began a topic on this in the development category a couple of weeks ago and had a positive response from other developers.

So I created a shared document on Codeberg to help us evolve and manage this as we go forward.

Since then, both @Traktion and I have begun making changes to help converge on a single REST API (though it is some way off ATM). My changes are on main but not yet released. I hope to do that next week but have had no time for code recently.

7 Likes