Web client devs - best way to do low level Autonomi REST APIs

In a related situation, AntTP returns an ETag header containing the XOR name.

This has 2 benefits. 1, it tells the client what the XOR name is, should it be needed. 2, it instructs the browser to cache the data against the ETag. The next time the browser requests the file, it provides the ETag and just asks if it has changed. If it hasn’t (immutable data never will, ofc), the server simply confirms no changes have been made, and the browser uses what it cached.

This is a very efficient secondary caching mechanism, on top of expiry dates and so forth (which can be more fickle, in my experience). ETags are also good for mutable data, where Expires headers aren’t useful.

So, headers that mean something to the browser are especially powerful. Maybe that much is obvious, but thought it was worth underlining with an example.

3 Likes

Personally, I’d stick with the verbs until there is a strong reason not to use them. I’d definitely be cautious of folks misusing them to achieve alternative goals.

1 Like

For folks that despair of REST and want a more programmatic format, there is always gRPC too. Probably one for another thread though! :sweat_smile:

2 Likes

VERY important in Australia, where the government mandated that ISPs store the full URL of every request for 2 years, just in case the police want to look at your browsing.

And email subject lines, and the time and date of every request, and, and, and…

3 Likes

This is true but the situation isn’t clear cut.

Firstly, for browser links there’s nothing you can do, so only JavaScript in a client, or a non-browser REST app, can avoid using URL params for metadata such as a data address.

Secondly, even for browsers, dweb is intended to run in the client, so it won’t expose anything to government/ISP etc. that isn’t encrypted (i.e. they only see network access).

And if you are running a remote gateway, I would be reluctant to regard that as secure without extra measures anyway.

So it is not clear to me this is really an issue for dweb, but if say AntTP is intended to be a remote gateway, we do need to consider this. What do you think @Traktion ?

1 Like

Yes, it is a concern for AntTP hosting for sure.

I was originally thinking about having white/black lists of XORs that a host could set. It may be that a host will only want to proxy their own traffic, for example.

For apps like IMIM blogger, where folks are free to publish, it gets a bit harder, but the white/black lists could still apply if legally necessary.

Ofc, the brave hosts will claim impartiality and maybe that will work better in some countries than others. I suppose VPNs are similar though.

So definitely an important area to consider. I suspect bandwidth/hosting fees will naturally curtail how open folks want their proxies to be too though. In fact, that may be a bigger driver for XOR filtering in public environments.

Edit: In a related context, I did want to allow folks to easily fund the hosting of certain XORs (archives or specific files) in the public environments. So, clients or content creators can fund public hosting easily. I’ve not returned to that idea yet, but it seemed a good way to pass the hosting cost to clients.

1 Like

Do you think that will be enough, or should we look for ways to mitigate this in the REST APIs?

I just realised I was replying to the idea of preventing access through legal action from the other thread! :sweat_smile:

Maybe it still applies here though, as recording IP addresses or activity in the logs certainly applies.

For logging, AntTP is pretty unstructured at the moment. It logs useful information, but there is too much debug output and probably not enough access logging.

There certainly needs to be some attention paid to what is/isn’t logged, depending on the legal/privacy requirements for the host. Right now, it doesn’t really consider either deeply.

Edit: For REST API specifically, I’m thinking it is probably more per host than per user, so should probably be a setting on the host.

However, perhaps the client could provide override headers, e.g. only serve if privacy is protected. Then the client can choose to use the service or not, based on their security preferences.
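A sketch of that negotiation. The header name `X-Require-No-Logging` is purely illustrative, not an existing AntTP or HTTP header; the idea is just that the server refuses rather than silently ignoring the client's preference:

```python
def should_serve(request_headers: dict, host_logs_access: bool) -> bool:
    """Honour a client's privacy demand, or refuse the request.

    If the client sets the (hypothetical) X-Require-No-Logging header
    and the host is configured to log access, decline to serve.
    """
    requires_privacy = request_headers.get("X-Require-No-Logging") == "true"
    return not (requires_privacy and host_logs_access)


ok = should_serve({"X-Require-No-Logging": "true"}, host_logs_access=False)
refused = should_serve({"X-Require-No-Logging": "true"}, host_logs_access=True)
```

A refusal (e.g. a 4xx response) lets the client fall back to another gateway that matches its security preferences.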

1 Like

This might be a bit off-topic, but since you all are here:

Could you somehow tune your apps so that they would not jam potato routers with too fast / too parallel downloads? Maybe same applies to uploads too?

We can certainly code to upload fewer chunks in parallel. For AntTP, I just made this configurable for downloads (with a custom routine), although it just uses the default functions for uploads.
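The usual pattern for capping in-flight chunk transfers is a semaphore. A sketch, with `fetch_chunk` as a stand-in for the real client call (this is not AntTP's custom routine, just the general technique):

```python
import asyncio


async def fetch_chunk(addr: str) -> bytes:
    # Stand-in for the real network fetch.
    await asyncio.sleep(0.01)
    return addr.encode()


async def fetch_all(addrs: list[str], max_parallel: int) -> list[bytes]:
    """Fetch chunks with at most `max_parallel` in flight at once."""
    sem = asyncio.Semaphore(max_parallel)

    async def bounded(addr: str) -> bytes:
        async with sem:  # blocks here once the limit is reached
            return await fetch_chunk(addr)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(a) for a in addrs))


chunks = asyncio.run(fetch_all(["a", "b", "c", "d"], max_parallel=2))
```

Making `max_parallel` a config setting, as described above for downloads, lets users with weaker routers dial it down.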

Whether that will save your router… well, we should test that! :sweat_smile:

2 Likes

Uploads would be the more serious case though, since packets arrive at the router at 1Gbps and leave at a much slower rate: a raw average of 48Mbps, and more than half of connections worldwide are slower than that. (The raw average is skewed high by >1Gbps connections.)
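A quick back-of-envelope using the figures above shows why uploads hurt: with a gigabit LAN feeding a 48Mbps uplink, the router buffers at nearly the full ingress rate. The 64MB buffer size is an assumption for illustration:

```python
ingress_mbps = 1000  # gigabit LAN side
egress_mbps = 48     # raw average uplink from the post
backlog_mbps = ingress_mbps - egress_mbps  # net buffer growth: 952 Mb/s

buffer_mb = 64  # assumed router buffer size (illustrative)
seconds_to_fill = (buffer_mb * 8) / backlog_mbps  # MB -> Mb, then divide
print(round(seconds_to_fill, 2))  # ~0.54s before drops/bufferbloat set in
```

Half a second of sustained upload is enough to saturate such a buffer, which is why capping parallel uploads matters even more than capping downloads.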

2 Likes