I think it would be good to begin convergence on REST APIs as we seem to be duplicating more now. Maybe at some point one can be retired, but for now I think it’s good to have separate strands.
@Traktion I’m assuming your API is a more standard mapping to Autonomi. Mine is pretty much one-to-one too, but has extras built in to simplify app development (e.g. named objects, per-type key derivation, extra control via custom headers, etc.).
What we could do is put a common REST API together on the same path that is as close to Autonomi as possible, and move anything that has extras onto a separate path.
So for example: standard on /ant-0, my enhanced equivalents on /dweb-0, yours on /anttp-0, and go from there. Just using those as examples, happy to adjust if you’re in.
In time we might decide to migrate some of the extras into /ant-0 in response to developers.
Developers could build on a common REST API that works with either dweb or AntTP, or for just one of them if they prefer.
All sounds good and I’m sure I’ll learn from your APIs as they are as well. I also have a mix including multi-part, binary, JSON etc. Another area of difference may be how we return struct results - for example, when doing put/get I’m using Rust structs with utoipa Schemas. I see you added SwaggerUI, so are you also using utoipa for docs/SwaggerUI? If so, us both using the schemas of common Rust structs would be a good way to go.
I don’t have strong feelings about the paths except for /v1 etc, which I explain below. But first, another possibility that occurs to me is for us to standardise on the base APIs (/ant, /api or whatever) and to allow a header which enables extra features. That might simplify some areas of the handler code, but tbh it would just make the docs harder to explain, with optional features in many APIs all enabled by a custom header. But I mention it anyway.
Back to why I avoided /v1 etc for versioning: I wanted an optional ‘version’ in the path of APIs which accept a History (which is quite a few), so I went from /ant/v0 to /ant-0 and can use /archive-version/[v<VERSION-NUMBER>/]<ADDRESS-OR-NAME>, where ADDRESS-OR-NAME resolves to a History (similar to a Register).
I use that in several of my APIs (e.g. for links to another website with /dweb-open/[v<VERSION-NUMBER>/]<ADDRESS-OR-NAME><REMOTE-PATH>). The version is optional: leave it out and you get the most recent version in the History. I’d planned to use the same convention for Registers, so I wanted to avoid having another /vN in the path. Normally I’m a slave to such conventions.
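The optional version segment could be handled with something like the sketch below. This is illustrative only - the function name and logic are mine, not dweb’s actual implementation - but it shows how a leading `v<N>/` is either consumed or the whole path is treated as an address/name:

```rust
/// Split an optional leading "v<N>/" segment from a path, mirroring the
/// /[v<VERSION-NUMBER>/]<ADDRESS-OR-NAME> convention: returns
/// (Some(version), rest) when a version is present, (None, path) otherwise.
fn split_version(path: &str) -> (Option<u32>, &str) {
    if let Some(rest) = path.strip_prefix('v') {
        if let Some((num, tail)) = rest.split_once('/') {
            if let Ok(version) = num.parse::<u32>() {
                return (Some(version), tail);
            }
        }
    }
    (None, path)
}

fn main() {
    // Explicit version: fetch version 3 from the History.
    assert_eq!(
        split_version("v3/myname/index.html"),
        (Some(3), "myname/index.html")
    );
    // No version: resolve to the most recent version in the History.
    assert_eq!(
        split_version("myname/index.html"),
        (None, "myname/index.html")
    );
    println!("ok");
}
```

Because the parse falls through when the segment isn’t a number, an address or name that happens to start with “v” still resolves correctly.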
I suggest we pull this into its own topic, so I’ll copy my first post onto the topic below, and if you can copy your reply there, I’ll reply to that with this, if that makes sense.
@Traktion I understand and would normally agree with your preference for ‘/api’ and ‘/v1’ etc but propose the following for the reasons noted above. Another is that I chose routes that would be unlikely to appear in the path of a website resource which would make that resource inaccessible (masked by the API route).
So my proposal is:
/ant-0 for a thin, no-extras implementation of the Autonomi APIs, probably as you have them rather than what I have done in that area, which I would move to:
/dweb-0 to include my current Autonomi and dweb APIs which have extra features to simplify their use in apps
/anttp-0 to include any extras/extensions that you have which don’t fit into /ant-0
I’m also suggesting version 0 so we can move to version 1 when we feel things are fairly complete and stable on a particular route.
I imagine once we have more experience of our ‘extras’ and feedback from developers, we could slowly migrate them from /dweb and /anttp to something like /ant-extras-0, so we’d aim to end up with just two routes in the long run:
/ant-N - simple REST versions of Autonomi APIs
/ant-extras-N - anything that doesn’t fit in /ant-N
That seems simplest to start with but is open to discussion obviously. This will also affect everyone developing on (or considering) our current APIs, so feel free to give feedback: @riddim, @Josh, @safemedia, @loziniak, @Champii, @Nigel, @rreive (apologies to those I’ve missed!)
For completeness…
Other current dweb routes
I also have a handful of additional routes that are named to make it easy for people to remember for typing in the address bar or in the case of the first two, including in web pages as external links:
/dweb-open and /dweb-open-as
/dweb-info - info about the current website (addresses, version etc)
/dweb-version
/dweb-next, /dweb-previous - not yet implemented but additional manual ways to change website version
/ant-proxy-id - proxy identifier and version of the dweb API (at the moment this returns /dweb-0)
Archive metadata
dweb can serve any website in an archive, or any version of a website that is stored in a History (created and updated with dweb publish-new and dweb publish-update).
When using dweb publish, I add some extra metadata in two ways:
I add a file with the path /.dweb-history/<HISTORY-ADDRESS>:<VERSION>. For example: /.dweb/history-address/b3f448c9b4820e7ce4ec620220d612fec039155b6754727de7cc1ecf46d3546b6f8bfa18671b266914af3d3acae78e82:2. This means that if someone loads a website from an Archive, I can access the History and provide the versioned features even if the user doesn’t know the History address. I don’t use it yet, but it’s a small enhancement to do that at some point. The important thing was to get that data into Archives early.
The dweb CLI and implementation can include a file of JSON settings which will be stored at /.dweb/dweb-settings.json. I haven’t defined the format for this yet and nothing uses it, but it can be stored! I envisage this can be used to customise some server behaviour, and maybe to support website redirects.
I’m not proposing any of this is picked up by AntTP but thought it handy to have alongside the other things we are looking at standardising. Maybe you have some things like this too so this is a place to pull everything REST together.
Don’t want to hijack the thread, so I can split this off to its own if you prefer, but I think this is somewhat related. I’ve made a first-pass server for colonylib in Rust that implements a REST API for all of the colonylib API. So you can add pods, add/remove RDF entries in pods, upload them to the network, and perform SPARQL queries against your RDF database. I didn’t map the Autonomi API since you guys have already done this part. My intention is that a user could use dweb or anttp for all their low-level Autonomi operations and use the colony-daemon endpoints as an abstraction layer for managing the pod metadata, pod keys, and performing queries.
I’d like to follow the Autonomi REST API conventions for this so it plugs into this environment as seamlessly as possible. Any suggestions here? Or maybe because these are separate, it doesn’t matter?
There is an example.sh in the scripts directory that exercises most of the endpoints. I plan to go the utoipa/swagger route to document the API, but haven’t gotten that far yet. The code is messy at the moment, but it is all there.
I apologise for not tagging you @zettawatt - just my bad memory.
Your input is very welcome and I think it is a good idea to pull info about all our REST APIs together here and see where it goes. I’d also forgotten you were building one.
Ultimately there may be one server offering all the options, or it may be multiple ones. I don’t have any opinions on that other than trying to make this work as well as we can, first for users and then for developers.
So my next UX feature is to try and hide the dweb serve step. That might lead to a way to have multiple servers without the user needing to start them.
We could create a single CLI to check they are installed and warn/install if the one needed is missing, check if what’s needed is running, and start it if not.
So a user could install and run one command/app that seamlessly provides access to all Autonomi websites and apps, whichever REST server and API is needed.
That’s some way off, but the kind of UX I am thinking of.
I think there is an angle where we can create libraries for each app and include them as extensions/plugins.
Tbh, that would probably be trivially possible, by just starting each service on a separate port from within a wrapper executable. It would be nice to share a single port, but doing it this way would place minimal restrictions on architecture.
Maybe something more comprehensive could be defined, but sometimes complexity is the enemy of execution.
I like the plugin/extensions idea in some kind of unified wrapper command. Maybe we have some kind of reverse proxy in place to route traffic to the proper endpoints in each app extension. Then you get the single port that the user sees without having to do anything specific in the extensions themselves, other than making sure they’re on unique internal ports. I do this using traefik to handle all of my docker containers here at the house, but I doubt we would need something that complicated here. Though now we’re getting into high-complexity territory.
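At its simplest, the reverse-proxy piece of a wrapper could just be a route-prefix-to-internal-port table. The sketch below is purely illustrative - the prefixes and port numbers are example values, not anything agreed in this thread:

```rust
/// Map an incoming request path to the internal port of the extension that
/// should handle it, based on its route prefix. Returns None for unknown paths.
fn route_port(path: &str) -> Option<u16> {
    // Hypothetical prefix -> internal port table; real values are up for
    // discussion (the /colony-0 prefix in particular is just an example).
    const ROUTES: &[(&str, u16)] = &[
        ("/ant-0/", 8080),    // thin Autonomi REST
        ("/dweb-0/", 8081),   // dweb extras
        ("/anttp-0/", 8082),  // AntTP extras
        ("/colony-0/", 8083), // colony-daemon pods/SPARQL
    ];
    ROUTES
        .iter()
        .find(|(prefix, _)| path.starts_with(prefix))
        .map(|&(_, port)| port)
}

fn main() {
    assert_eq!(route_port("/dweb-0/archive-version/v2/somename"), Some(8081));
    assert_eq!(route_port("/unknown/thing"), None);
    println!("ok");
}
```

A real wrapper would forward the request to `127.0.0.1:<port>` after the lookup, which is essentially what traefik does with its router rules, just without the extra machinery.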
I think I can get behind this too. It’s a pretty similar syntax to /api/vX and I can change AntTP to suit. I don’t think anyone (other than me with IMIM) is using the REST endpoints yet (at least not publicly), but for now I’ll map both and mark the old path as deprecated, just in case.
Yes, I’ve used separate structs to the Autonomi ones to keep the coupling loose (between REST and Autonomi libs). However, they tend to map in a consistent way.
address (String: where the data is stored)
name (String: used to derive a key from)
content (String: plain text or base64 encoded content)
counter (Integer: update/version number)
cost (String: price in nanos)
There are some additional fields where applicable, but these are core to a number of types (register, scratchpad, pointer, etc).
I think they mostly map to Autonomi client naming, but I’ve tried to make the fields more similar between types, where appropriate. Keeping the names short/concise helps too, imo.
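As a sketch, the core fields above might map to a shared Rust struct like the one below. The struct name and exact types are illustrative, not AntTP’s actual definitions; in practice it would also derive serde’s Serialize/Deserialize and utoipa’s ToSchema so both servers document one common schema:

```rust
/// Illustrative common response shape for register/scratchpad/pointer-style
/// types. A real version would add #[derive(Serialize, Deserialize, ToSchema)]
/// from serde and utoipa so the schema appears in SwaggerUI.
#[derive(Debug, Clone, PartialEq)]
struct DataResponse {
    address: String, // where the data is stored
    name: String,    // used to derive a key from
    content: String, // plain text or base64 encoded content
    counter: u64,    // update/version number
    cost: String,    // price in nanos
}

fn main() {
    // Example values only, to show the shape of a response.
    let resp = DataResponse {
        address: "b3f4...46d3".to_string(),
        name: "my-scratchpad".to_string(),
        content: "aGVsbG8=".to_string(), // base64 for "hello"
        counter: 2,
        cost: "1024".to_string(),
    };
    assert_eq!(resp.counter, 2);
    println!("{:?}", resp);
}
```

Keeping `cost` as a string of nanos sidesteps JSON number-precision issues for large values, which seems consistent with the choice above.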
Other interesting features of note:
POST/PUT operations will validate input, but then assume success and return a populated body. This is to give the illusion of low latency to clients. The operations happen async in the backend. I want to move these to a task queue (like public archive POSTs) and it may make sense to return an upload_id to allow clients to track this.
The data returned by the above is also cached (for a period), so requests that immediately ask for the data will still be served from the cache. After the caching period, it will be requested from the network, at which point failed operations become obvious.
Once the upload queue is enabled, it will be easy to track success where desired, perhaps as an async task (e.g. in a task status panel in an app, etc.).
Archive uploads return a different payload to the above, but they should probably be aligned, where feasible, taking advantage of an upload_id in the response.
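If an upload_id were added, an aligned async response might look something like this. All names here are a suggestion only, not an agreed format:

```rust
/// Suggested common shape for async POST/PUT responses: the server validates,
/// queues the work, and returns immediately with an id the client can poll.
#[derive(Debug, PartialEq)]
enum UploadStatus {
    Queued,
    InProgress,
    Complete,
    Failed,
}

#[derive(Debug)]
struct UploadResponse {
    upload_id: String,    // handle for tracking the queued upload
    status: UploadStatus, // current state of the async operation
    address: String,      // where the data will live once the upload completes
    cost: String,         // price in nanos, estimated at queue time
}

fn main() {
    // The "optimistic" response returned straight after validation.
    let resp = UploadResponse {
        upload_id: "b1946ac9".to_string(),
        status: UploadStatus::Queued,
        address: "b3f4...46d3".to_string(),
        cost: "2048".to_string(),
    };
    assert_eq!(resp.status, UploadStatus::Queued);
    println!("{:?}", resp);
}
```

A client wanting to confirm success would then poll a status endpoint with the upload_id until the status moves to Complete or Failed, which fits the task-queue direction described above.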
There are no standards for passing private addresses/payment receipts yet, although I think these should be passed in headers in the future.
There are no standards for approving uploads in general, although an application key is used as the basis for derived keys in AntTP (it can’t be passed in a header yet either). I thought it would be good to have a separate app to monitor the upload queue, allowing manual/semi-manual approval of uploads, to keep track of what apps are trying to send to the network. It may make sense to combine this with payments and/or have a similar flow (e.g. sign payments which are queued for upload in a companion app, plugin, etc - like a second factor for login). Anyway, open to ideas here for sure!
Ultimately, I believe we want the client experience to assume success to reduce latency within applications. As an upload can take over a minute in some cases, masking this feels desirable. As the network becomes more stable, success should largely be assured, other than outliers, imo.