You will remember that the early mutable data types all had a limited number of entries. We had Sequence and AppendOnly, to name just two, and they all had a fixed maximum size.
The answer to “what happens when they fill up?” was, more than once, that it could be handled by an API on top, or failing that by the app. So from the outside we had to accept that.
Anyway, if nobody inside had thought it through, by all means let’s try to solve it now. From the outside, though, it is hard to go deep because most of us don’t understand the limits and capabilities we have to work with.
I’ve a better sense of it now than back then, but I’m still not coming up with ideas.
The creator of the first tx could of course know the path if a sequence of txs were created from derived keys… So possibly one could publish the list of possibly existing txs… which could then be queried as a batch (and 99% would not be found initially).
That it’s a valid chain can be verified once you have all the txs… (or an updated scratchpad… which of course has no history but would just be a support to crawl the txs faster).
This is where I am at as well, Mark. I feel we now have a stable network, the consensus wars are over, and the team is actually giving us the baseline of a decentralised network. Now we have to get to the API and find out what limits we have to build with, and what the easy things are.
This is a time I am actually excited about. I wish we had been doing it years ago, but it’s now, and I don’t feel it’s too disheartening, as we can build pretty much anything with what we have. I do feel, though, that we can do better than using scratchpads for static addresses of changing data.
I also feel, though, that Transactions (read: these are just unlimited Registers) can give us much more than we realise.
We will though, we will, and even better, we can try them out now with focus. Rather than chatting about a future thing, we can make this current and try it out.
Yes, that’s true, we could then have a logarithmic search for the latest version. Yea, I like that. It’s still an encumbrance, but it can be done in parallel, and QUIC is great at that. So we could fire off 1,000 or so queries at once. Then we can almost take the time down to a single message round trip!
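A sketch of what that logarithmic search could look like, assuming an `exists(version)` lookup against the network (the function name is illustrative, not an Autonomi API):

```python
def find_latest(exists) -> int:
    """Galloping search for the highest version where exists(v) is True.

    exists: callable(version: int) -> bool, e.g. a network lookup.
    Returns 0 if even version 1 doesn't exist. O(log n) lookups,
    and each phase's probes could be fired in parallel over QUIC.
    """
    if not exists(1):
        return 0
    hi = 1
    while exists(hi * 2):        # double until we overshoot the head
        hi *= 2
    lo, hi = hi, hi * 2          # the head lies in (lo, hi)
    while lo + 1 < hi:           # binary search the remaining gap
        mid = (lo + hi) // 2
        if exists(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For example, `find_latest(lambda v: v <= 37)` returns 37 after 12 lookups instead of 37 sequential ones.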
I don’t think my scheme results in data loss. You can, if you try hard, embed links that will break, but they are recoverable from the history, so nothing would ever be lost.
I think we need concrete alternative ideas so that we can examine the pros and cons.
Awe provides one such concrete approach, which I expect to document at some point, along with a useful set of test pages which illustrate it at work: referencing different versions of resources from pages on different websites, as well as linking between pages on different sites with and without versioning.
So if there are other demonstrable approaches, we can compare.
This sounds perfect as well, Mark, but are you relying on a register not filling up?
I am sorry, but I am not familiar with awe just yet; I am only about 60% of the way through our codebase, so I am still catching up.
That’s a PITA for you though. Can you point me to the GitHub file to look at, or something, and I will try to get a look there, as it’s now apt that we find solutions for this one.
I’m not reliant on Registers. I don’t see a problem reimplementing the TroveHistory using Transactions, but as the chain gets long, finding the head will take more time. That’s not a problem for most regular websites, and in a way it gives small independents a bit of an advantage over sites which are updated many times more often.
On awe, looking at the code isn’t a good way to understand the linking use cases it enables. I have notes, though, so I could hack them into a blog post fairly quickly. It’s not currently on my to-do list, but I’ll make a note.
The place for that post and these discussions would be the Dev Forum IMO, but that has been shown to be an unsafe place for anything we want to keep, and it is no longer a place worth building on with such content.
It’s unfortunate, but I’ll be owning such content myself now, even though there are disadvantages to that. At least those posts will end up in Autonomi more quickly this way.
@riddim brings up an interesting point, and perhaps a solution, here. As we use BLS keys, you can read thousands of possible versions per second, so that may be a valid approach.
When I talk about links, @happybeing, here’s what I mean. (I see a need for both the current version and a version frozen at a point in time.) Here’s the point-in-time case:
You have a paper or website you work on
I want to link to that paper
I want to link to the current version
I don’t want to link to the original version and step forward
So I use the current-version link (transaction) to point to that, and if folk want to look at the history, they can; if you have subsequently updated it, they can step forward.
So say we switch it round. It’s my site you are linking to; I say something you entirely disagree with, so you link directly to that version and debate it on your site, showing why you don’t agree, etc. In the meantime, I agreed and updated my paper/site.
So linking to latest in that event might be an issue.
I think we can do similar with registers if we assume they don’t fill up, but you see what I mean. Here we link directly to the version of my paper you want to discuss, and perhaps even show that later on I agree, and so on.
I feel we are on the same page actually, but using differing terms to discuss it.
But I had to take wee Uisge out for a wander there and rushed back, as I think @riddim is certainly onto something here with precomputing all possible successors (or ancestors) of any type using the derived-key BLS trick. That allows us to read in many, many versions at once, and it might be a really good approach, at least in the short and medium term?
From your description there I agree, and both awe and the new approach I’m taking for dweb provide for both in standard URLs.
One reason I said we differ is that in the past you have described using immutable addresses in URLs to link sites (either to site metadata or to resources). That is problematic both in that it requires custom tooling, and in that it is incapable of circular linking (e.g. two pages each linking to the other using an immutable address: when you insert the link to page A into page B, the address of page B changes, breaking any link to it previously present in A; the same effect occurs when pointing to immutable metadata). Hence links must include a fixed address and an optional version, which requires them to be based on a mutable history.
awe allows for versioned links, or links without a version, which default to the most recent website metadata. It uses the RegisterAddress for the site, a path for the resource (e.g. ‘/ants.png’), and an optional version that defaults to the most recent. The same approach can easily be implemented using Transactions.
With awe I was able to define a custom scheme and add the version as a URL parameter. That requires a modified browser, though. Another issue is the way different OSs handle non-standard URL schemes, which makes this unworkable until the scheme is adopted as a standard.
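For illustration, resolving such a link might look like this (the `awe://` shape and `v` parameter here are my guesses for the sketch, not awe’s documented format):

```python
from urllib.parse import urlparse, parse_qs

def parse_versioned(url: str):
    """Split an awe-style link into (site address, resource path, version).

    A version of None means "use the most recent site metadata".
    """
    u = urlparse(url)
    q = parse_qs(u.query)
    version = int(q["v"][0]) if "v" in q else None
    return u.netloc, u.path or "/", version

# Versioned link to a specific resource on a site:
site, path, version = parse_versioned("awe://8f3c2a/ants.png?v=3")
# -> ("8f3c2a", "/ants.png", 3); omitting ?v=3 yields version None.
```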
dweb is a follow-on project, both a CLI app and an API, that must support versioned URLs in a way that will work in a standard browser (i.e. without a custom URL scheme). I’ve defined a modified versioned-URL convention for this but have yet to implement it in dweb-cli. First I will implement the awe CLI features and then provide access via an HTTP server (on localhost).
Feels like I don’t need to prioritise that blog post now?
BTW I don’t yet understand precisely how the BLS key thing works but fingers crossed for that.
This part’s not so hard. BLS has a really neat feature. Here it is, simplified:
You have a key ABC
Transaction 1 is at location ABC and has content XX
We can take ABC, add on XX, and then do `bls_get_key(ABC + XX)` to get the new key.
So basically we can do this for the public key. The owner can do the same for his secret key (`bls_get_new_secret(ABCSecretKey + XX)`) and can sign the new location, or whatever, with that.
So this means we can figure out each new Transaction name out to near infinity, and the owner can also create the appropriate secret key.
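A minimal sketch of that walk. `bls_get_key` above is pseudocode; here SHA-256 stands in for the BLS child-key derivation, which would additionally keep the derived secret and public keys paired:

```python
import hashlib

def derive_address(parent: bytes, content: bytes) -> bytes:
    # Stand-in for bls_get_key(parent + content): a deterministic
    # child address anyone can compute from public information.
    return hashlib.sha256(parent + content).digest()

root = b"ABC-public-key"      # key/location of Transaction 1
tx1_content = b"XX"           # its content
tx2_addr = derive_address(root, tx1_content)
# Anyone who knows ABC and XX can compute Transaction 2's location;
# with real BLS, the owner derives the matching secret key the same way.
```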
#############
This buys us a lot that we have not even introduced yet. So say:
You visit website XX
You have your private key pair YY
When XX asks for a registration, you take `bls_get(YY.public + XX)` and give them that.
You can get the secret key as above, to sign etc.
What that means is a new identity for every object/site/whatever you visit, and all you need is the object name or site name and you have your key pair for that thing.
object == docs/wallet or whatever you want. So with a single key pair you can have millions of IDs, and so on. You don’t need to remember any of them, just your root key pair.
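The per-site identity trick, sketched the same way (SHA-256 as a stand-in for `bls_get`; with real BLS derivation the owner can also derive the matching secret key for each identity):

```python
import hashlib

def site_identity(root_public: bytes, site_name: bytes) -> bytes:
    # Stand-in for bls_get(root.public + site): one fresh identity
    # per object/site, all derived from a single root key pair.
    return hashlib.sha256(root_public + site_name).digest()

root = b"YY-root-public-key"
id_forum = site_identity(root, b"forum.example")
id_docs = site_identity(root, b"docs")
# Different name -> different identity; nothing to remember but the root.
```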
I thought the BLS post was a suggestion for how to overcome the problem of finding the head more quickly (e.g. for versioned metadata), but maybe I didn’t read it closely enough. In fact it is something orthogonal, yes?
Yes, so to find the next 100 Transactions we do this:
Take the owner’s key
Take the transaction (contents etc.)
bls_get_key(owner_key + hash)
And we can do that repeatedly to know all the owner’s keys (or transaction names).
Lots of tricks with BLS, as above, but to make a chain unguessable you can add in a secret index, something like the hash of your secret key, i.e. bls_get_key(owner + content + hash), and that won’t be guessable. So for your calculable set of passwords, this trick means no two entities can tie your IDs together; you seem like a different person on each website, for example.
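A sketch of that batch lookup (SHA-256 again stands in for `bls_get_key`; note this chains on the previous address plus the secret index rather than on tx content, so the future addresses are computable up front):

```python
import hashlib

def next_addresses(owner_key: bytes, start_addr: bytes, n: int,
                   secret: bytes = b"") -> list[bytes]:
    """Precompute the next n candidate transaction addresses.

    `secret` is the optional secret index (e.g. a hash of your secret
    key) that stops outsiders from walking the chain themselves.
    """
    addrs, cur = [], start_addr
    for _ in range(n):
        cur = hashlib.sha256(owner_key + cur + secret).digest()
        addrs.append(cur)
    return addrs

# Fire all 100 lookups in parallel; the furthest address that exists
# on the network is the current head of the chain.
batch = next_addresses(b"owner-pubkey", b"first-tx-addr", 100)
```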
When I get this API thing fixed, I will mock up a few wee quick demo apps to show this in operation. That will be helpful for everyone, and me, to understand the API and identify any limitations or weaknesses etc. But mostly it’s to find the new designs that get us more than we had in the old web.
On the immutable vs latest link, sn_httpd accepts either atm.
If an archive address is assigned a name, it will resolve to it. However, an archive address can be accessed directly too, much like domain names and IP addresses on the clearnet (at least originally, until the IP supply became tighter).
Tbh, I can see why both may want to be linked. Dynamic links may result in 404s, but getting the latest is often preferable. However, permanent links to immutable addresses are handy too, for specific data points.
For example, having an immutable link to a news site is pretty pointless in most cases - you want the recent headlines, usually.
So, I think being able to link to both mutable and immutable data is necessary. As long as the xor address can be exposed, folks can then choose which to link (e.g. like I use etags with the xor in sn_httpd now).
All this is calculated client side. So when you get your first transaction you can… ah, hang on. I am assuming content does not change, but these transactions change the content each time; that’s the purpose…
Ah Sunday afternoon dreaming
Sorry @happybeing I have led you up a garden path there.
Well, I thought the public key of the tx was simply the address? So we don’t need to hash in the content, but can freely choose what to hash and write that public key to the output…?
So e.g. Hash(first tx content, previous tx address)
Here’s a thought @happybeing Tell me how you feel about this.
So Transactions can do all that Registers can (but more decentralised), yet we still lack the ability to statically point to the head of the tree. When at the head, we can step back through time; however, getting to HEAD is the most common thing for the static-address use case (not for all transactions or links, though).
So in Scratchpad we have a counter. That’s just a fancy name for a monotonic counter (1, 2, 3, 4, 5, …). Nodes use this to store only the latest copy; it’s not CRDT or concurrency protected.
A Scratchpad can have any data you want in there. I was thinking of a simple counter type, but we already have that with Scratchpad.
So you do this.
Store a Scratchpad (version 1) as your website root.
All it contains is a single entry (a XorName), plus the default counter.
The entry points to the latest version of your site.
On update of your site there will be a new transaction;
at this point you update the scratchpad.
Apps/browsers should check only that the scratchpad is a valid pointer and that the thing it points to is a valid transaction. Users can step back in time to see all versions.
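The scratchpad-as-pointer scheme above, sketched with illustrative types (not the Autonomi API):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Scratchpad:
    counter: int      # monotonic; nodes keep only the highest version
    head: bytes       # xor address of the latest site transaction

def publish_update(pad: Scratchpad, new_tx_addr: bytes) -> Scratchpad:
    # Each site update creates a new transaction, then bumps the pad
    # so its single entry always points at the latest version.
    return Scratchpad(counter=pad.counter + 1, head=new_tx_addr)

pad = Scratchpad(counter=1, head=hashlib.sha256(b"site-v1").digest())
pad = publish_update(pad, hashlib.sha256(b"site-v2").digest())
# A browser reads the pad, checks head is a valid transaction, and can
# walk the transaction chain backwards to see every earlier version.
```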
I think that gives you what you want?
The door it leaves open is being able to point to a whole new transaction/history, denying your own website’s history. But if others have seen it and held a copy of your site, then it’s irrefutable, as the transactions are owner-signed and that can be proven.