Could it be possible to create a Google Firebase-like realtime database based on Autonomi?
A Firebase database can be used to create serverless applications. If implemented on top of Autonomi, it would be independent of any 3rd party. And it could be cheaper.
My question is: would it be possible to create a notification-based database on top of Autonomi? So clients could subscribe to changes on some DB item (a scratchpad, I guess) and get a notification when the item changes?
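To make it concrete, here's roughly the client-side shape I have in mind. None of these types or functions exist in the Autonomi client today; it's just a sketch, and since the network presumably can't push anything to clients, the "subscription" would have to be emulated by polling the scratchpad and firing a callback when its counter moves:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Hypothetical view of a scratchpad: a version counter plus its payload.
/// (The real scratchpad type will differ; this only illustrates the idea.)
struct ScratchpadState {
    counter: u64,
    data: Vec<u8>,
}

/// Stub standing in for a real network fetch of the scratchpad at `addr`.
fn fetch_scratchpad(_addr: &str) -> ScratchpadState {
    // A real client would query the nodes currently holding the record here.
    ScratchpadState { counter: 0, data: Vec::new() }
}

/// Emulate "subscribe to changes" purely client-side: poll the scratchpad
/// and invoke `on_change` whenever its counter has moved forward.
fn subscribe_by_polling(
    addr: &str,
    poll_every: Duration,
    mut on_change: impl FnMut(&ScratchpadState),
) {
    let mut last_seen: Option<u64> = None;
    loop {
        let state = fetch_scratchpad(addr);
        if last_seen.map_or(true, |seen| state.counter > seen) {
            last_seen = Some(state.counter);
            on_change(&state);
        }
        sleep(poll_every);
    }
}

fn main() {
    subscribe_by_polling("scratchpad-address", Duration::from_secs(1), |s| {
        println!("changed: counter={}, {} bytes", s.counter, s.data.len());
    });
}
```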
But if you want a notification of changes in a particular scratchpad, that one is stored only on a limited number of nodes. And those nodes (well, at least one of them) would need to know all the clients which are listening.
Actually, if a scratchpad is changed, does it also move to different nodes?
Well, even creating a shared database on top of Autonomi might be quite complex. And adding database triggers to that makes it even more complex. Maybe impossible if there is no support at the lowest level.
Frequently used data was meant to be replicated in more copies, but I'm not sure this mechanism is present. Also, even if it were, it adds more lag. So maybe it's really not a field where Autonomi would currently excel.
The issue with mutable data is that caching is difficult since the data can change. So one of three things has to happen: caching cannot be done at all, it can only be done for a very short time (e.g. a second), or there has to be some form of asking the nodes holding the mutable record whether the data has changed.
Nodes that cache the data could also poll the source nodes, so not all clients would poll the sources; some could poll the caching nodes, but with extra lag. In that model, with 5 original source nodes, 100 cache nodes and 2000 clients, we would get 20 queries/s per node. That's roughly 20x fewer queries/s per node and only about 2x the lag for the client, so maybe it would be acceptable…? Just theorizing. And that would probably mean introducing a "real-time" or "polled" datatype to get this behavior. On the other hand, would it be necessary and worth it in the first place?
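To spell out the arithmetic (assuming every client and every cache node polls once per second):

```rust
fn main() {
    let (sources, caches, clients) = (5.0_f64, 100.0_f64, 2000.0_f64);

    // Assumption: every client and every cache node polls once per second.
    let direct = clients / sources;    // no cache layer: 400 q/s per source node
    let per_cache = clients / caches;  // with caches: 20 q/s per cache node
    let per_source = caches / sources; // and only 20 q/s per source node

    println!("direct:  {direct:.0} q/s per source node");
    println!("cached:  {per_cache:.0} q/s per cache node, {per_source:.0} q/s per source node");
    // Worst-case staleness roughly doubles: client poll period + cache poll period.
}
```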
Imo, this is the danger of overusing mutable types, especially for bulk data. You just can't cache it in a universal way.
Maybe one app is happy for the cache to be 1 hour old, but maybe another can't have data older than 5 seconds. How do you bridge that gap and have a shared cache?
With immutable data, it can be cached to the end of time. It will never change.
When we can cheaply use immutable data, append-only databases may become very useful.
We could set small chunk sizes (instead of the 4MB default today), with costs adjusting down to meet this lower requirement.
Then appending immutable changes becomes cost-effective and fast too. The client app would just need to know which immutable chunks to download to get up to date.
There would probably be a role for mutable data at the head of the changes as a buffer, so that small changes could be grouped and then persisted as immutable when the buffer fills. Iirc, mutant can do that sort of thing already.
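A minimal sketch of that pattern, with plain in-memory stand-ins for the chunk store and the scratchpad head (this isn't how mutant does it, just the general shape):

```rust
use std::collections::HashMap;

/// Content-addressed store standing in for immutable chunk uploads.
/// In a real app the key would be the chunk's self-encryption/XOR address.
#[derive(Default)]
struct ChunkStore(HashMap<u64, Vec<u8>>);

impl ChunkStore {
    fn put(&mut self, data: Vec<u8>) -> u64 {
        // Cheap stand-in for a content hash.
        let addr = data.iter().fold(0u64, |h, b| h.wrapping_mul(31).wrapping_add(*b as u64));
        self.0.insert(addr, data);
        addr
    }
}

/// Mutable "head" of the log: the address of the newest immutable chunk plus
/// a small buffer of entries not yet persisted. In practice this is what
/// you'd keep in a scratchpad.
#[derive(Default)]
struct Head {
    latest_chunk: Option<u64>,
    buffer: Vec<Vec<u8>>,
}

const FLUSH_AT: usize = 4; // assumption: flush after 4 buffered entries

fn append(head: &mut Head, store: &mut ChunkStore, entry: Vec<u8>) {
    head.buffer.push(entry);
    if head.buffer.len() >= FLUSH_AT {
        // Group the buffered entries into one immutable chunk. Embedding the
        // previous chunk's address makes the chunks a linked list the client
        // can walk backwards to catch up.
        let mut chunk = head.latest_chunk.map_or(vec![], |a| a.to_le_bytes().to_vec());
        for e in head.buffer.drain(..) {
            chunk.extend_from_slice(&e);
        }
        head.latest_chunk = Some(store.put(chunk));
        // A real app would now rewrite the scratchpad with this new head.
    }
}

fn main() {
    let (mut head, mut store) = (Head::default(), ChunkStore::default());
    for i in 0..10u8 {
        append(&mut head, &mut store, vec![i]);
    }
    println!("latest chunk: {:?}, still buffered: {}", head.latest_chunk, head.buffer.len());
}
```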
Native token (or another way to massively shrink gas prices) will unlock a lot of these design patterns which aren't yet economical. In the interim, scratchpad can bridge the gap though.
I mused on that for about 5s when creating the ETag handling and saw references to server-side caching among other things. I don't do much of that yet, nothing for resources in fact (just archives and history metadata).
I don't know much about that side of things, but expect there's a protocol somewhere for an app to control this, perhaps by saying the data must be no older than some maximum age (I think that may already exist in the ETag handling).
So we could have a sensible default for caching mutable data, but apps which need fine control can have it either through ETags and related headers, or some other existing protocol or something we come up with together.
I don't see much need for this yet, so I have not given it more than 5s of thought, but it seems feasible when apps begin to need it.
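For what it's worth, the standard HTTP machinery already has the shape needed for that "fine control". The sketch below is not how the existing ETag handling is implemented (that's an assumption on my part); it just shows the roles `Cache-Control: max-age` and `ETag`/`If-None-Match` would play:

```rust
/// Minimal picture of standard HTTP cache control for mutable data.
/// This is NOT an existing implementation; it only shows the header roles.

struct CachedEntry {
    etag: String,         // e.g. derived from the scratchpad's version counter
    body: Vec<u8>,        // the cached payload
    fetched_at_secs: u64, // when the cache last fetched it
}

/// What an app asks for; maps onto `Cache-Control: max-age=<n>`.
struct Freshness {
    max_age_secs: u64,
}

fn serve(entry: &CachedEntry, want: &Freshness, now_secs: u64, origin_etag: &str) -> &'static str {
    let age = now_secs - entry.fetched_at_secs;
    if age <= want.max_age_secs {
        return "200 OK (served straight from cache)";
    }
    // Too stale for this app: revalidate with the origin by sending
    // `If-None-Match: <cached etag>`. If the record hasn't changed, the
    // answer is 304 and the cached body is reused without a re-download.
    if origin_etag == entry.etag {
        "304 Not Modified -> reuse cached body"
    } else {
        "200 OK (changed upstream, refetched)"
    }
}

fn main() {
    let entry = CachedEntry { etag: "v42".into(), body: vec![0; 1024], fetched_at_secs: 100 };
    let strict = Freshness { max_age_secs: 5 };     // app that can't tolerate old data
    let relaxed = Freshness { max_age_secs: 3600 }; // app happy with an hour-old cache

    println!("{} bytes cached", entry.body.len());
    println!("strict app:  {}", serve(&entry, &strict, 160, "v42"));
    println!("relaxed app: {}", serve(&entry, &relaxed, 160, "v42"));
}
```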
The chunks were never "set to 4MB"; they were allowed to be a maximum of 4MB. Any size from 1 byte to 4MB.
If you want to implement an uploader that self-encrypts (or uses other encryption, or none) to a 1MB max, then you can, and the network will store those smaller records. Remember: the limit is a maximum, not a set size.
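In code, the "maximum, not a set size" point is just this (a hypothetical splitter, not the real self_encryption API):

```rust
/// Split a payload into records no larger than `max` bytes. The real
/// self-encryption step (or any other encryption) would then be applied
/// to each piece before upload; only the cap is fixed, not the size.
fn split_into_records(data: &[u8], max: usize) -> Vec<Vec<u8>> {
    data.chunks(max).map(|c| c.to_vec()).collect()
}

fn main() {
    const MAX_1MB: usize = 1024 * 1024; // our chosen maximum, well under the network's 4MB cap
    let payload = vec![0u8; 2_500_000];

    let records = split_into_records(&payload, MAX_1MB);
    // 2.5MB -> two full 1MB records plus one ~0.4MB record.
    for (i, r) in records.iter().enumerate() {
        println!("record {}: {} bytes", i, r.len());
    }
}
```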
Now, there has been talk of the required ANT cost being proportional to the size of the record. The record size is even among the values used for working out the cost. At this time it does not seem to affect the cost in a noticeable way.
I don't think we should be concerned, at least not yet. The network isn't heavily contended, and the team have said throttling node responses could prevent them from being overwhelmed.
However, I believe we're being pushed to overuse scratchpads right now due to their economics. Immutable data is core to the network design, and for good reason.
I suspect applications which only use mutable data where immutable data isn't practical will scale out the best. Putting everything onto scratchpads will surely cause a scaling challenge. However, from what I understand, it will likely just slow down the apps doing this.