That’s why I ask for it to be O(1) as part of the requirements.
Getting the latest value of an infinitely growing AD must be constant time for it to be feasible long term.
At most O(log n) to still be feasible in the above, and then it will of course degrade performance correspondingly.
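To make that concrete, here is a minimal sketch of what the requirement means in practice. The types are hypothetical (this is not the actual SAFE API): if clients can only reach the newest entry by walking a linked chain of chunks, the read is O(n) in history length, whereas a head pointer updated on every append keeps it at O(1) no matter how far the AD has grown.

```rust
use std::collections::HashMap;

type Addr = u64; // stand-in for a network address

struct Chunk {
    value: Vec<u8>,
    next: Option<Addr>, // filled in when a successor is appended
}

#[derive(Default)]
struct AppendableData {
    chunks: HashMap<Addr, Chunk>, // stand-in for network storage
    genesis: Option<Addr>,
    head: Option<Addr>, // the O(1) handle to the newest entry
    next_addr: Addr,
}

impl AppendableData {
    fn append(&mut self, value: Vec<u8>) {
        let addr = self.next_addr;
        self.next_addr += 1;
        self.chunks.insert(addr, Chunk { value, next: None });
        match self.head {
            Some(h) => self.chunks.get_mut(&h).unwrap().next = Some(addr),
            None => self.genesis = Some(addr),
        }
        self.head = Some(addr);
    }

    /// O(1): one lookup via the head pointer, independent of history size.
    fn latest(&self) -> Option<&[u8]> {
        self.head.map(|a| self.chunks[&a].value.as_slice())
    }

    /// O(n): the traversal we need to avoid for an ever-growing AD.
    fn latest_by_walking(&self) -> Option<&[u8]> {
        let mut cur = self.genesis?;
        while let Some(next) = self.chunks[&cur].next {
            cur = next;
        }
        Some(self.chunks[&cur].value.as_slice())
    }
}
```

The whole point is that `latest` does the same amount of work for entry number ten as for entry number ten billion.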
If that cannot be reached, then they need to be swapped out regularly. And for truly enormous data sizes (I haven't checked how big), we would eventually need to create a new account, so as to get rid of the previously filled-up app container.
This is a scenario I lay out in the Appendable data discussion topic.
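Roughly, the swapping could look like this; the `RotatingLog` wrapper and its threshold are my own assumptions for illustration, reusing the `AppendableData` type from the sketch above:

```rust
/// Hypothetical wrapper: retire a filled-up AD and continue in a fresh
/// one, keeping the old ones around for archival reads.
#[derive(Default)]
struct RotatingLog {
    current: AppendableData,
    archived: Vec<AppendableData>,
    max_entries: usize, // assumed rotation threshold
}

impl RotatingLog {
    fn append(&mut self, value: Vec<u8>) {
        if self.current.chunks.len() >= self.max_entries {
            // Swap out the filled-up AD and start a fresh one.
            let old = std::mem::take(&mut self.current);
            self.archived.push(old);
        }
        self.current.append(value);
    }

    /// Latest-value reads only ever touch the current, bounded AD.
    fn latest(&self) -> Option<&[u8]> {
        self.current.latest()
    }
}
```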
This is from the draft I wrote the other day; I didn't include it in this post:
So, it would work in such a way that we will see eventual consistency corresponding to how often we compact, or we face the need to replicate the in-mem table (in which case we are more likely to be using dedicated database servers that clients connect to, instead of writing from each client).
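A minimal sketch of that compaction idea, again with assumed types and a made-up flush threshold: writes land in an in-memory table first and are flushed to the AD in batches, so readers going through the AD alone see state that lags by at most one compaction interval. That lag is the eventual consistency referred to above.

```rust
use std::collections::BTreeMap;

/// Hypothetical in-mem table; latest write wins per key until compaction.
#[derive(Default)]
struct MemTable {
    entries: BTreeMap<String, Vec<u8>>,
    flush_at: usize, // assumed compaction threshold
}

impl MemTable {
    fn put(&mut self, key: String, value: Vec<u8>, ad: &mut AppendableData) {
        self.entries.insert(key, value);
        if self.entries.len() >= self.flush_at {
            self.compact(ad);
        }
    }

    /// Flush the buffered writes into the AD as one batch. Until this
    /// runs, readers of the AD do not see the buffered entries.
    fn compact(&mut self, ad: &mut AppendableData) {
        for (key, value) in std::mem::take(&mut self.entries) {
            // Toy encoding, just for the sketch: "key=value" per entry.
            let mut record = key.into_bytes();
            record.push(b'=');
            record.extend(value);
            ad.append(record);
        }
    }
}
```

If the in-mem table instead has to be replicated so that several writers share it, we are effectively back at dedicated database servers, as described above.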