Been reading through this some more and have had a few ‘eureka’ moments. Will need to share those another time. A few quick responses/comments below.
Yes true. What is really required is a third object to form a suitable basis for both.
Perhaps they should be the same thing and offer a uniform interface…
Maybe it shouldn’t grow indefinitely… just have a very large max size that you can quantify and make design decisions with.
A typical/standard Vec lives in volatile memory only. Eventually you will need to serialize it to disk for persistent storage. I figured that reuse of a SAFE data structure could handle this for you automagically.
I recall long conversations past with @neo about reference counting and data deletion. The consensus view was that it is very cumbersome and inefficient. From experience I know it carries serious performance penalties on local disk operations in Linux when you have lots of hard links to the same file. Do the chunks really need to be deleted? I’m not so sure. The previous standard method, letting a user delete the metadata to a private chunk (aka the “datamap”) but leaving the chunk on the network as garbage, is probably fine. Copy-on-write is as safe as it gets. I know there was some pushback from the community when dirvine asked about this. To some extent I think dirvine was too accommodating in trying to keep the “hive” happy, and that his first intuition (append-only with copy-on-write) is the preferable strategy. This was more of a concern when Safecoin was a unique data object rather than a section balance. With the balance method, those concerns are likely unfounded.
It also solves the “obfuscation at vaults” issue mentioned here.