Will saving a file use 3 PUTs (transaction, metadata, data) then? And also 3 GETs? This looks like a performance problem to me.
Does one transaction take one record and one xor address? This means that the "pay once, update many times" feature of Registers is gone here, right?
Could the content field be bigger? Given the network protocol overhead/latency, it could probably happily be 500 bytes or even 2 kB without any significant effect on speed, while allowing more data to be saved in place, like file metadata, and sparing additional PUTs/GETs.
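Just to make that suggestion concrete, here is a rough, hypothetical sketch (the struct and field names are invented, not taken from the actual datatype) of typical file metadata that would comfortably fit inline in a ~2 kB content field:

```rust
// Hypothetical example: typical file metadata is tiny compared to a ~2 kB
// content field, so it could ride along with the transaction instead of
// costing an extra PUT/GET. All names here are made up for illustration.
struct FileMetadata {
    name: String,            // e.g. "holiday_photo.jpg"
    size_bytes: u64,
    mime_type: String,       // e.g. "image/jpeg"
    modified_secs: u64,      // unix timestamp
    data_map_addr: [u8; 32], // xor address of the actual file data
}

fn approx_encoded_len(m: &FileMetadata) -> usize {
    // Rough upper bound for a simple length-prefixed binary encoding.
    m.name.len() + m.mime_type.len()
        + std::mem::size_of_val(&m.size_bytes)
        + std::mem::size_of_val(&m.modified_secs)
        + m.data_map_addr.len()
        + 2 * 4 // two u32 length prefixes for the strings
}

fn main() {
    let meta = FileMetadata {
        name: "holiday_photo.jpg".into(),
        size_bytes: 3_145_728,
        mime_type: "image/jpeg".into(),
        modified_secs: 1_700_000_000,
        data_map_addr: [0u8; 32],
    };
    let len = approx_encoded_len(&meta);
    println!("~{} bytes of metadata vs a hypothetical 2048-byte content field", len);
    assert!(len < 2048);
}
```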
Will the address of the transaction be selected by the user, like with registers? Is that only for the genesis transaction, or for every transaction?
Maybe instead of just the parent, we could save the parent, the 10th ancestor, the 100th, the 1000th etc.? Then going back to genesis would not be a problem, and it's O(log n) complexity, so quite good perhaps?
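A minimal sketch of what I mean, assuming nothing about the real datatype (addresses and fields are invented; it uses powers of two rather than tens, but the idea is the same classic binary lifting):

```rust
// Skip-ancestor sketch (my own illustration, not the proposed datatype):
// besides its direct parent, every transaction also stores a link to its
// 2nd, 4th, 8th, ... ancestor, so walking back to genesis takes O(log n)
// hops instead of O(n).
use std::collections::HashMap;

type Addr = u64; // stand-in for a real xor address

struct Tx {
    // ancestors[k] points 2^k steps back: [parent, grandparent, 4th, 8th, ...]
    ancestors: Vec<Addr>,
}

/// Build the skip links for a new transaction from its parent's links.
fn skip_links(parent: Addr, store: &HashMap<Addr, Tx>) -> Vec<Addr> {
    let mut links = vec![parent];
    loop {
        let k = links.len();
        // The 2^k-th ancestor is the 2^(k-1)-th ancestor of our 2^(k-1)-th ancestor.
        match store.get(&links[k - 1]).and_then(|t| t.ancestors.get(k - 1)) {
            Some(&next) => links.push(next),
            None => break,
        }
    }
    links
}

/// Count hops back to genesis, always taking the longest available jump.
fn hops_to_genesis(mut addr: Addr, store: &HashMap<Addr, Tx>) -> u32 {
    let mut hops = 0;
    while let Some(tx) = store.get(&addr) {
        match tx.ancestors.last() {
            Some(&next) => { addr = next; hops += 1; }
            None => break, // genesis has no ancestors
        }
    }
    hops
}

fn main() {
    let mut store = HashMap::new();
    store.insert(0, Tx { ancestors: vec![] }); // genesis at address 0
    for addr in 1..=1_000u64 {
        let links = skip_links(addr - 1, &store); // parent is the previous tx
        store.insert(addr, Tx { ancestors: links });
    }
    // 1000 transactions deep, but only ~10 hops back to genesis.
    println!("hops from tip to genesis: {}", hops_to_genesis(1_000, &store));
}
```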
What happens if the client detects a bad transaction (verification fails)? When a node is doing a check in registers, it reports the bad node, but what is the process with the client and transactions? Does the client also report a node that broadcasts broken transactions? Is it the same process?
I cannot understand the outputs part. Is it like this: when transaction T1 has outputs T2 and T3, that means T1 is a parent of T2? How is that possible, when we don't know the future? Or could the outputs maybe be not only transactions, but any arbitrary data?
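In case it helps the discussion, here is one possible reading, expressed as a hypothetical sketch (entirely my guess, field names invented): the outputs would not be the future transactions themselves, but the addresses T1 pre-commits for its descendants, and the child links back when it is created later:

```rust
// Guessed reading, purely illustrative: T1 does not point at concrete future
// transactions. Its outputs only pre-commit the addresses where descendants
// are allowed to appear; the child T2 is created later and points back at T1.
struct Transaction {
    parent: Option<u64>, // address of the parent tx (None for genesis)
    outputs: Vec<u64>,   // addresses where descendants may later appear
}

// T2 is a valid child of T1 if T1 listed T2's address among its outputs and
// T2 points back at T1 -- no knowledge of the future needed when signing T1.
fn is_valid_child(parent_addr: u64, parent: &Transaction,
                  child_addr: u64, child: &Transaction) -> bool {
    child.parent == Some(parent_addr) && parent.outputs.contains(&child_addr)
}

fn main() {
    let t1 = Transaction { parent: None, outputs: vec![42] }; // stored at address 7
    let t2 = Transaction { parent: Some(7), outputs: vec![] }; // stored at address 42
    println!("{}", is_valid_child(7, &t1, 42, &t2)); // true
}
```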
Can we perhaps not deprecate registers until the new datatype proves itself stable?