Yes, but we can do better. Adding a receiver signature is optional, meaning we can still do exactly what the Bitcoin blockchain does, but we can also support another important use case where proof of acceptance is needed, for example to avoid being accused of obliging others to provide goods or services through unsolicited payments.
I am not talking about something that needs to be developed, because this is possible right now with the current SDs: in addition to the sender's signature, the signatures field can contain other signatures whose validity is checked by the routing layer. This is no longer possible with MDs, because the signatures field has been removed entirely, including the sender's signature! (Which also prevents static checks of the last transfer; that's another problem.)
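To make the idea concrete, here's a minimal sketch of the shape being described; the field names are illustrative only, not the real safe_core definitions:

```rust
// Hedged sketch, not the actual SD/MD types. The point: an SD carries a
// `signatures` set that can hold a receiver's signature next to the
// sender's, while an MD has no signatures field at all.
struct SdTransaction {
    data: Vec<u8>,              // serialized transaction payload
    owners: Vec<[u8; 32]>,      // current owner public keys
    signatures: Vec<[u8; 64]>,  // sender's signature, plus (optionally)
                                // the receiver's as proof of acceptance
}
```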
The coin is not locked, because you must get my signature before issuing the transaction. If I don't give it to you, then you simply don't create the transaction. Or you can create it without my signature, but then you take the risk that I later deny having wanted the transaction to happen. This is your choice, a choice you don't have with Bitcoin transactions.
In the screenshot you provided, I suppose that the “Transaction 2” MD is the final one, not a transient one. The owners field contains the pubkeyB value, which is needed to check the signature of previous_data. But it should also contain a dummy key to prevent mutation.
You will say that it can contain both keys, which would disallow mutation. But even though this field is a BTreeSet, multi-ownership is not implemented for MDs, and I doubt it will ever be implemented without signatures.
The transfer to a dummy key is introduced on the slide after the one in the screenshot, so it’s not complete. Its only goal was to explain the general idea.
The owner (pubkeyB in the screenshot) could also be stored in an additional data.owner field, I suppose, to retain it after the transfer to the dummy key, but it's also possible to find it in the previous transaction if we specify an input index. That would save some storage space.
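A rough sketch of the two options being weighed, with made-up types (not the real data layout):

```rust
// Illustrative only. Option A stores the current owner's key in the data
// itself; option B refers back to an output of the parent transaction by
// index, saving ~32 bytes per transaction MD.
struct Output {
    recipient: [u8; 32], // public key of the payee
    amount: u64,
}

struct TxData {
    parent: Option<[u8; 32]>, // address of the parent transaction MD (None = genesis)
    input_index: u8,          // option B: the owner key is the recipient of
                              // parent.outputs[input_index]
    owner: Option<[u8; 32]>,  // option A: keep the key here after the owners
                              // field has been set to the dummy key
    outputs: Vec<Output>,
}
```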
But doesn't this mean that validating a transaction becomes very computationally heavy after a while? I imagine that once lots of transactions have been done this way, the whole chain of transactions needs to be validated for each MD in a single transaction.
My guess is these transactions fit in 100 kB of MutableData. So if a coin already had 100 transactions, that would mean downloading 10 MB and reading back to the genesis one. I don't know if there's a different solution for this.
You only need to check your own branch. With an average of two child MDs per transaction and a total of 1,000,000 transactions, you only need about 20 reads (⌈log₂ 1,000,000⌉ = 20) to reach the genesis.
That example only holds for a balanced binary tree. In reality I think there will be long chains of transactions, unless a red-black tree implementation or something like that is used.
Yes, these are possible ways to store the current owner's public key. But a cleaner one would be to store both this public key and the dummy key in the owners field. Unfortunately, this works only for SDs and not for MDs.
Let's assume the average number of outputs in a transaction is 3. Then the above adds up to 256 bytes. There are some smaller fields that I left out, but they won't add much. Let's assume for convenience that the average transaction is 300 bytes.
If there's a history of 10K transactions to validate, that's 3 MB of data (10,000 × 300 bytes). To avoid having to download them one by one, which would be incredibly slow, the sender of the transaction could send a list of all the addresses in the transaction history rather than just the address of the last one. At 32 bytes per address, that list would in this case be 320 KB (10,000 × 32 bytes). The recipient can then download the transactions in parallel and verify them.
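A sketch of the parallel-fetch idea, with a hypothetical `fetch` function standing in for the real SAFE GET (in practice you'd also bound the number of concurrent requests):

```rust
use std::thread;

// The sender's address list lets us fetch the whole history concurrently
// instead of discovering it one hop at a time.
fn fetch_history(addresses: Vec<[u8; 32]>) -> Vec<Vec<u8>> {
    let handles: Vec<_> = addresses
        .into_iter()
        .map(|addr| thread::spawn(move || fetch(addr)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

// Hypothetical network GET; in reality this would be a safe_core call.
fn fetch(_addr: [u8; 32]) -> Vec<u8> {
    unimplemented!("placeholder for a network GET of the MD at this address")
}
```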
There'd be 10K signatures to verify. I read in a paper that a 2010 Intel Xeon quad-core CPU, which now costs a little over a hundred dollars, can verify 71K Ed25519 signatures per second. Even on an outdated desktop CPU, verifying those 10K signatures likely won't take longer than a second, assuming optimised software.
So I'm not worried about this. Even with hundreds of thousands or millions of transactions to verify in the very distant future (when computing will be even faster), I don't think verification will need to take longer than 5-10 seconds. Growth in computing speed may very well outpace growth in transaction history.
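If you want to sanity-check the throughput claim yourself, here's a rough benchmark sketch (assuming the ed25519-dalek 1.x and rand 0.7 crates; verifying one signature over a ~300-byte message, 10,000 times):

```rust
use ed25519_dalek::{Keypair, Signature, Signer, Verifier};
use rand::rngs::OsRng;
use std::time::Instant;

fn main() {
    let keypair = Keypair::generate(&mut OsRng);
    let msg = vec![0u8; 300]; // stand-in for a ~300-byte transaction
    let sig: Signature = keypair.sign(&msg);

    let start = Instant::now();
    for _ in 0..10_000 {
        keypair.public.verify(&msg, &sig).expect("signature must verify");
    }
    println!("10_000 verifications took {:?}", start.elapsed());
}
```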
This is a bit hand-wavy, like the early belief that growth in computing power and storage would overcome the growth of a blockchain.
One reason this is hard to determine is that growth in computing capability also facilitates growth in transactions, so the former is unlikely to help overcome the latter.
We've always had this kind of problem with computing though. For example, memory capacity of computers once increased beyond what we imagined software could usefully make use of. But that just stimulated us to look at more complex problems, or to trade away efficient use of storage for reduced programming effort through higher-level languages, etc.
So I’m not saying this isn’t a useful approach, but I think it is likely to face problems with long chains at some point, and that these will need to be addressed or we could end up with similar difficulties to blockchains.
It would be better overall if we could have a way of pruning or limiting the length of chains without undermining the integrity or introducing centralisation as a consequence. Safecoin is one such solution, but as we know that introduces other problems, so there’s a trade off being made here.
This falls in a different complexity class than a regular blockchain, where everything everyone does has to be received and validated by everyone. Here each participant's work is linear in their own branch; a blockchain's total validation work grows quadratically with the number of participants.
It doesn't have to be overcome if it's not an issue in the first place. It's contradictory to state that facilitated growth is a threat to the viability of the original use cases.
In general I'd say that the viability of the whole concept of SAFE relies on a degree of exponential growth. It doesn't have to be as strong as Moore's law, but it has to be there, or permanent data retention becomes impossible. In my view the growth requirement of this transaction concept is lower than SAFE's, so if SAFE is viable in the long term, then so is this.
Everything is self-contradictory if you examine it ad nauseam, because all ideas are relative, but I don't see your precise meaning here. No matter.
I agree that SAFE faces the same issue, and I suggest that we are all prone to erring by believing what we want to be true; the difficulty of determining the outcomes here is what gives us the leeway to do so. The unknown creates faith, though not blind faith, but faith informed by a mix of logic and experience.
I confess I have not thought anywhere near as deeply about either as you perhaps have, but I bring at least these questions to help sharpen the discussion. Well, it is sharpened for me anyway.
I am just not able to be satisfied with the “bigger, faster computers” argument as a solution for either SAFE or this. It worries me, but that doesn't undermine my enthusiasm or determination to help things along. Definitely give it a go!
Then what about the other part of each transaction? If Alice sends 0.3 coins to Bob and 0.7 coins to herself, then doesn’t Bob’s wallet need to verify both the 0.3 coin transaction and the 0.7 coin transaction?
Yes, but that's the same MutableData piece. The money going back to Alice doesn't go up the tree again; it's just a split at that point. The protocol reads back each transaction and always checks both payments. Only when Alice spends that money does the person after her read back the tree from her point of view. In the case you describe, there's no need to see what happens with Alice's value after the transaction. The wallet would just check the transaction (containing 2 payments). No need to go sideways on the chain, I think.
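A sketch of that walk-back, with made-up types for illustration: follow only the parent links of your own branch, check both payments of each transaction, and never step sideways into what Alice later did with her change output.

```rust
struct Tx {
    parent: Option<[u8; 32]>, // address of the parent transaction MD (None = genesis)
    outputs: Vec<u64>,        // the payments, e.g. 3 and 7 in indivisible units
    input_amount: u64,        // value of the parent output being spent
}

fn validate_branch<F>(mut addr: [u8; 32], fetch: F) -> Result<(), &'static str>
where
    F: Fn([u8; 32]) -> Option<Tx>, // hypothetical network GET
{
    loop {
        let tx = fetch(addr).ok_or("transaction not found")?;
        // both payments must be present and sum to the spent input
        if tx.outputs.iter().sum::<u64>() != tx.input_amount {
            return Err("outputs do not match the spent input");
        }
        match tx.parent {
            Some(parent_addr) => addr = parent_addr, // keep walking our branch
            None => return Ok(()),                   // reached the genesis
        }
    }
}
```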
OK, maybe this has already been explained, but what prevents Alice from making two transactions? For example, she sends 0.3 coins to Bob and then creates another transaction based on the same genesis coin, sending the same amount to Charlie. The original coin is burned by sending it to the dummy address, but she sends two separate transactions: one to Bob and another to Charlie, both based on the same genesis coin.
Alice doesn't send a transaction to Bob/Charlie; she uploads it to SAFE and then sends the address of the transaction to Bob. Because the address of a new transaction has to be equal to the hash of the signature over the parent transaction data (using Alice's public key as defined in the output of that parent transaction), there exists only one possible valid address for the new transaction. This is why Alice can't create two different transactions spending the same parent transaction output; Charlie would notice during validation that the double-spending transaction has an invalid address.
The ownership change of the transaction to a dummy public key is not meant to burn it, but to make the transaction immutable.
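Here's a minimal sketch of that address rule, with assumed primitives (SHA-256 and the ed25519-dalek 1.x crate; the actual scheme may use different choices). Ed25519 signatures are deterministic (RFC 8032), so for a given parent transaction and spender key there is exactly one signature, and hence exactly one valid child address; two conflicting spends of the same output would need the same address, which is impossible.

```rust
use ed25519_dalek::{Keypair, Signature, Signer};
use sha2::{Digest, Sha256};

// Derive the single valid address for a child transaction: the hash of
// the spender's signature over the parent transaction data.
fn child_address(parent_tx_bytes: &[u8], spender: &Keypair) -> [u8; 32] {
    let sig: Signature = spender.sign(parent_tx_bytes);
    Sha256::digest(&sig.to_bytes()).into()
}
```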
Okay … maybe this is a veeery dumb question … but if I got 30x 0.33 coin (because I get paid in coin and I earn 0.33 per hour), that would make 9.9 coin …
if I wanted to spend those 9.9 coins on my new SAFE-net fan shirt … wouldn't there be 30 transaction chains to verify, and not only one …? (yes, again linear … but with coins being spread and recombined wider and wider, at some point wouldn't half of all chains need to be verified Oo …?)
ps: oh, I think I just realized … no recombination possible, only many parallel strings starting a life of their own … would make a nice picture … and makes it not as computing-intensive Oo …
… if only there were a mechanism to summarize the events (say, when the chain reaches 1000 transactions …). pps: solved the “riddim verified lightning-network” (yeah, sometimes I should just not say what I think)
The current explanation excludes recombinations (multiple inputs), yes. I'm working on a way to allow them while reducing validation times, or at least keeping them equal to the current model.
I don't understand enough of how the network works to understand all this at once. On first pass, then, I'm wondering: the network is built to retrieve data rather than validate it. The impression I have is that some files are large and some are less than a single fragment's max size … and there are a number of instances of all fragments, enough that data can be found in a number of places.
So, forgive three easy questions:
How big is the “MutableData piece containing the transaction”? If it's just one fragment of data, does that risk any single instance of it being used to action a double spend, on the strength of it being a true instance?
What about Satoshi spam? … and relative to whatever is required to validate the good spends.
If you have multiple copies, then the process can only be as fast as the time it takes for each action to be acknowledged across all instances of the file?
just some brain-farts from me, because I like the idea (but am not sure about the scalability …)
ok - if there are signatures you trust (e.g. your own), you don't have to follow that path further once you reach such a trusted signature …
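My reading of that shortcut, as a sketch with made-up types: walk back as before, but halt as soon as a transaction is signed by a key in a trusted set, e.g. a payment you fully verified yourself earlier.

```rust
use std::collections::HashSet;

fn walk_until_trusted<F>(
    mut addr: [u8; 32],
    trusted_signers: &HashSet<[u8; 32]>,
    fetch: F, // hypothetical network GET returning (signer key, parent address)
) -> Result<(), &'static str>
where
    F: Fn([u8; 32]) -> Option<([u8; 32], Option<[u8; 32]>)>,
{
    loop {
        let (signer, parent) = fetch(addr).ok_or("transaction not found")?;
        if trusted_signers.contains(&signer) {
            return Ok(()); // trusted checkpoint: no need to go deeper
        }
        match parent {
            Some(p) => addr = p,
            None => return Ok(()), // genesis reached without a checkpoint
        }
    }
}
```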
but what do you do if someone you trust signed a faulty transaction …? Once smuggled in (with recombination and a trust system), it couldn't easily be separated from the correct branch again …
edit: maybe first things first … who else would be trustworthy besides yourself …? I would trust my family … but thinking globally, that wouldn't really solve the problem … who wouldn't be trustworthy …? Everybody who would earn more by betraying than they would lose when someone notices … so … someone with 100 legitimate coins probably wouldn't fake a 1-coin incoming transaction if that involves the risk of losing everything.
edit2: uhm … stupid thought again … wouldn't it be way easier and more secure if people didn't sign it themselves …? so … if the close group verified the transaction …? Then only one step (or, if you're really suspicious, 5 steps) would need to be verified …? (That of course raises the question of how to keep track of the current coin status if this were to be the Safecoin implementation … since Safecoin would no longer be “just a special datatype” but theoretically able to approach an infinite supply, depending on the coin generation/burning rate …)
edit3: or you keep following the chain down to the genesis blocks until decentralized computing is implemented in the SAFE network, and that might then become the trusted entity.