Transactions - a chained type to replace CRDT Registers

I think we should charge each at its max size and not try to get the actual size, mind you. It seems like more work and complexity if we get actual sizes, and there is something to be said for being efficient, I think?

4 Likes

That will lead to people trying to squeeze as much as possible out of each dtype; but that's most certainly okay and not an issue at all.

3 Likes

What about some early spitballing:

C == chunk MAX_SIZE 4MB
ScratchPad == 1C payment (MAX_SIZE == 0.25C == 1MB)
Transaction == 0.1C payment (MAX_SIZE == 0.125C == 0.5MB (limits the number of outputs though))
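The spitballed tiers above could be sketched like this; the constant and function names are illustrative only, not from any real codebase, and the key point is that each type is charged a flat fee for its max size rather than its actual size:

```python
# Hypothetical fee tiers from the spitball above; names are illustrative.
CHUNK_MAX_SIZE = 4 * 1024 * 1024  # 4 MB, the unit "C"

# dtype -> (max size in bytes, flat payment in C)
FEE_TIERS = {
    "chunk":       (CHUNK_MAX_SIZE,      1.0),
    "scratchpad":  (CHUNK_MAX_SIZE // 4, 1.0),  # 1 MB cap, full 1C payment
    "transaction": (CHUNK_MAX_SIZE // 8, 0.1),  # 0.5 MB cap, 0.1C payment
}

def payment_for(dtype: str) -> float:
    """Charge the flat per-type fee regardless of actual payload size."""
    _max_size, fee = FEE_TIERS[dtype]
    return fee
```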

It would be nice, though, if a TX linked to a chunk made the TX free, but that's extra work for nodes, to be sure??? But here we are spitballing, so let's spitball :smiley: :smiley:

5 Likes

My thought here is that rather than 0.1C, there is a minimum amount of C that is always charged no matter the size. After all, there is a certain amount of work the nodes do no matter the size stored. Maybe 0.2C + 0.1C for transactions, and 0.2C + 0.2C for 1MB (0.2 being 1/4 of 0.8).
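One way to read that proposal: a full 1C chunk payment splits into a fixed 0.2C base (the work every node does regardless of size) plus an 0.8C size-proportional part, so a 0.5MB transaction pays 0.2 + 0.8/8 = 0.3C and a 1MB scratchpad pays 0.2 + 0.8/4 = 0.4C. A minimal sketch, assuming that interpretation:

```python
# Assumed split of the 1C chunk fee: fixed base + size-proportional part.
CHUNK_BYTES = 4 * 1024 * 1024  # one "C" worth of storage (4 MB)
BASE_FEE = 0.2                 # fixed cost of the work nodes always do, in C
SIZE_FEE = 0.8                 # size-proportional part: a full 4 MB costs 0.8C

def fee_in_c(max_size_bytes: int) -> float:
    """Fee = fixed base + share of 0.8C proportional to the type's max size."""
    return BASE_FEE + SIZE_FEE * (max_size_bytes / CHUNK_BYTES)

fee_in_c(CHUNK_BYTES)       # full chunk: 0.2 + 0.8 = 1.0C
fee_in_c(CHUNK_BYTES // 8)  # 0.5 MB transaction: 0.2 + 0.1 = ~0.3C
fee_in_c(CHUNK_BYTES // 4)  # 1 MB scratchpad: 0.2 + 0.2 = ~0.4C
```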

5 Likes

What's the motivation to make scratchpad smaller than a chunk? To account for the rewrites?

Ha! Maybe that's a compromise indeed …?
Payments for the data itself make the TX free (so yes, a bit more work, but since all nodes only get paid if they do the work, that might be okay? They could check that the paid amount is about right for a chunk, and that the signer of the quote being referenced in the payment (?) is close to the referenced chunk in some way …?)
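The "signer is close to the chunk" check being spitballed could look roughly like this, assuming Kademlia-style XOR distance between 256-bit addresses; everything here (the rule, the names, the neighbour comparison) is an illustrative assumption, not the network's actual verification logic:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style XOR distance between two equal-length addresses."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def quote_signer_is_close(signer_id: bytes, chunk_addr: bytes,
                          neighbour_ids: list[bytes]) -> bool:
    """Illustrative rule: accept the quote if its signer is at least as close
    to the chunk as the farthest of the node's known close neighbours."""
    farthest = max(xor_distance(n, chunk_addr) for n in neighbour_ids)
    return xor_distance(signer_id, chunk_addr) <= farthest
```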

1 Like

…or maybe it's good enough if something that "looks like" a chunk payment is free? Depending on the rules chunk payments follow, it might be rather difficult to abuse the system there (and the TX amount would for sure be higher than the TX fee …)

1 Like

It's really to prevent folks from using it as a data store instead of chunks. It would just be misuse, really, but we should avoid that if possible.

3 Likes

The reason is deduplication probably, right? I think it’s worth mentioning.

1 Like

So we can have a max of ~6500 outputs per transaction (512KB / (32B + 48B))?
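Checking that back-of-envelope figure, assuming each output is a 32-byte address plus a 48-byte signature/key (the sizes implied above):

```python
TX_MAX_BYTES = 512 * 1024  # 0.5 MB transaction cap from the earlier spitball
OUTPUT_BYTES = 32 + 48     # assumed: 32-byte address + 48-byte signature/key

max_outputs = TX_MAX_BYTES // OUTPUT_BYTES  # 6553, i.e. the ~6500 figure
```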

Storing a file would most of the time probably cost about 2.3C anyway: a TX (0.3C), metadata (1C), and the real data (1C), so this 0.3C is negligible, roughly an order of magnitude less. So…

… maybe it's not worth the extra work, not to mention the added code complexity? Since we want things to be as straightforward and lightweight as possible.
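A quick sanity check on that cost estimate; all fees are the spitballed figures from this thread, not real network prices:

```python
# Spitballed fees from this thread, in C (1C = one chunk payment)
tx_fee = 0.3        # transaction, with the 0.2C base fee included
metadata_fee = 1.0  # one chunk of metadata
data_fee = 1.0      # one chunk of real data

total = tx_fee + metadata_fee + data_fee  # ~2.3C per small file
tx_share = tx_fee / total                 # the TX is only ~13% of the total
```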

3 Likes