I have seen many topics on this forum regarding safecoin divisibility, but they all appear to be rather dated or do not seem to give a solid answer to this question or go into crazy complicated hypothetical projections…
I would assume it is yes to allow micro payments but just wanted to confirm.
The capability is there either inherently, or via the alternative schemes that have been mooted. There is no doubt something will be needed if Safecoin appreciates, although dividing the coin is not the only possible way you could provide micropayments.
So there will be a solution, I don’t doubt that, and we’ll be homing in on precisely what as soon as the implementation of test Safecoin is underway.
It has been discussed a lot here - there are a couple of very good threads if you can find them. I'm sorry I don't have time to look them up, but @dyamanaka was very active on them.
A question like this was asked on Reddit and someone with the username dirvine replied. I don’t know if it was @dirvine but it seems to be from what he posted.
Couldn't we use the same method that is used to handle the micropayments for PUT requests? Each user would have an "income buffer" wallet which fills up as others pay them with micropayments; once a whole Safecoin's worth of value is in it, the network generates a real Safecoin and gives it to the owner.
Has this been proposed before? It sounds like an easy way to get divisibility without changing the network much.
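To make the "income buffer" idea concrete, here is a minimal sketch of how such a wallet might behave. All names and the choice of smallest unit are invented for illustration; this is not how the network is actually implemented.

```python
# Hypothetical sketch of the "income buffer" idea: micropayments accumulate
# in a numeric buffer, and once a whole Safecoin's worth has been collected
# the network would mint a real coin and debit the buffer.

UNITS_PER_COIN = 1_000_000  # assumed smallest unit: one millionth of a coin

class IncomeBuffer:
    def __init__(self):
        self.units = 0        # fractional balance, in smallest units
        self.whole_coins = 0  # real Safecoins credited to the owner

    def receive(self, units):
        """Credit a micropayment; convert full coins as they accumulate."""
        self.units += units
        while self.units >= UNITS_PER_COIN:
            self.units -= UNITS_PER_COIN
            self.whole_coins += 1  # stand-in for the network minting a coin

buf = IncomeBuffer()
for _ in range(7):
    buf.receive(300_000)           # seven payments of 0.3 coins each
print(buf.whole_coins, buf.units)  # 2 whole coins, 0.1 coin left in the buffer
```

The key property is that the fractional balance never reaches a whole coin: anything above that threshold is immediately converted, so the network itself only ever deals in whole Safecoins.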
Maybe it is an idea to consider, but I keep thinking that an added numeric wallet could be the right solution. There is a post from Ben Bollen about this possibility.
They’d only have to get it perfect right out of the gate if changing the protocol is made hard to do. I imagine this won’t be the case, because that would be a god awful thing to do, and bitcoin points out exactly why.
That's basically what I'm saying, but I see that I liked Ben's post, so apparently he planted the seed in my mind. What I'm adding is that when the numerical wallet reaches a value higher than 1, a Safecoin is generated and given to the owner of the wallet, and that value is subtracted from the numerical wallet.
I see some problems. If we use the numerical wallet to pay for, say, a PPV by seconds or minutes, what happens when the wallet reaches zero? Does the network automatically convert a Safecoin from the standard wallet into the numerical wallet? That could be very dangerous. And doing it manually could be very annoying, or could cause problems in automated tasks.
The truth is that I never liked binary division, and finding a solution that avoids it seems important to me.
Yes, I guess that's how it would work. But I don't see how it would be different with any other Safecoin divisibility scheme. In a PPV model you still need to pay as you go, either manually, which is a pain, or automatically. I fail to see how the implementation of the wallet changes that.
I proposed a variation on that using an SD with the same security (core processing) as the coin. The SD is like a note holding a value which is always less than one coin. For any payment the launcher supplies coins and these notes, and the core returns one note with the balance, which is always less than one coin, since the surrendered amount is within one coin of the amount needed. Any change worth exactly one coin is automatically turned back into a coin rather than being returned as a "note".
When a coin is split into one or more notes during a payment, the actual coin is "frozen" by the system. The system can take notes given to it with a combined value of one coin or more and return the coin which was frozen, plus any leftover in a note.
That way the security is the same, and the value of any note is relatively small. The network never has more than 2^32 coins' worth of Safecoin out there.
And it means we can divide the coin as much as we ever want. Maybe initially the division is 1/100ths, which would be useful now while a coin is worth 10 cents, but tomorrow (with a system upgrade) the note could be in 1/1000000ths. Any note held in 1/100ths keeps the same value, because it is the same value, just with fewer zeros, until it is used and gains the extra resolution. And we have literally tons of bits to keep dividing. I suggest using decimal divisions, since we as humans understand them better, and there is no issue with saving bits because an SD is 100KB: plenty of room to store the fraction in a fixed-size integer. Do not use a floating-point format, because then ultra-small fractions of coins would be lost.
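The upgrade path described above, where a note in a coarse resolution keeps its worth when the system moves to a finer one, amounts to a simple rescaling. A tiny illustration (the denominations are the ones mentioned above; the function name is made up):

```python
# A note stored in 1/100ths of a coin keeps its value when the system is
# upgraded to 1/1000000ths: the stored integer is rescaled by the ratio of
# the two resolutions, gaining zeros but not changing its worth.

OLD_DENOM = 100        # original resolution: hundredths of a coin
NEW_DENOM = 1_000_000  # upgraded resolution: millionths of a coin

def upgrade(old_units):
    """Re-express a note held in old units using the finer new units."""
    return old_units * (NEW_DENOM // OLD_DENOM)

print(upgrade(25))  # 0.25 of a coin: 25 hundredths becomes 250000 millionths
```

Because the new denomination is a power of ten multiple of the old one, the conversion is exact: no value is created or lost, which is the reason for preferring fixed-size decimal integers over floating point.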
Also, with Safecoin there is the issue that each coin is a physical SD that has to be transacted individually. That is, sending ten coins is ten times the transactions of sending one coin. This is why creating SAFEcent SDs is a problem: sending coinage could then be 100 times more expensive in terms of the system's transaction throughput. By using divisible notes, the transaction cost only increases by one, no matter how many "cents" or micro-cents are involved.
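The payment flow described in the posts above could be sketched as follows. This is a rough model under stated assumptions, not the actual core logic: one coin is taken to be 10^18 note units (matching the 18-digit proposal later in the thread), and a holding is simplified to whole coins plus at most one note.

```python
# Rough sketch of the divisible-note idea: a payment is made from whole coins
# plus one fractional "note", and the change comes back as whole (unfrozen)
# coins plus a single note worth less than one coin.

NOTE_SCALE = 10**18  # assumed: one coin = 10^18 note units (18 decimal digits)

def pay(coins_held, note_held, price_units):
    """Spend from (whole coins, one note); return the new (coins, note).

    price_units is the price expressed in note units.
    """
    total = coins_held * NOTE_SCALE + note_held
    if total < price_units:
        raise ValueError("insufficient funds")
    remainder = total - price_units
    # Change is whole coins plus one note strictly below one coin; change
    # worth exactly one coin comes back as a coin, never as a note.
    return remainder // NOTE_SCALE, remainder % NOTE_SCALE

coins, note = pay(3, 250_000_000_000_000_000, 1_750_000_000_000_000_000)
print(coins, note)  # 3.25 coins minus 1.75 coins: 1 coin and a 0.5-coin note
```

However fine the price is, the payer surrenders at most one note and receives back at most one note, which is the point about transaction cost only increasing by one.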
Having been out of school for a long time, I looked up "how many bits in a number", thinking it would be a very straightforward and code-light thing… but this?
Every integer has an equivalent representation in decimal and binary. Except for 0 and 1, the binary representation of an integer has more digits than its decimal counterpart. To find the number of binary digits (bits) corresponding to any given decimal integer, you could convert the decimal number to binary and count the bits. For example, the two-digit decimal integer 29 converts to the five-digit binary integer 11101. But there’s a way to compute the number of bits directly, without the conversion.
Sometimes you want to know, not how many bits are required for a specific integer, but how many are required for a d-digit integer — a range of integers. A range of integers has a range of bit counts. For example, four-digit decimal integers require between 10 and 14 bits. For any d-digit range, you might want to know its minimum, maximum, or average number of bits. Those values can be computed directly as well.
In this article, I will show you those calculations. I will be discussing pure binary and decimal, not computer encodings like two’s complement, fixed-point, floating-point, or BCD. All of the discussion assumes positive integers, although it applies to negative integers if you temporarily ignore their minus signs. 0 is a special case not covered by the formulas, but obviously it has only 1 bit.
(I use the terms decimal integer and binary integer when I really mean “an integer expressed in decimal numerals” and “an integer expressed in binary numerals”. An integer is an integer, independent of its base.)
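The direct computations the excerpt refers to are short in practice. A sketch of both: the bit count of a specific positive integer is floor(log2(n)) + 1, and the minimum and maximum bit counts for d-digit decimal integers follow from the smallest (10^(d-1)) and largest (10^d - 1) members of that range.

```python
# Number of bits in a positive integer, and the min/max bit counts for the
# range of d-digit decimal integers, computed without converting to binary.

def bits_in(n):
    """Bits needed for positive n; equivalent to floor(log2(n)) + 1."""
    return n.bit_length()

def bits_for_digits(d):
    """(min, max) bits needed across all d-digit decimal integers."""
    lo = (10**(d - 1)).bit_length()  # fewest bits any d-digit integer needs
    hi = (10**d - 1).bit_length()    # most bits any d-digit integer needs
    return lo, hi

print(bits_in(29))         # 5, since 29 is 11101 in binary
print(bits_for_digits(4))  # (10, 14), matching the example above
```

Python's `int.bit_length()` avoids the floating-point pitfalls of computing log2 directly on large integers, which is why it is used here instead of `math.log2`.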
I only mention the tons of bits to illustrate the possibilities. In any real proposal we would stick with 64 bits. It could equally be 128 bits.
If in the future we need more division, the core could be made to recognise two versions of the note: the first version uses the number of bits decided now (64 or 128), and the new version doubles the bits. The core always uses the larger when storing a value. 2^128 is an almost inconceivably large number, so I doubt we need more than 128 bits for this.
64 bits allows 19 digits of accuracy, and if we want signed values it works out to only 18 digits. Let's stick with 18 digits. This is a form of fixed precision, with every digit being a decimal place.
Then 0.125 is stored as 125000000000000000
It is easy to do maths on this, and there is no need for BCD (one digit per nibble) or storing one digit per byte. Adding two values together cannot exceed the capacity of an integer of that many bits. If the sum exceeds 999999999999999999 after the add, then a coin is released from being frozen and returned in addition to the note; we subtract 10^18 from the new note's value to account for the coin's worth it exceeded.
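The addition-with-carry step just described can be sketched in a few lines. This assumes the 18-digit fixed-point representation above (one coin = 10^18 units); the function name is illustrative only.

```python
# Fixed-point note addition: fractions are integers counting 10^-18ths of a
# coin, and a sum that reaches one coin's worth releases a frozen coin and
# keeps only the remainder in the note.

SCALE = 10**18  # 18 decimal digits; 0.125 of a coin is stored as 125 * 10**15

def add_notes(a_units, b_units):
    """Add two note values; return (coins_released, remaining_note_units)."""
    # Each operand is below 10^18, so the sum stays below 2 * 10^18 and
    # comfortably fits a signed 64-bit integer (max ~9.2 * 10^18).
    total = a_units + b_units
    coins = total // SCALE   # whole coins to un-freeze and return
    return coins, total % SCALE  # note balance stays below one coin

eighth = 125_000_000_000_000_000      # 0.125 of a coin
coins, note = add_notes(5 * eighth, 4 * eighth)
print(coins, note)  # 0.625 + 0.5 = 1.125: one coin released, 0.125 remains
```

Integer division and modulus give the carry and remainder exactly, with none of the lost ultra-small fractions a floating-point format would produce.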
Then all that is needed is to specify the number of places the current system wishes to work in: 1/100ths or millionths, etc. Only do this if we want it. Maybe it can be an account setting (for viewing value, not for rounding or suchlike).
TL;DR: the actual value of the fraction is stored in an integer, using the maximum number of digits the "int" can contain, as a count of that very small fraction.
I used to be a bit against splitting coins being the solution, as it would break coloured coin style approaches on the original coin. However, since understanding more about the structured data types, there is no need for coloured coins - we can create other tokens for those easily.
Moreover, splitting is highly scalable. If we can split over and over, there is no need to worry about not having small enough denominations.
So if we use 128-bit integers then we have about 38 digits (signed) of fractional space, which works out to only hundreds of atoms of the earth per unit of the divided coin. But if we went with 256 bits then we have about 76 digits (signed), which is on the order of 10^35 units of the divided coin per atom of the earth.
Comparing with atoms or grams in the earth is an attempt to give some perspective to the scale of these divisions.
So if one considers a 256-bit integer for the division amount, then resource-wise we are talking of many units of division per atom of all the planets in the solar system. There is no practical way we could even attempt to make sensible use of divisions that small for a very, very long time.
Even 128 bits, which gives billions of division units per gram of the earth, would take a very, very long time to make full use of.
Even with 64 bits and its 18 digits, it is hard to conceive how we could make full use of its division size in the foreseeable future.
I would say that 32 bits would be enough for the foreseeable future, but 64 bits is just as quick and easy, so I'd go for a 64-bit (18-digit) integer to hold the divided coin value, and it's future-proof to boot.