Well, payments for uploads will always be needed, so there isn’t so much a problem with them. People doing the uploads need the nanos (or smaller, early on) and node runners have them aplenty.
So we end up with money changers. Maybe trading directly with the node operator with no fees, and maybe with a “professional” money changer who charges a small fee for the service. The money changer needs to ensure a supply of both very tiny denominations and also of larger ones (like 1, 0.1, 0.001, etc.), and if they do a good job they should never need to seek out a supply of any denomination.
The node operator wants to offload their tiny denominations and the uploader wants them: a simple trade. The money changer handles both directions and should only need a float (as in a cash register float of various denominations).
Throwing a curved ball into this. Why did they abandon digital bearer certificates? (There’s actually an implementation of this I believe by the guy who invented the idea, using Blockchain I think.)
It was a really great tech - discussed here before - but had one weakness: the need to trust a centralised mint to validate transactions.
I understood that the network could perform that function and solve that problem, and for a long time that was the approach MaidSafe were working on. They abandoned it in favour of the DAG ledger but I don’t recall why, or perhaps no explanation was given.
Maybe it was too anonymous and that killed it, rather than any technical problem, because the change came around the time the regulatory environment started to get hostile.
When this was the plan we were promised all sorts of brilliant features and characteristics: offline transactions, no DAG/ledger, maximum privacy with hidden inputs and outputs. All the network had to do was tell a client whether a transaction was valid or not. Almost all the work was done offline.
Maybe we can clarify why that was dropped and look at implementing it. Do you remember @mav? Seems something you’d have taken an interest in. There’s probably working code for parts, maybe in private repos, but I expect most of it is fairly straightforward for someone with the skills. I don’t remember if it was tested outside.
A good thing about going back even further, to one unit of token per data record, is that the owner of the data record is the owner of the unit: no further validation is needed, and they simply transfer ownership to the new user to send it. Use denominations to get smaller than one token. And then we are back even further than DBCs.
Obviously the data record contains a signed message from the issuer, so the receiver knows it’s a real ANT unit. In the case of the native token, that would be the Autonomi Foundation, which generates them.
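Just to make that concrete, here’s a rough sketch of what such a record could look like, using a hypothetical `TokenRecord` struct and blsttc-style BLS keys. None of this is the actual Autonomi data model; the field names and signing scheme are assumptions for illustration only.

```rust
// Hypothetical sketch only: not the real Autonomi record format.
// Assumes the blsttc crate for BLS keys and signatures.
use blsttc::{PublicKey, SecretKey, Signature};

/// One issued unit (or denomination) of the token, held as a data record.
struct TokenRecord {
    /// Unique serial so the issuer's signature can't be copied onto fakes.
    serial: u64,
    /// Denomination in the smallest unit (e.g. nanos).
    denomination: u64,
    /// Current owner; only they may sign a transfer to a new owner.
    owner: PublicKey,
    /// Issuer's signature over (serial, denomination).
    issuer_sig: Signature,
}

impl TokenRecord {
    /// Message the issuer signs: serial plus denomination.
    fn issue_msg(serial: u64, denomination: u64) -> Vec<u8> {
        [serial.to_le_bytes(), denomination.to_le_bytes()].concat()
    }

    /// The receiver checks the signature against the well-known issuer key
    /// (e.g. the Autonomi Foundation) before accepting the unit as payment.
    fn is_genuine(&self, issuer: &PublicKey) -> bool {
        issuer.verify(&self.issuer_sig, Self::issue_msg(self.serial, self.denomination))
    }
}

fn main() {
    let issuer_sk = SecretKey::random();
    let owner_sk = SecretKey::random();

    let record = TokenRecord {
        serial: 1,
        denomination: 100,
        owner: owner_sk.public_key(),
        issuer_sig: issuer_sk.sign(TokenRecord::issue_msg(1, 100)),
    };

    assert!(record.is_genuine(&issuer_sk.public_key()));
}
```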
I think this is a separate topic, but it would be worth discussing and I would very much like to know the reasons too … maybe not the right moment to discuss this here now … but maybe @dirvine or @Bux would care to clarify some of the reasoning for us …
for me personally, I’ve seen some minor issues in the initial Safecoin design with owned data, but no large discussions about them (to possibly find a solution?)
why DBCs were abandoned is a bit opaque to me too … I think it was just said that the DAG would be superior, without much of an explanation … (but it was pretty straightforward too and I didn’t hear of big issues anywhere there either)
with the DAG we’ve seen significant speed issues and it comes with the need to trust validator nodes … which imho is a big loss …
…this is no attack and no criticism … it’s just a plea for an explanation to be able to understand why we want to go this route that looks so difficult and complex to me …
The problem with that is with splitting. It’s why every implementation so far uses values with lots of divisibility built in. It wasn’t solved here, and I think that’s one reason they went to DBCs.
If you use data you can’t have enough units, so you end up with the splitting problem.
DBCs avoid that and the need for a DAG or ledger. Why were they abandoned? That is the key question. Was it technical or political?
That has some pros and plenty of cons IMO. Fungibility, managing your wallet holdings of different units, conversions, predicting proportions for a new kind of economy, all those ownership changes. Yuk.
Using values is simple, it works, and with DBCs it provides extra capabilities over using data with ownership.
maybe first things first - ownership of data (decoupling address and owner) would be pretty cool anyway … and maybe we’ll get into some discussion with the team before they “just implement the right solution”
Sadly, they have no history of doing such. IMO, we are on our own and a fork will ultimately have to be our solution. I’d love to be wrong on this BTW, but that’s what I’ve seen over the many years.
Don’t be too hasty to jump to conclusions here please. Let me try to find out from David why this was sidelined. I can’t remember off the top of my head.
I hope you are correct and they do have an interest here. I wonder, though, whether with the regulations regarding tokens it may simply be too high a hurdle in terms of risk for them to implement a native token.
For example, there’s the Financial Action Task Force’s (FATF) Travel Rule, which mandates that crypto service providers collect and share users’ transaction data, similar to traditional finance requirements.
If we have a native token, we will also have to have a native exchange. Even if it’s DeFi, this may well run afoul of FATF regs unless there is a built-in method for tracking and reporting transactions. I hope I’m wrong about this, but even if I am, these rules are still changing and Maidsafe may simply not want to risk it unless the rules open a path here and say it’s a-okay to do it (and frankly we know they won’t do that).
So again, I think a fork and a community run effort is the pragmatic way forward with native token.
@chriso the POC is working pretty well now, so it would be great to get some eyes on it.
AntTP branch to work with the client library changes:
Autonomi client/ant_protocol/ant_node changes to allow ownership changes:
To summarise:
Instead of just owner, there are now three distinct fields: owner, address and previous_owner (which is really a signer, depending on context).
Given we already check that the count has been incremented, the prior record must already have been retrieved. This code allows us to check the previous_owner against the local/last owner. If they don’t match, permission is denied to make further changes (the rule is sketched below).
I confirmed that user A could create, edit and transfer a pointer to user B. Once the transfer was made, A could do no further edits. I then confirmed B could edit and transfer back to A. Once transferred, B could do no more edits. I then confirmed A could make edits again. This is in line with ownership expectations.
Neither the client library nor AntTP does sufficient validation to expose all of the above (yet!). Put simply, ant nodes will reject the change and it will fail silently (unless you’re watching the node logs, ofc). This area could be greatly improved to allow the client/node to tally though.
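For anyone who just wants the shape of the rule without reading the branches, here is a rough sketch of the check as I understand it. The type and field names are illustrative, not the actual ant_protocol definitions.

```rust
// Illustrative sketch of the ownership rule; not the actual ant_node code.
use blsttc::PublicKey;

struct PointerRecord {
    address: [u8; 32],         // stays fixed for the life of the pointer
    owner: PublicKey,          // who may make the *next* change
    previous_owner: PublicKey, // signer of this change (the prior owner)
    counter: u64,
}

enum Verdict {
    Accept,
    Reject(&'static str),
}

/// Validate a proposed update against the record the node already holds.
fn validate_update(prior: &PointerRecord, proposed: &PointerRecord) -> Verdict {
    if proposed.address != prior.address {
        return Verdict::Reject("address is immutable");
    }
    if proposed.counter <= prior.counter {
        return Verdict::Reject("counter must be incremented");
    }
    // The signer of the new record must be the owner recorded previously.
    // After a transfer, the old owner no longer matches and is locked out.
    if proposed.previous_owner != prior.owner {
        return Verdict::Reject("signer is not the current owner");
    }
    Verdict::Accept
}
```

That matches the behaviour described below: once A transfers to B, A’s key no longer equals the stored owner, so further edits from A are rejected.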
What does this provide as a dev/client experience?
We can now create pointers at an address by using a shared seed (private) key to derive a public key based on a string/word/phrase. Anyone can look up the address based on this key, providing key/value retrieval (turning Autonomi into a huge key/value database, along with everything else).
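For illustration, something along these lines, assuming blsttc’s derive_child is available for child-key derivation (the helper name derive_pointer_key is made up; the real client API may differ):

```rust
// Illustrative only: shows how a shared seed key plus a human-readable name
// can yield a deterministic pointer key that anyone with the seed can
// recompute. Uses blsttc's derive_child; the real client API may differ.
use blsttc::SecretKey;

/// Derive the key for a named pointer from a shared seed.
fn derive_pointer_key(seed: &SecretKey, name: &str) -> SecretKey {
    seed.derive_child(name.as_bytes())
}

fn main() {
    let seed = SecretKey::random(); // shared out-of-band between participants

    // Both sides derive the same key, so both can compute the same address.
    let writer_key = derive_pointer_key(&seed, "forum/thread-42/comments");
    let reader_key = derive_pointer_key(&seed, "forum/thread-42/comments");

    assert_eq!(writer_key.public_key(), reader_key.public_key());
    // The pointer's network address is derived from this public key,
    // giving key/value-style lookup by name.
}
```

Both parties end up with the same public key, so the pointer address is predictable from (seed, name) alone.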
We can transfer pointers between different users (public keys), preserving the address and content. Given the address is still unique, but easier to derive, this could be used for NFTs, allowing ownership to change hands. It could also be used for generic tokens, such as native ANTs.
No history is retained beyond the previous_owner, which could easily be wiped by the current owner making a change to the pointer (e.g. counter bump).
Transfers and ownership give us some interesting new patterns, including:
Shuttling pointers back and forth, containing some data, e.g. private chat. As pointers are free to update, this could be an interesting short messaging system (just spitballing! ha!)
Using addresses derived from a shared private key means pointers with derived addresses could be owned by different people. This could be useful for arbitrary message queues or lists, including forums, comments, etc., with the content either short or linked to an immutable, etc. (a rough sketch follows below).
Probably lots more stuff too, but those just came to me while playing about with this POC.
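As a quick illustration of the derived-address list idea above (all names are hypothetical, and the actual fetch would of course go through the client library rather than a print statement):

```rust
// Hypothetical pattern sketch: a comment list built from sequentially
// derived pointer keys ("comments/0", "comments/1", ...). Each entry
// could be owned by a different poster, while anyone with the seed can
// derive the keys and read the entries.
use blsttc::{PublicKey, SecretKey};

fn comment_key(seed: &SecretKey, index: u64) -> PublicKey {
    seed.derive_child(format!("comments/{index}").as_bytes())
        .public_key()
}

fn main() {
    let seed = SecretKey::random();

    // A reader walks the indices in order; in practice it would stop when a
    // pointer lookup at the derived address fails.
    for index in 0..3u64 {
        let key = comment_key(&seed, index);
        println!("comment {index} lives at derived key {key:?}");
    }
}
```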
I hope the idea isn’t flawed and garners support, as it really does open up a lot of possibilities. Extending the ownership model to graph entries, scratchpads, registers, etc, could unlock many more cool use cases too. And… of course… native currency, perhaps with its own type and properties!
Thank you for taking the time with this. I have passed the message on to Anselme in our Slack channel and am really hoping he will get some time to take a look at it soon. If not, I will keep persisting to try and get someone to look at it.
I am a fan of a physical store of value converted to a representative share of the total amount of resource currently available; this concept really goes back to the ‘proof of resource’ work.
i.e. we all spent money on a device, so fiat was converted to an asset of value: the device.
That purchased device is in turn converted by the NodeRunner from a locked and depreciating store of value into a logical share of resource value, as the AI knowledge base used in your OneToken run ‘implicitly’ points out: the logical representation of the resource value is a percentage of the overall value of resources in the collectively connected network, which keeps changing…
Which means that at any point in time the CPU speed, number of cores, RAM, storage capacity and network bandwidth of the system deployed by the NodeRunner hosting multiple antnodes can be represented by a ‘proof of resource composite index’, where CPU, core count and model can actually be recorded and stored using known CPU/core/RAM benchmarks which are closely aligned to the actual functional use cases of the network; the same goes for bandwidth and I/O to storage.
So what is required is a PoR index built for the system hosting the nodes; that PoR index is then published as part of the quote. This can and will in most cases influence the client’s upload choice as a form of price/performance discovery. The choice can be controlled by the client uploader via a script on their own uploading system, which receives each quote together with the initial PoR index of each antnode (to be stored in, say, those nodes’ scratchpad memory; operators may/can upgrade their systems to make them go faster). Before committing, the uploader views each quote received, compares across quotes and, based on the uploader’s own rule set, selects a quote, likely influenced by the system’s price/performance index as reported by the quoting node.
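To make the idea more tangible, here is a toy sketch of what a composite PoR index and an uploader-side selection rule could look like; all struct names, weights and figures are invented for illustration and nothing here reflects the current quoting protocol:

```rust
// Toy sketch of a "proof of resource composite index" and the uploader-side
// comparison it could drive. Field names, weights and numbers are invented.

/// Benchmark-style figures a node operator might publish alongside a quote.
struct PorIndex {
    cpu_score: f64,       // e.g. a normalised CPU benchmark result
    ram_gb: f64,
    storage_gb: f64,
    bandwidth_mbps: f64,
}

impl PorIndex {
    /// Collapse the figures into one number using arbitrary example weights.
    fn composite(&self) -> f64 {
        0.35 * self.cpu_score
            + 0.15 * self.ram_gb
            + 0.20 * self.storage_gb
            + 0.30 * self.bandwidth_mbps
    }
}

struct Quote {
    price_nanos: u64,
    por: PorIndex,
}

/// The uploader's rule set: pick the quote with the best performance per nano.
fn pick_quote(quotes: &[Quote]) -> Option<&Quote> {
    quotes.iter().max_by(|a, b| {
        let a_value = a.por.composite() / a.price_nanos as f64;
        let b_value = b.por.composite() / b.price_nanos as f64;
        a_value.partial_cmp(&b_value).unwrap_or(std::cmp::Ordering::Equal)
    })
}

fn main() {
    let quotes = vec![
        Quote { price_nanos: 120, por: PorIndex { cpu_score: 80.0, ram_gb: 16.0, storage_gb: 500.0, bandwidth_mbps: 200.0 } },
        Quote { price_nanos: 90,  por: PorIndex { cpu_score: 40.0, ram_gb: 8.0,  storage_gb: 250.0, bandwidth_mbps: 100.0 } },
    ];
    if let Some(best) = pick_quote(&quotes) {
        println!("chosen quote: {} nanos, index {:.1}", best.price_nanos, best.por.composite());
    }
}
```

The point is only the shape: the node publishes its index with the quote, and the uploader’s script turns that into a price/performance ranking under its own rules.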
It wasn’t a single prompt by any means. I spent two hours of research developing that idea with multiple AIs (mostly Grok 3) via the OpenRouter API in the Zed editor. I have a much more comprehensive paper that is mostly done, but I stopped work on it. While it has some interesting properties - as you noted - it doesn’t really solve the problem I was hoping it might.
If you are really keen, I can zip up the work that was done (all markdown files) and give it to you.
If others think this has merit for Autonomi, I can finish doing the paper, but otherwise I’m not going to put in the additional time.