Update, 25th September 2025

Visionaries & Predictions

I once shared your confidence in David’s vision and predictions. I recognise he has some outstanding skills and principles, which have been vital in getting this project this far without selling out to VCs. We certainly need him and people like him, but I think there’s a tendency to accept everything such people say, to idealise them rather than weigh what they say critically. I’ve been guilty of that.

That’s especially likely when people we respect and like talk about things we don’t know anything about, or may lack the ability to understand. Or when they wow us with big ideas or incredible (note that word :wink:) futures. David talks a lot about such things as we well know! :laughing:

A mundane but very important example: he’s been promising that NAT was all but solved since 2014. And he has a history of not calling things right, the latest being AI. In time maybe some of what he’s predicted there will happen, but he’s, as usual, well out on timescales at least.

This is to be expected because predicting futures is very very hard, and it’s never been harder than now.

Delivery

Autonomi in general have not been good at delivering on specifics, including things they long recognised as important. Delivery of a product is hard in this case, but they do seem weaker in this area than I expected.

This is not an attack - I am here supporting the mission in practical ways and trying to keep folk grounded in reality. I think these are fair observations, and I mention them to explain why I no longer spend much time listening to marketing and product talk, or to predictions from David, and don’t take the Roadmap seriously. I get the drift :laughing:, if you like, but deal with what is.

What I Find Useful

It’s great to hear snippets of what’s nearly done, thanks @rusty.spork, and what is being worked on, but beyond that I can’t take much seriously.

Pragmatism

So I work with what we have, and was glad to see David write that they won’t be messing with the data types or breaking the API, so hopefully what I’ve done will, for the first time, not be torn up by future changes before the network gets users. You should see how much code I’ve written over the years that became useless after one of several big changes. I daren’t count those hours. Not all waste, because I learned so much and had fun in the process, but some really useful things have ended up in the Trash Can.

From a Community to Everyone :partying_face:

The best thing about this project so far has been this community. Hopefully it will soon start to grow into a community of individual users who won’t know anything of what we agonise over. I don’t get excited about corporate users, but the change to easy payments gives me hope for the former, at some point.

Privacy and Security

I do worry about the impact of that on the fundamentals though, and David seems to be saying he’s given up on significant parts of them now, which is still not acknowledged or characterised officially. I don’t expect an answer from Jim on that because I’ve already asked several times, but I will keep asking from time to time because it seems rather important to have an understanding of what the network delivers in terms of privacy for each use case.

11 Likes

From the Co-Founder of Ethereum

Full Blog Post: Why I support privacy

Recently, I have been increasingly focusing on improving the state of privacy in the Ethereum ecosystem. Privacy is an important guarantor of decentralization: whoever has the information has the power, ergo we need to avoid centralized control over information. When people in the real world express concern about centrally operated technical infrastructure, the concern is sometimes about operators changing the rules unexpectedly or deplatforming users, but just as often, it’s about data collection. While the cryptocurrency space has its origins in projects like Chaumian Ecash, which put the preservation of digital financial privacy front and center, it has more recently undervalued privacy for what is ultimately a bad reason: before ZK-SNARKs, we had no way to offer privacy in a decentralized way, and so we downplayed it, instead focusing exclusively on other guarantees that we could provide at the time.

Today, however, privacy can no longer be ignored. AI is greatly increasing capabilities for centralized data collection and analysis while greatly expanding the scope of data that we share voluntarily. In the future, newer technologies like brain-computer interfaces bring further challenges: we may be literally talking about AI reading our minds. At the same time, we have more powerful tools to preserve privacy, especially in the digital realm, than the 1990s cypherpunks could have imagined: highly efficient zero knowledge proofs (ZK-SNARKs) can protect our identities while revealing enough information to prove that we are trustworthy, fully homomorphic encryption (FHE) can let us compute over data without seeing the data, and obfuscation may soon offer even more.

At this time, it’s worth stepping back and reviewing the question: why do we want privacy in the first place? Each person’s answer will be different. In this post I will give my own, which I will break down into three parts:

Privacy is freedom: privacy gives us space to live our lives in the ways that meet our needs, without constantly worrying about how our actions will be perceived in all kinds of political and social games

Privacy is order: a whole bunch of mechanisms that underlie the basic functioning of society depend on privacy in order to function

Privacy is progress: if we gain new ways to share our information selectively while protecting it from being misused, we can unlock a lot of value and accelerate technological and social progress

5 Likes

It might be a long shot, I don’t know, but from the sounds of it, once the Communitas app is showcased it will show significant progress and feature sets, and should also potentially ease your security concerns.

Communitas is supposed to use the saorsa core libraries that David has been building in the background. The post-quantum cryptography sounds pretty cutting edge, though maybe it’s just meeting the standards for state of the art? Either way, one would need to test it against a quantum device, unless that is just done mathematically or theoretically? That’s far beyond me.

Once saorsa core is proven and battle tested, saorsa nodes would run parallel to ant nodes and eventually take over as ant nodes. It would still be Autonomi, and the basics would be the same, but the components - DHT, encryption, etc. - would be overhauled and optimized. At least that’s what it says on the tin.

What I fear is that that could potentially mess with the APIs, but that is simply a guess on my part. The components being the “same” but also altered via optimizations - I’m not sure how that changes things.

One thing I’ll say is that I have some of my own mental models and projections, looking forward and trying to invest and plan for the future, and you’re 100% right, it is hard. A lot of times things don’t happen as quickly as we’d assume, but now we hear rumblings of even the most optimistic people being surprised by the exponential rate of progress. After really letting the Spaces sink in, I can see how the plan or roadmap - saorsa labs and how it aligns itself with what is likely coming online very soon - is highly strategic. The biggest thing is that it has to be proven and marketed very loudly and quickly to draw attention and adoption. Otherwise, kind of as David alluded to, this short-term genius will easily be achieved by an AI in a short couple of years, and this project at least wouldn’t really stand much to gain.

I’m hopeful but patiently awaiting the release of Communitas, mobile SDKs, etc. Hopefully saorsa is all it’s cracked up to be and we’ll both be pleasantly surprised?

Either way, I think your assessment just now is very level headed and fair. Just chiming in a bit.

6 Likes

I’m a tad confused - was there an announcement somewhere about the native token being officially dropped or something? Maybe in the “Spaces” (which I will not be watching)? There seems to be a strong reaction here as if there was some event, but I’ve given up trying to follow with all the different platforms. Links or explanations appreciated.

Generally I agree with the assessment that the roadmap is poor. Shockingly so - it’s just a page of buzzwords with no dates.

I also am amazed at @JimCollinson’s refusal to address @happybeing’s questions above. How is it unreasonable (or something, I don’t know, because there’s no response) to ask for an update on the privacy situation, when it was a cornerstone of the project’s DNA for so many years?

I for one second the request for an update on the situation there. Many others too, it would seem.

If the project as a whole, internally, has decided: actually look, it’s too much work, and (/or) governments don’t like it, we’ll drop a few privacy bits here and there, and maybe someone will tack it on later, there would certainly be a logic to that. Would you not consider having the decency to say so, if that is what has happened?

Instead of implying, as @dirvine does there, that it’s simply a “small group of people” intent on having a “native token rant” that “will go on forever”. As if we’re some deluded loons only asking about the native token on account of some brain defect we can’t shake off. The issue isn’t the native token, it’s dropping privacy as a goal, and never coming clean and saying there’s been a shift in priorities! If the questions regarding privacy were actually addressed, the “rant” would effectively be over for many, I would guess.

How insulting to imply that we’re the ones being inconsistent or odd or bad sports or something, when it was literally repeated wall to wall here for years that this goal would never be budged on. Now you budge, majorly, and make little insinuating comments like that one while continuing not to address the questions and concerns.

I never thought I’d see this type of behaviour from this project. For years I said to people: whether they can do it or not, I don’t know, but I’m certain they’re going to try as hard as they can, and if they can’t do it, they’ll have the integrity to say so. I’m not making that up - word for word, or a close variant, that’s a point I’ve made to I don’t know how many people. Now not only do you budge, you do so in a haze of denial, ambiguity and uncertainty, essentially dropping the forum in the process, apart from milking it for developers whom you won’t answer questions from.

What in the world is going on here? Am I missing something?

14 Likes

You can listen to the spaces on X without an account. Not trying to pressure you but it was a worthy listen in my opinion. It took me several listens to catch everything and really let it sink in.

I am somebody that is pretty comfortable with change and pivoting, even at this stage of life but I definitely understand the confusion and reactions to being told “native is not a fight worth fighting”.

They have legal, development and technical overheads, exchanges (who don’t like custom setups or privacy coins), adoption, and much more to worry about. Basically what they are saying is that native won’t make much of a difference, and their agentic payment system will be more broadly open, use the current token in the background, still be mega cheap, and be able to leverage existing smart contract infrastructure (Ethereum / Arbitrum) without having to waste time and resources - instead focusing on networking breakthroughs and efficiencies that do focus on privacy and security for all by being “Post Quantum Secure”. I’m just relaying what I’ve gathered.

They also said that a community member could pretty easily create a native token with graph entries. Something I think @loziniak is considering? His Autonomi Community Token (ACT) allows the creation of altcoins that work on Autonomi, which seems like a great step in that direction.

I think checking the available resources is a good first step before getting too upset. Then reassess.

I’m on board so far. I’d still like a community made native token regardless though. I’m just not going to chastise Maidsafe for moving on with things they deem more important, that I can agree with.

11 Likes

My only real question: most of the roadmap GitHub links lead to the massive P2P lib that David said was abandoned because it became too complex. He then goes on to say that saorsa core pulls in everything necessary and is available to test, though the components it pulls in are going through some rapid changes. Does that mean that many of the listed roadmap features are integrated directly into saorsa core?

David also mentioned that there are millions of lines of code. I know how much he likes simplicity, and I’m sure it could be whittled down with time, but as he suggested, there are systems today like PayPal that have millions of lines of code. In fact, a recently discovered (inferior but existent) corporate competitor to PirateRadio is millions of lines of code. So that’s not to say it’s necessarily bloated, but rather perhaps just full of features, as indicated by the roadmap.

Does it make sense to have all of these features in core? No judgments, just curious on these fronts.

Most likely, P2P was just used as a GitHub placeholder when setting up the roadmap page.

2 Likes

OK, I understand now. I am logged in, but I was referring to the time 00:27, and I am on GMT+1, which is why I couldn’t find the post scottefc86 made at that time. Thank you. :slight_smile:

The Spaces was, once again, such poor communication:

  1. The term Roadmap leads one to expect something clear.
  2. David goes full on with some cutting-edge speculative possibilities for the future.
  3. The only graspable bit (for most) was about the token.
  4. Then David complains that people aren’t discussing anything else, and makes insulting remarks, even though this community has a long history of caring deeply about the token and being vocal about it.

Again, I think David does exactly the right thing in trying to see into future possibilities like that.

But the publication of a Roadmap was the wrong context in which to deliver the stuff he did. It would have been better to first present the next concrete actions the project is going to take, and after that present the R&D for the future.

Now it seems to me that we don’t have clear leadership for immediate actions, and are living on the hopium of a genius pulling gazillions of lines of code into a solution that “solves it all”.

5 Likes

Perhaps some more comparisons to underline just how big 67m lines of code is:

67m is a huge amount of code. I wonder what all of it is doing. Maybe much of it does nothing of use at all. Who knows? It sounds like no one does, which is certainly a brave new world.

6 Likes

I’m guessing most of the code is redundant or just very inefficient. Aka AI slop.

When Autonomi publish an application we will be able to make a judgement; until then I can only see it as improbable at best.

It has all the hallmarks of the worst kind of thing. I wont label it, but that’s how it looks, especially with David’s defensive and aggressive attitude.

It’s fair enough if he doesn’t want to engage, for whatever reason. It’s the constant, unreasonable blaming of others as he pops up now and again with improbable claims and predictions, insults us and then buggers off again.

It is so disrespectful, especially towards one’s biggest and most long term supporters, people who felt like we were his friends. So damaging and unnecessary. Not to mention inexplicable.

I do hope, honestly, @Nigel that your optimism and generosity of thought which I’ve always valued, prove to be well founded.

9 Likes

Me too :grimacing: (min characters)

4 Likes

Damn. I tried figuring out how many lines of code that is, but it’s bonkers. I’d assume inefficient slop too, tbh. But that doesn’t mean it couldn’t be optimized either.

2 Likes

Very interesting and informative, thanks!

During the Spaces I was trying to get a feel for it by comparing it to the number of individual letters in the Bible - around 3M. Quite a poor comparison, but it gives some kind of yardstick for the scale.

And I wonder how safe and secure it can ever be, no matter how much AI investigation you throw at it.

I mean, how wise is it to program this kind of stuff with that kind of approach?

2 Likes

I think my one question is still important. There are many features, components, dependencies, etc. all lumped into saorsa core. If they were separated, it might not seem quite as insane. And when all or many of these elements of the roadmap are basically lumped into one, then maybe that number seems more reasonable. The main question would be: why aren’t they separated?

For all of those other projects I listed, humans have written most or all of it. For this, it sounds like AIs have written almost all of it.

This is pretty much at the cutting edge of what AI is being used for with code. I’m not sure whether anyone has used it at this sort of scale.

It may be visionary, and the first of many such projects. Or maybe it is the reverse.

It’s a quantity of code that takes years, decades, of many a human’s endeavour. Can that be done by one guy, some AIs and a few months? On that, the jury must still be out.

9 Likes

I’m afraid that’s unlikely. Nobody understands this codebase, so that’s an unknown possibility in a field where there is nothing to go on.

The evidence is that LLMs create hard to find bugs and security issues.

So you have code which nobody understands - an enormous codebase that nobody can review and validate, and which is therefore unmaintainable.

You can keep firing LLMs at it, but with little understanding of the process or its effects. That’s a very time-consuming and expensive process, with no way to measure success because of the complexity of the output.

It’s possible of course, but implausible, and in reality this is experimental research.

As such it should not be released to end users until the research is understood and the approach validated, and I’ve no idea if that could be done.

I don’t know how this was presented in the Spaces, but I do not think it can be considered to be or turned into a product.

David is in his lab doing interesting and valuable experiments. I don’t believe he is building products.

7 Likes

Yes, Discord shows the time in the user’s timezone. So unless you are both in the same timezone, times are difficult, unless you know the poster’s timezone and want to do the timedance again.

I read that as a lifetime’s work of writing code. I’ve never tallied how many millions I’ve written, and I am afraid to do so. Let’s say a ton of it in assembler (some self-assembled); once you do assembler, the line count balloons. Still more in Pascal, C, and about 100 or more other languages. So 67 million by David is not unbelievable. I would not include AI-generated code, since that is more systems analysis than writing code.

3 Likes

I read it as the lines of code produced by Saorsa Labs for Communitas and core. It sounded like a very specific figure.

Even if it was a lifetime’s body of work, that’s well over 1m lines of code per year. An excellent developer may produce 100k in something like Java, but most don’t even manage half of that, in my experience.
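To put rough numbers on that claim (a back-of-envelope sketch only; the 40-year career length is my assumption, while the 67m total and the 100k/year figure for an excellent developer come from the discussion above):

```python
# Back-of-envelope: what a 67M-line lifetime total implies as a sustained rate.
TOTAL_LINES = 67_000_000           # figure quoted in the Spaces
CAREER_YEARS = 40                  # assumed career length (my assumption)
EXCELLENT_DEV_PER_YEAR = 100_000   # rough upper bound mentioned above

per_year = TOTAL_LINES / CAREER_YEARS
equivalent_devs = per_year / EXCELLENT_DEV_PER_YEAR

print(f"{per_year:,.0f} lines/year")                     # 1,675,000 lines/year
print(f"≈ {equivalent_devs:.0f} excellent devs, full time")  # ≈ 17
```

So even over a whole career, the quoted total works out to the sustained output of well over a dozen top-rate developers, which is the point being made here.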

Anyway, I gather that this code is very much research-based, and the team is working on much more pressing immediate stuff as a priority too. What happens in the lab may stay in the lab, unless it proves itself.

5 Likes

Communitas is 111k lines of code measured by this tool:
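The tool isn’t named in the post above, but for anyone who wants to sanity-check such figures themselves, a crude count can be done in a few lines of Python (a sketch, not the tool used here; dedicated counters like cloc or tokei filter blanks and comments, so they will report lower numbers):

```python
import os

def count_lines(root, exts=(".rs",)):
    """Crude total line count over source files under `root`.

    No blank/comment filtering, so this overstates the figure a
    dedicated tool would report. `exts` selects which files count.
    """
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += sum(1 for _ in f)
    return total

# e.g. count_lines("path/to/communitas", exts=(".rs", ".toml"))
```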

9 Likes

Interesting! Maybe 67m or whatever is something else? Hopefully it was just a lifetime figure like @neo suggests.

2 Likes