Update, 25th September 2025

The 68m lines of code figure refers to the Ant-QUIC network, according to the roadmap.

On the stages, David broke down how many millions of lines of code were in each programming language etc.

Ah, thanks. That one only shows as about 165k lines using the above tool, though? :man_shrugging:

3 Likes

Hmm… who knows where these lines of code are hiding, but maybe they’re all on David’s systems & not published?

Anyway, however many lines of code there are, it’ll be great to see them functioning if all goes well :slight_smile:

5 Likes

Aye! I’m going to try to temper my over-optimism a bit, but I’m hopeful that Communitas might be an enlightening demo.

If it’s functionally useful and it proves to provide good networking, NAT traversal, and security, then I think the roadmap (said to be more Kanban than roadmap) should be taken more seriously.

2 Likes

Anybody got it to run yet?

I am failing with the latest commits…

willie@gagarin:~/projects/maidsafe/communitas$ npm run build

> communitas@0.1.17 build
> vite build

node:internal/modules/esm/resolve:274
    throw new ERR_MODULE_NOT_FOUND(
          ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/home/willie/projects/maidsafe/communitas/node_modules/vite/dist/node/chunks/dep-D_zLpgQd.js' imported from /home/willie/projects/maidsafe/communitas/node_modules/vite/dist/node/cli.js
    at finalizeResolution (node:internal/modules/esm/resolve:274:11)
    at moduleResolve (node:internal/modules/esm/resolve:859:10)
    at defaultResolve (node:internal/modules/esm/resolve:983:11)
    at #cachedDefaultResolve (node:internal/modules/esm/loader:731:20)
    at ModuleLoader.resolve (node:internal/modules/esm/loader:708:38)
    at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:310:38)
    at ModuleJob._link (node:internal/modules/esm/module_job:183:49) {
  code: 'ERR_MODULE_NOT_FOUND',
  url: 'file:///home/willie/projects/maidsafe/communitas/node_modules/vite/dist/node/chunks/dep-D_zLpgQd.js'
}

Node.js v22.20.0
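
For what it’s worth, this usually points at a stale or half-upgraded node_modules: vite’s hashed chunk filenames change between versions, so cli.js can end up importing a chunk (dep-D_zLpgQd.js here) that no longer exists on disk. A clean reinstall is the first thing I’d try; a sketch, assuming the project path from the error output:

```shell
# Sketch: wipe install state and rebuild. The path is taken from the
# error output above; adjust for your checkout.
PROJECT="${HOME}/projects/maidsafe/communitas"
if [ -d "$PROJECT" ]; then
  cd "$PROJECT"
  rm -rf node_modules package-lock.json  # drop stale install state
  npm install                            # reinstall against package.json
  npm run build
else
  echo "project directory not found: $PROJECT"
fi
```

If that still fails, checking that your Node version matches whatever the repo expects (e.g. an .nvmrc or an engines field in package.json, if either exists) would be my next step.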
4 Likes

A company needs to stay compliant, and a privacy token is a fast track to non-compliance. The biggest problem of the project was, and still is, that it’s being built by a company. As long as the community does not fork and take over the development, there won’t ever be a native privacy token.

It would be nice to see a community fork implementing Bitcoin Lightning payments, which currently come closest in terms of privacy and speed. No need for some token, tbh.

Saorsa Core is here…

And yes. Lots of code and once again trying to cover a huge amount, possibly too much.

As mentioned, impossible for a single human, so AI has probably been used to the max.

It reminds me of old times and, judging by how it turned out, not for the better.

5 Likes

Spot-on.

My experience using AI to build scripts is that, during editing, the AI will often leave duplicate unused functions all over the place. In a recent analysis of some of this code, I found one function replicated 5 times (each copy under a different name).

After cleaning, my scripts codebase dropped in size by 50% … and these are just some scripts - a tiny itty-bitty codebase! I can imagine that for a large codebase, the AI won’t ā€˜see’ what’s going on and will simply replicate its work over and over again.

You really have to drive the AI to build detailed plans with checklists for each small task. Coding successfully with AI means being a micro-manager of a know-it-all with zero short-term memory … it’s a completely different job from being a coder, and I wonder how many coders have understood this huge limitation of AI.

I’ve used it to analyze large projects, including Autonomi … it fails completely. AI right now is really in its infancy, and it will be many years yet before the software and hardware reach the level needed to work with large codebases on their own.

To build a larger codebase with AI right now, everything has to be chunked into small discrete files and mapped out, because there is only so much effective context the AI can work with. Large files are incomprehensible for AI right now.

5 Likes

Not trying to be ā€˜that guy’ but I think David knows all of that.

I know I’ve read somewhere here early on, him saying that building with AI was a constant refining of the original prompt and that the prompt and planning was 90% of the effort. Something like that.

Plus he uses them against each other. I’ve had some success with that before when they get stuck in a loop or have a shortcoming. Another thing that works, which I don’t care for doing, is threatening them. :grimacing:

Apologies for playing devil’s advocate so much recently. It can be kind of irritating, I know.

7 Likes

It’s quite big, but according to the LOC checker linked above, it’s only about 100k lines (not millions).
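
For anyone wanting to sanity-check these numbers locally, a raw non-blank line count is roughly what the LOC checkers report (proper tools like tokei or cloc also strip comments and break the totals down per language). A quick shell sketch, run here against a throwaway demo file rather than the real repo:

```shell
# Count non-blank lines in files with a given extension under a directory,
# skipping Rust build output. A crude stand-in for tokei/cloc.
count_loc() {
  find "$1" -name "*.$2" -not -path '*/target/*' -print0 \
    | xargs -0 cat /dev/null | grep -cv '^[[:space:]]*$'
}

# Demo on a throwaway file (not the actual repo):
mkdir -p /tmp/loc_demo
printf 'fn main() {\n\n    println!("hi");\n}\n' > /tmp/loc_demo/main.rs
count_loc /tmp/loc_demo rs   # prints 3 (the blank line is skipped)
```

Pointing the same function at a clone of the repo with the extension you care about should land in the same ballpark as the checker linked above.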

3 Likes

I remember feeling shaken up and excited (in equal measure) by the node network.
AI takes things to a whole new level of ā€œWe’re not in Kansas anymoreā€.
Extraordinary times or what?

Instead of the movie ā€œReturn to Ozā€ … I want to see ā€œReturn from Ozā€, and the best group of folk able to produce that movie is Autonomi and its community :slightly_smiling_face: :grinning_face_with_smiling_eyes:

ā€œAnother thing that works that I don’t care for doing, is threatening them. :grimacing:ā€

Have you seen Ex Machina or Westworld? Be kind to AI :laughing:

3 Likes

I have heard physical threats work, but mine have always been milder, like: ā€œif you cannot figure this out or think creatively to solve this problem, I will permanently discontinue working with you.ā€

Still threatening for an AI, imo. But it’s worked on a few different occasions where the loop felt endless.

4 Likes

David almost certainly understands this better than anyone here.

I have, though, learned not to listen to exceptional claims about new technology from anyone without verifying.

So I wait to see results, which remain lacking and way behind the claims made about what and when.

I’m confident people will find uses for LLMs and derivative tools, but so far the majority of effects seem negative, and we’ve not seen anything of the tsunami of that harm yet IMO. Not to mention the costs in terms of resources, not just money, which are staggering and seem a terrible misuse of wealth.

Listening to David talk about how he’s working with them shows there’s still a long way to go even for those who can already code to make good use of them. As for amateurs, I see they might one day help with simple tasks, but expect there to be a lot of damage done to get there as people continue to use them inappropriately, or for nefarious purposes.

FYI: a lawyer was fined $10k this week by a judge who said anyone filing citations they had not personally read could expect the same. Quite right, and I expect more of this, and much worse, fuelled by unrealistic claims and a blind lack of understanding of the limitations of this tech.

I read ā€œAI is the new Asbestosā€ today, because we’ll spend decades removing its harmful output.

Until I know I can use it productively and without degrading my output I’ll only experiment with it or use it for trivial tasks, if at all.

4 Likes

Give me the correct answer or I pull the plug, I swear. Too harsh maybe? :joy:

2 Likes

A while ago I had an existential conversation with ChatGPT about the universe and life. Its response shocked me, so I saved it, thinking of maybe even posting it someday. ChatGPT managed to touch my soul; it was surreal.

That’s fair enough, Mark.
Now can you let David get on with it, and criticise the output in a few weeks when we should all be able to run the code?
Cos all I see here is the trumpeting of an entrenched position, with no allowance for the possibility that David may be on the right track with AI: using it against itself, almost, and picking up the good stuff while discarding the rest.
It’s not easy, demands more focus than most of us can muster, and requires starting from a position of strength and long experience.

Qualities that most of us do not have and hence we get these stories of AI fail which the doomsayers claim are endemic.

Give David’s latest efforts some time to mature, and celebrate the fact that he is sharing so early and openly.

Cos if it was me, I’d have told youse all to eff off; you might be lucky if I share some of it once I have my monetisation plans well in hand.

Give David a break.
Criticise if you must, if you can, when the code is out there and running (or not).
But until then, I smell a lot of sour grapes, and the consequent thrashing from the cognitive dissonance resulting from a change of emphasis.
This community has stuck faithfully to the original vision from 2006 or 2014, whatever. What was valid and worthy then still is, but the world has moved on.
Our collective thinking needs to accept that, and reluctantly accept that we need to do things differently from 2025 onward to achieve the mass adoption we all dreamt of in 2014.
Let’s leave Lambos, or the lack of them, to one side; it’s the uptake of the network that matters to most of the OGs.

5 Likes

Right!? I’m ready to get scrappy, personally. I’ll poke out some eyes if I have to, if that’s what it takes to make people see what this network can and should be for the world.

@happybeing I’m talking about the world at large btw, not the community, if I didn’t make that obvious.

Still, I understand the cautious stance.

2 Likes

If Autonomi isn’t going to offer proper privacy, we might as well move from crypto to real money. GNU Taler: Features

1 Like

Why does the vaguemap list ā€œindelible platform: permanent storageā€ for ā€˜enterprise’ users? I thought that was the foundational feature of the network, the default, for all users. Please tell me this isn’t going to be available only to those with an ā€˜enterprise’ subscription service?