The 68m lines of code figure is for the Ant-QUIC network, according to the roadmap.
On stage, David broke down how many millions of lines of code were in each programming language, etc.
Ah, thanks. That only shows as about 165k lines using the above tool, though?
Hmm… who knows where these lines of code are hiding, but maybe they're all on David's systems & not published?
Anyway, however many lines of code there are, it'll be great to see them functioning if all goes well.
Aye! I'm going to try to rein in my over-optimism a bit, but I'm hopeful that Communitas might be an enlightening demo.
If it's functionally useful and it proves to provide good networking, NAT traversal, and security, then I think the roadmap (said to be more Kanban than roadmap) should be taken more seriously.
Anybody got it to run yet?
I am failing with the latest commits…
```
willie@gagarin:~/projects/maidsafe/communitas$ npm run build

> communitas@0.1.17 build
> vite build

node:internal/modules/esm/resolve:274
      throw new ERR_MODULE_NOT_FOUND(
      ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/home/willie/projects/maidsafe/communitas/node_modules/vite/dist/node/chunks/dep-D_zLpgQd.js' imported from /home/willie/projects/maidsafe/communitas/node_modules/vite/dist/node/cli.js
    at finalizeResolution (node:internal/modules/esm/resolve:274:11)
    at moduleResolve (node:internal/modules/esm/resolve:859:10)
    at defaultResolve (node:internal/modules/esm/resolve:983:11)
    at #cachedDefaultResolve (node:internal/modules/esm/loader:731:20)
    at ModuleLoader.resolve (node:internal/modules/esm/loader:708:38)
    at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:310:38)
    at ModuleJob._link (node:internal/modules/esm/module_job:183:49) {
  code: 'ERR_MODULE_NOT_FOUND',
  url: 'file:///home/willie/projects/maidsafe/communitas/node_modules/vite/dist/node/chunks/dep-D_zLpgQd.js'
}

Node.js v22.20.0
```
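For what it's worth, this kind of vite error usually points at a stale or half-updated node_modules: vite's internal chunk filenames (dep-&lt;hash&gt;.js) change between releases, so a leftover install can reference a chunk that no longer exists. A clean reinstall is the usual first thing to try; this is only a sketch, and `/path/to/communitas` is a placeholder for your own checkout:

```shell
# Sketch: clean reinstall to regenerate vite's hashed chunk files.
# "/path/to/communitas" is a placeholder; point it at your checkout.
if cd /path/to/communitas 2>/dev/null; then
  rm -rf node_modules package-lock.json   # drop the stale install
  npm install                             # rebuild a consistent dependency tree
  npm run build
else
  echo "adjust /path/to/communitas to your checkout first"
fi
```

No guarantee it's the cause here, but it costs little to rule out before digging into the code itself.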
A company needs to stay compliant, and a privacy token is a fast track to non-compliance. The biggest problem of the project was, and still is, that it's being built by a company. As long as the community does not fork and take over development, there won't ever be a native privacy token.
It would be nice to see a community fork implementing Bitcoin Lightning payments, which currently comes closest in terms of privacy and speed. No need for some token, tbh.
Saorsa Core is here…
And yes, lots of code, and once again it's trying to cover a huge amount, possibly too much.
As mentioned, that's impossible for a single human, so AI has probably been used to the max.
It reminds me of old times and, judging by how it turned out, not for the better.
Spot-on.
My experience using AI to build scripts is that, during editing, the AI will often leave duplicate, unused functions all over the place. In a recent analysis of some of this code, I found one function replicated 5 times, each copy under a different name.
After cleaning, my scripts codebase dropped in size by 50%… and these are just some scripts, a tiny itty-bitty codebase! I can imagine that for a large codebase, the AI won't "see" what's going on and will simply replicate its work over and over again.
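As an aside, the crudest form of that duplication can be spotted without AI. Real duplicate-function detection needs AST-level tooling (e.g. jscpd, or pylint's "duplicate-code" check); this toy sketch only catches literal repeated lines across Python files:

```shell
# Toy duplicate detector: print non-blank lines that appear more than
# once across .py files under the current directory. Identical function
# bodies under different names will show up as runs of repeated lines.
find . -name '*.py' -exec cat {} + 2>/dev/null |
  grep -v '^[[:space:]]*$' |
  sort | uniq -d
```

Crude as it is, a burst of repeated lines is often enough of a smell to know where to look.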
You really have to drive the AI to build detailed plans with checklists for each small task. Coding successfully with AI means being a micro-manager of a know-it-all with zero short-term memory… it's a completely different job from being a coder, and I wonder how many coders have understood this huge limitation of AI.
I've used it to analyze large projects, including Autonomi… it fails completely. AI right now is really in its infancy, and it will be many years yet before the software and hardware reach the level needed to work with large codebases on their own.
To build a larger codebase with AI right now, everything has to be chunked into small, discrete files and mapped out, because there is only so much effective context the AI can work with. Large files are incomprehensible to AI right now.
Not trying to be "that guy", but I think David knows all of that.
I know I've read him saying, somewhere here early on, that building with AI was a constant refining of the original prompt, and that the prompt and planning were 90% of the effort. Something like that.
Plus he uses them against each other. I've had some success with that before when they get stuck in a loop or hit a shortcoming. Another thing that works, which I don't care for doing, is threatening them.
Apologies for playing devil's advocate so much recently. It can be kind of irritating, I know.
It's quite big, but according to the LOC checker linked above, it's only about 100k lines (not millions).
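For anyone who wants to sanity-check LOC claims themselves, a crude count is a one-liner. The linked checker presumably does something smarter, like excluding comments (tools such as tokei or cloc do, so they report lower numbers than this); `*.rs` here just assumes a Rust codebase:

```shell
# Crude LOC count: non-blank lines across Rust sources under the
# current directory. "|| true" keeps the exit code clean, since
# grep -c exits non-zero when it counts zero matching lines.
find . -name '*.rs' -exec cat {} + 2>/dev/null |
  grep -cv '^[[:space:]]*$' || true
```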
I remember feeling shaken up and excited (in equal measure) by the node network.
AI takes things to a whole new level of "We're not in Kansas anymore".
Extraordinary times or what?
Instead of the movie "Return to Oz", I want to see "Return from Oz", and the best group of folk able to produce that movie is Autonomi and its community.
> Another thing that works that I don't care for doing, is threatening them.
Have you seen Ex Machina or Westworld? Be kind to AI.
I have heard physical threats work, but my threats have always been milder, like "if you cannot figure this out or think creatively to solve this problem, I will permanently discontinue working with you."
Still threatening for an AI, imo. But it's worked on a few different occasions where the loop felt endless.
David almost certainly understands this better than anyone here.
I have, though, learned not to listen to exceptional claims about new technology from anyone without verifying.
So I wait to see results, which remain lacking and well behind the claims made about what and when.
I'm confident people will find uses for LLMs and derivative tools, but so far the majority seem negative, and we've not seen anything of the tsunami of that harm yet, IMO. Not to mention the costs in resources, not just money, which are staggering and seem a terrible misuse of wealth.
Listening to David talk about how he's working with them shows there's still a long way to go, even for those who can already code, to make good use of them. As for amateurs, I can see they might one day help with simple tasks, but I expect a lot of damage to be done getting there, as people continue to use them inappropriately or for nefarious purposes.
FYI: a lawyer was fined $10k this week by a judge, who said anyone filing citations they had not personally read could expect the same. Quite right, and I expect more of this, and much worse, fuelled by unrealistic claims and a blind lack of understanding of the limitations of this tech.
I read "AI is the new Asbestos" today, because we'll spend decades removing its harmful output.
Until I know I can use it productively and without degrading my output, I'll only experiment with it or use it for trivial tasks, if at all.
Give me the correct answer or I'll pull the plug, I swear. Too harsh, maybe?
A while ago I had an existential conversation with ChatGPT about the universe and life. Its response shocked me, so I saved it, thinking of maybe even posting it someday. ChatGPT managed to touch my soul; it was surreal.
That's fair enough, Mark.
Now can you let David get on with it, and criticise the output in a few weeks when we should all be able to run the code?
Cos all I see here is the trumpeting of an entrenched position, with no allowance for the possibility that David may be on the right track with AI, using it against itself and picking up the good stuff while discarding the rest.
It's not easy; it demands more focus than most of us can muster and requires starting from a position of strength and long experience.
Qualities that most of us do not have, and hence we get these stories of AI failure which the doomsayers claim are endemic.
Give David's latest efforts some time to mature, and celebrate the fact that he is sharing so early and openly.
Cos if it was me, I'd have told youse all to eff off; you might be lucky if I shared some of it once I had my monetisation plans well in hand.
Give David a break.
Criticise if you must, if you can, when the code is out there and running (or not).
But until then, I smell a lot of sour grapes and the consequent thrashing from the cognitive dissonance resulting from a change of emphasis.
This community has stuck faithfully to the original vision, from 2006 or 2014, whatever. What was valid and worthy then still is, but the world has moved on.
Our collective thinking needs to accept that, and reluctantly accept that we need to do things differently from 2025 onward to achieve the mass adoption we all dreamt of in 2014.
Let's leave Lambos, or the lack of them, to one side; it's the uptake of the network that matters to most of the OGs.
Right!? I'm ready to get scrappy, personally. I'll poke out some eyes if I have to, if that's what it takes to make people see what this network can and should be for the world.
@happybeing I'm talking about the world at large, btw, not the community, if I didn't make that obvious.
Still, I understand the cautious stance.
If Autonomi isn't going to offer proper privacy, we might as well move from crypto to real money. GNU Taler: Features
Why does the vaguemap have the indelible platform (permanent storage) for "enterprise" users? I thought that was the foundational feature of the network, the default for all users. Please tell me this isn't going to be only for those with an "enterprise" subscription service?