The new legislation changes the rules of the game for Autonomi

This is true, but the third-party LLM is there to “help” you if and only if you have a modern Apple machine with 32GB of RAM or above. That’s if I’m understanding the Discord correctly (which isn’t certain; the answers can be quite evasive, but I believe that’s what is being said).

The plan, as far as I understand, is to later develop the “agentic” chatbot first for Windows and then for Linux. Attempts to get the memory footprint down have already been made and have not worked out thus far, so it looks like that 32GB figure could stay, for some time anyway.

It is my understanding that accessing the network will still be possible without running this humongous, resource-intensive, black-box, proprietary, non-deterministic, “female” “assistant”. But elsewhere it seems to be implied that humans will be doing very little or no config tweaking.

Presumably, everyone who can’t run Fae will in fact be tweaking configs? And, presumably, that’ll be a very large percentage of users?

I don’t know, we’ll see I suppose.

EDIT: Thrilled to learn I was misunderstanding here. @Southside linked to this GitHub repo with more info.


I suppose it’s too optimistic to expect:

  • CEXs would be willing to work with DAG token
  • DEX would even be possible for DAG token
  • exchanging tokens on the forum as in the good old time of BTC @ MtGox
  • avg Joe would think it’s a great idea to run nodes 24/7 for a month upfront to upload x GB
  • organizations would even think of running nodes for tokens

I genuinely can’t see a workflow by which the native token would survive if it were born tomorrow, replacing the ERC-20.


Some more answers and hints can be found at Developers — Fae | Saorsa Labs.

LLM Backends
Fae always runs through the internal agent loop (tool calling + sandboxing). The backend setting selects the LLM brain:

| Backend | Config | Notes |
|---------|--------|-------|
| local | `backend = "local"` | On-device via mistral.rs (Metal on Mac, CUDA on Linux) |
| agent | `backend = "agent"` | Auto-select (local when no creds, API otherwise) |
Local model selection is automatic based on system RAM: Qwen3-VL-8B-Instruct for 24GB+ systems, Qwen3-VL-4B-Instruct for lighter hardware. Both support vision and are loaded with ISQ Q4K quantisation.
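Taking that table and the RAM tiers at face value, the selection logic can be sketched as follows. This is a minimal illustration in Python, not Fae’s actual code; the function names and the idea of passing the backend in as a string are my own assumptions for the sketch:

```python
# Sketch of the documented backend/model selection rules.
# Function names and the config-string interface are hypothetical.

def select_backend(config_backend: str, has_api_creds: bool) -> str:
    """backend = "local" forces on-device; backend = "agent" auto-selects."""
    if config_backend == "local":
        return "local"                      # on-device via mistral.rs
    if config_backend == "agent":
        # Auto-select: local when no credentials, API otherwise.
        return "api" if has_api_creds else "local"
    raise ValueError(f"unknown backend: {config_backend}")

def select_local_model(system_ram_gb: int) -> str:
    """Local model is picked automatically from system RAM (ISQ Q4K)."""
    if system_ram_gb >= 24:
        return "Qwen3-VL-8B-Instruct"       # 24GB+ systems
    return "Qwen3-VL-4B-Instruct"           # lighter hardware
```

So, under this reading, a 32GB machine with `backend = "agent"` and no API credentials would end up running Qwen3-VL-8B-Instruct on-device.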

Here’s how it all goes together:


And here’s what David said on Discord:

Just hold your nose and read it for yourself on Discord. :slight_smile:

Contributions are welcome - get tore in :slight_smile:

edit: from fae/docs/benchmarks/fae-priority-eval-2026-03-07.md at main · saorsa-labs/fae · GitHub


This helps to understand why Autonomi should be made 100% anonymous asap:

What we need is a functioning network. A network that loses data isn’t suitable for storage, not just money. This doesn’t necessarily require perfection - all storage systems lose data and all financial systems lose your money one way or another. So we need a functioning network that is good enough for both.

I don’t see these as separate. If it’s not good enough to store value it’s probably not good enough to store data.

On using an LLM to optimise things: this is getting ridiculous. We were told this would run on mobile devices in a few months, the rate of improvement was so fast. Some of us said (OK, just me :rofl:) that’s not going to happen. Three years later, you need a high-end Mac just to tweak a few nodes?

I’m going to suggest that for optimising nodes it would be just as effective to hard-code a set of rules into a node-runner control program, and run it on a Pi or even a mobile. @aatonnomicc’s script is a crude example, but it shows a better way to do this and was very useful.
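The hard-coded-rules approach can be sketched in a few lines. This is a toy illustration only: the thresholds, metric names, and the whole function are hypothetical, not anything from an actual node runner:

```python
# Toy "rules instead of an LLM" node controller. All thresholds are
# made-up examples; a real runner would read live system/node metrics.

def decide_node_count(current_nodes: int, cpu_pct: float,
                      free_disk_gb: float, ram_free_gb: float) -> int:
    """Return a target node count from a few fixed rules."""
    if cpu_pct > 85 or ram_free_gb < 1.0:
        return max(current_nodes - 1, 0)   # overloaded: shed a node
    if cpu_pct < 50 and free_disk_gb > 40 and ram_free_gb > 2.0:
        return current_nodes + 1           # plenty of headroom: add one
    return current_nodes                   # otherwise hold steady
```

A loop calling something like this every few minutes is well within the reach of a Pi, which is the point being made above.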

There may be other uses for an LLM but this seems unlikely to be a good one and completely misses the people who most need this kind of feature.


No you do not. That is sheer hyperbole, Mark.

I am far from happy with the way things are, but I’m not putting up with falsehoods like that.

The chart above suggests just about ALL devices can benefit from some sort of AI assistance. Whether they all need, or even want, that help is a different story.

Once again, I have to say this might have been communicated better, though I have no doubt David could turn round and say it is all in the docs he linked. And he would be 100% correct.

Because I can find these answers fairly quickly, and yet here we are with scare stories about needing expensive Apple kit.

You will get the best immediate experience if you have that kit. That’s what David is working with to develop this (I believe he has used a Mac for quite a few years now), so it’s only natural. However, the rest of us are not left far behind, and other platforms will catch up when David has time or someone else gets the finger out and does the necessary work themselves.


Sorry, I thought that’s what was being said. I don’t have time to scrutinise long posts full of screenshots, and I had read that Fae was only available on a high-end Mac. Is that not the case?

EDIT: turns out it is the case


Sorry yes, my mistake there, I thought that’s what was being said when I attempted to read some of the stuff on the Discord.

Was not an intentional scare story :smile: appreciate the links above.

I thought I’d read an exchange on the Discord where David said that, but apparently not. I will refrain from trying to parse the Discord directly in future; please discount my earlier point.


Here’s a complete record of what David and Jim have said on Discord.
Look towards the end of these files.
I’ll have Bux for you shortly as well

whoops - here is the correct paste


Maybe before folks pour cold water on Autonomi 2.0, they should make sure they have all the facts straight?

Autonomi 1.0 is open source too, ofc. I’m actually still running some nodes. If Autonomi 2.0 sounds so awful, why not rally a crowd to fork the OG network and set about adding the features you want?

You could add native token, insist AI is air gapped from any code changes, remove privacy concerns, etc.

Indeed, I believe one of the biggest critics of the one time storage has forked the code already. Good for them too, if they feel strongly about it.


I just failed to upload the Discord archives :frowning:

```
willie@gagarin:~/projects/discord_analyser/digests$ ant file cost dirvine._archive_2026-03-08.md
Welcome to ant
Logging to directory: "/home/willie/.local/share/autonomi/client/logs/log_2026-03-08_23-20-17"
Connecting to the Autonomi network...
Failed to connect to the network: Failed to bootstrap the client: Failed to obtain any bootstrap peers
Error: 
   0: Failed to connect to the network
   1: Failed to bootstrap the client: Failed to obtain any bootstrap peers
   2: Failed to obtain any bootstrap peers

Location:
   ant-cli/src/actions/connect.rs:109

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
```

which is a shame…

As for a fork and an OGs Comnet, we can absolutely forget Hetzner nodes - pricing is up 30-40%.

Shit is getting real.

Just how essential is antsnest.site right now?

Maybe the issue is, to a degree, a lack of clarity from the team: engaging on Discord rather than here, even when responding to things written here.


This is obviously a response to me so may I say: having the facts genuinely straight is very far from a trivial task when the information is spread around Discord chat messages, other docs on various sites, and the GitHub.

The thing I was referencing (the 32GB only-modern-Mac comment from David, and some statement about having tried to get it down to 8GB without success) is in the Discord, and it is still completely unclear to me what is being said. I have now had time to go back and re-read it to try to see where I got confused, and I’m still unclear on the answer.

I don’t want to go screenshotting things, and don’t care to resolve the mystery. The links to the docs Southside shared at least seem to suggest that my initial interpretation was off. As I said to Happybeing, I will refrain from attempting to parse the Discord.


Allow me to help in a small way by presenting a weekly summary of all that @dirvine and @Bux have said on Discord in the last week. Jim Collinson has been quiet, otherwise he’d be included as well.

I will produce these weekly for now and find a better solution than filebin for the long term.


Apology needed - looks like I was wrong

As it stands, Fae will need Apple hardware for the foreseeable future.

From the GitHub README:

Sorry - we could all have saved a lot of time by reading the source…

Right now Claude and Deepseek have differing views on how feasible a Linux port could be :slight_smile:


Perhaps @Traktion should have gotten his facts straight before throwing cold water at people… :smirking_face:

I kid, I kid! I couldn’t help myself.

No worries @Southside, even the best of us make mistakes. What was that other stuff you linked about then..? Like, what is the story with hardware requirements?


@southside

This was how MaidSafe marketed the network for the first 10 years. They called it ‘farming’ back then. It’s what got me invested in the project. Running nodes farmed tokens; storing data cost tokens. Real-currency on/off ramps aren’t necessary for a successful network, but if people want to sell tokens they could do so the same way bitcoiners traded without exchanges. Having to use multiple third-party exchanges and apps just to give a node any utility at all means putting the profit motive before the network-utility motive. I understand investors want profit, but that won’t happen if they put the cart before the horse. The native token is the horse; on/off ramps are the cart that follows. @happybeing is correct: this is exactly how Bitcoin did it. In any case, a functioning network with NO on/off ramp to real currency is better than a network that is extremely difficult to get on and off, and impossible to use at all until you do.


Not just any Mac: you need a newer Apple Silicon Mac. Even second hand these are costly. Everyone else is excluded.

Deepseek says

Based on the Fae README, here are the Apple hardware requirements:

**Processor (required)**

  • Apple Silicon (M1, M2, M3, etc.): Fae is specifically built for Apple’s custom chips, using the Neural Engine and GPU for on-device ML inference.
  • Intel Macs are NOT supported: the app is pure Swift + MLX, which runs natively on Apple Silicon only.

**Memory (RAM), tiered by model**

  • Minimum (8 GB): can run, but will use the smaller fallback models.
  • Recommended (12+ GB): can run the default Qwen3.5-2B-4bit model.
  • Vision features (24+ GB): required for the screen/camera understanding features (Qwen3-VL models load on demand).

**Storage**

  • ~8 GB free space for the initial download of all models (STT, LLM, TTS, VLM, embedding, speaker).

**Operating system**

  • macOS (specific minimum version not stated, but Apple Silicon support is required, so likely macOS 11 Big Sur or newer).

In short, to create a usable Linux version of Fae, you would ideally be targeting a machine with a powerful NVIDIA GPU (24GB+ VRAM) and at least 32GB of system RAM. This matches the performance class of the Apple Silicon Macs (with unified memory) that Fae was originally built for.


I didn’t discuss the facts. I suggested those quick to critique them should. Big difference.

By all means, go on an anti-MaidSafe rant, but please base it on a good understanding first.
