:IF: AntAI | Meet AntAI, the decentralized, open-source AI assistant built for privacy

:white_check_mark: Project Description

AntAI (also known as AutonomiAI or AutoAI) is an advanced AI virtual assistant and interaction platform that leverages open-source large language models (LLMs) to simulate human interaction, complete tasks, and answer user questions. It also integrates a code-assistance feature capable of generating code, debugging, and solving programming challenges.

Built on decentralized infrastructure, AntAI ensures private, secure, and censorship-resistant AI interactions. While users may choose to store conversation histories locally for convenience and continuity, AutonomiAI never collects or exploits user data for training or third-party purposes.

Unlike centralized AI systems that process and store data on proprietary servers, AutoAI prioritizes data sovereignty, ensuring that intellectual property and sensitive information stay under the user’s control, even in enterprise or highly regulated environments.

:sparkles: Key Features

  • Decentralized AI hosting: Powered by distributed nodes rather than centralized servers.

  • Open-source LLM integration: Supports the latest large language models for general-purpose AI use.

  • Privacy-first design: Zero logging by default, with optional local-only data storage.

  • Integrated code assistant: Supports coding tasks, from simple functions to debugging and full script generation.

  • REST API access: Developers can integrate AI functionality directly into their applications (see the sketch after this list).

  • Plugin marketplace: Users can activate domain-specific plugins (e.g., legal, research, healthcare).

  • Custom fine-tuning: Option to fine-tune local models for personalized responses.

  • Node operator rewards: Individuals who run infrastructure nodes are rewarded for processing AI tasks.

  • Enterprise white-label solutions: Fully customizable and brandable platform for businesses.
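As a concrete illustration of the REST API bullet above, here is a minimal sketch of what an integration could look like from Python. The base URL, route, and response shape are assumptions for illustration only; no API specification has been published yet.

```python
# Hypothetical sketch of calling an AntAI-style REST API from Python.
# The base URL, endpoint path, and payload/response fields below are
# assumptions; the project has not published an API specification yet.
import requests

BASE_URL = "https://antai.chat/api/v1"  # assumed endpoint, not confirmed

def ask_antai(prompt: str, model: str = "llama3.2") -> str:
    """Send a single chat prompt and return the assistant's reply."""
    resp = requests.post(
        f"{BASE_URL}/chat",
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]  # assumed response shape

print(ask_antai("Write a Python function that reverses a string."))
```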

Each of the names (AntAI, AutonomiAI, AutoAI) reflects a focus on intelligent autonomy, automated intelligence, and decentralized AI control.

:bullseye: Target Users

  • Developers seeking decentralized, privacy-focused AI APIs

  • Enterprises managing proprietary or regulated data

  • Individuals prioritizing data privacy and control

  • Blockchain/Web3 developers building decentralized applications

  • Researchers needing confidential AI tools

  • Startups developing white-label AI chatbots

  • Legal, healthcare, and finance professionals requiring compliance-ready AI

  • Node operators looking to monetize compute power

  • Educational institutions offering AI-based tools

:card_index_dividers: Data Types Processed

  • Natural language input (text queries and chat)

  • Programming code (e.g., Python, JavaScript, JSON)

  • Markdown, YAML, structured text formats

  • Chat session histories (optional, user-controlled storage)

  • Plugin-generated outputs (e.g., summaries, document analysis)

  • User-uploaded datasets for fine-tuning or prompt use

Project Kickoff

30/05/2025

Initial announcement, landing page launch, basic project description released:

  • Landing page and AI chat access are live:

AntAI - Chat

  • The domain AntAi.chat has already been acquired, and we’re currently working on endpoint configuration.

I appreciate your support!

15 Likes

So running LLMs locally is supported too? (A Llama 3.3 on a MacBook is totally possible, pretty performant, and produces high-quality output.)

For which OSes do you develop?

4 Likes

Sounds interesting.

What use does this make of Autonomi? I guess it reads user data from & stores history on Autonomi?

Is it a node that runs alongside an Autonomi node to add the ability to effectively sell AI computation resources to others on the Autonomi Network?

It’d be cool to be able to sell GPU resources in exchange for ANT through the network.

Looking forward to hearing / seeing more as this progresses.

4 Likes

Imagine one could rent CUDA units from the Autonomi network with ANT tokens…

4 Likes

Hi @riddim

Yes, running LLMs locally is absolutely within our roadmap and already partially supported in our current alpha. Right now, the alpha uses Ollama running Llama 3.2, and users can run multiple models based on their specific tasks and resource preferences, enabling efficient management of token resources.

Although our primary initial focus is cloud-based and decentralized infrastructure, we’re rapidly progressing toward fully supporting local deployment of powerful, performant LLMs such as Llama 3.3, even on consumer-grade hardware like MacBooks. This flexibility ensures high-quality, secure AI interactions without reliance on centralized services.
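For anyone who wants to try the local path today, here is a minimal sketch of chatting with a model served by Ollama over its local HTTP API. It assumes Ollama is running on its default port and the model has been pulled with `ollama pull llama3.2`; nothing in it is specific to AntAI.

```python
# Minimal sketch: chatting with a locally served Ollama model.
# Assumes Ollama is running on its default port (11434) and the model
# has been pulled beforehand with `ollama pull llama3.2`.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Summarize what a vector database does."}],
        "stream": False,  # one complete JSON response instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```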

We’re currently inviting community members to join our limited beta testing program to experience and shape this feature first-hand.

Our AntAI platform is currently developed as a web-based interface, ensuring universal accessibility from:

  • macOS

  • Windows

  • Linux

Future local-deployment functionality is designed with cross-platform compatibility in mind, so users on all major operating systems will be able to run AntAI locally.

2 Likes

Exactly! AntAI is closely integrated with the Autonomi network. It leverages Autonomi’s decentralized storage and computing infrastructure for securely handling user data, preserving conversation histories, and managing privacy. Specifically, user data and histories (when opted-in by the user) are encrypted, stored, and processed through Autonomi’s decentralized nodes, ensuring complete control and privacy for the end user.
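To make that opt-in flow concrete, here is a minimal sketch of the encrypt-before-store pattern, assuming encryption happens client-side so only ciphertext ever leaves the device. The `put_blob` callable is a stand-in for an Autonomi SDK upload call, since the exact SDK surface isn’t specified in this thread.

```python
# Sketch of an opt-in "encrypt, then store on Autonomi" flow.
# The encryption half uses the real `cryptography` library; `put_blob`
# is a placeholder for whatever upload call the Autonomi SDK exposes.
import json
from cryptography.fernet import Fernet

def store_chat_history(history: list[dict], put_blob):
    """Encrypt a chat history client-side, then hand only the ciphertext
    to `put_blob`. Returns the key (kept by the user alone) and the
    storage address `put_blob` reports back."""
    key = Fernet.generate_key()                       # never leaves the device
    ciphertext = Fernet(key).encrypt(json.dumps(history).encode())
    address = put_blob(ciphertext)                    # placeholder SDK call
    return key, address
```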

Additionally, AntAI is designed as a complementary service running alongside Autonomi nodes. This setup allows node operators within the Autonomi network to participate directly by providing AI computational resources (such as GPU processing power) to the broader community.

Indeed, one of our key upcoming features is the ability for users and node operators to effectively sell and monetize GPU and other computational resources directly within the Autonomi network, receiving compensation in ANT tokens. This not only enriches the AntAI ecosystem but also creates new revenue opportunities for Autonomi node operators.

Stay tuned; there’s plenty more exciting development and integration to come, and we’re thrilled to have your support and feedback as we progress!

3 Likes

Absolutely, @Hannu! That’s precisely the kind of capability we envision with AntAI integrated into the Autonomi network. Users would indeed be able to seamlessly rent CUDA GPU units (computational resources) directly from the decentralized Autonomi infrastructure, using ANT tokens as payment.

This approach not only empowers individuals and enterprises who need AI-driven computations on-demand but also creates a robust and vibrant marketplace where node operators monetize their idle or spare GPU capacities.

We’re actively exploring and planning this integration, so yes, your vision aligns perfectly with our roadmap!

2 Likes

Hey Team,

I want to emphasize that AntAI is more than just an idea: we’re already a fully operational project with dedicated hardware, active infrastructure, and robust core foundations firmly in place.

As of today, our AI agent has successfully implemented and enabled these critical skills:

  • RAG & Long-term Memory (a minimal sketch of the pattern follows this list) :white_check_mark:
  • View & Summarize Documents :white_check_mark:
  • Scrape Websites :white_check_mark:
  • Generate & Save Files to Browser :white_check_mark:
  • Generate Charts :white_check_mark:
  • Web Search :white_check_mark:
  • SQL Connector (currently off, pending evaluation) :gear:
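For readers unfamiliar with the first item: the core RAG loop embeds the query, retrieves the nearest stored chunks, and prepends them to the prompt. Here is a minimal sketch of that pattern using Ollama’s embeddings endpoint; it illustrates the general technique, not AntAI’s actual internals.

```python
# Minimal sketch of the retrieve-then-generate loop behind a RAG skill.
# Uses Ollama's /api/embeddings endpoint with the nomic-embed-text model;
# the in-memory "store" of (vector, chunk) pairs is purely illustrative.
import requests

def embed(text: str) -> list[float]:
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    r.raise_for_status()
    return r.json()["embedding"]

def retrieve(query_vec, store, k=3):
    """Rank stored (vector, chunk) pairs by dot product; fine for a sketch."""
    scored = sorted(store, key=lambda vc: -sum(a * b for a, b in zip(query_vec, vc[0])))
    return [chunk for _, chunk in scored[:k]]

def rag_prompt(question, store):
    context = "\n".join(retrieve(embed(question), store))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"
```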

Our next exciting milestone will be integrating AntAI with the Autonomi Network API, enabling us to rigorously test functionalities, validate decentralized operations, and fully leverage Autonomi’s ecosystem.

Now, more than ever, we need the engagement, feedback, and support of our amazing community to continue evolving and transforming AntAI into an industry-leading decentralized AI platform.

Thanks again to everyone for your dedication and passion. Let’s make this happen together!

Best, @Makkomaster

3 Likes

:rocket: AntAI Beta Invitation Is Here!

Hey community! :light_bulb:
We’re thrilled to open the doors to our exclusive Beta Test environment — a major milestone in our mission to build a powerful, private, decentralized AI platform.

This is your chance to be part of something groundbreaking from the very beginning.
If you believe in a future where AI is secure, open, and community-powered, this is your moment to show support and walk this journey with us.

:speech_balloon: Join the Beta:
:backhand_index_pointing_right: ANTAI | Meet AntAI the decentralized, open-source AI assistant built for privacy. Automate tasks, streamline coding, and chat securely knowing your data stays encrypted, private, and fully yours.

:heart: Show us your love. Share, test, and let’s build the future of decentralized AI together.

#AntAI #BetaLaunch #Web3AI #DecentralizedFuture

4 Likes

“invite is no longer valid” :pleading_face:

2 Likes

We know it’s hard to get votes and followers; we’re living that challenge ourselves.

The same things we’re trying to solve (trust, security, and privacy) are exactly what make it hard. People don’t want to click on external links, and we don’t blame them. And I don’t want to vote for myself! No way!

But we’re here, building every day, and we’re not just an idea; we’re a real project with real infrastructure and real goals.

If you support that, send a screenshot of your 2K vote, and we’ll give you 6 months of free, unlimited AntAI chat with one of the best models out there: LLaMA 3.2.

Only 6 beta spots left. Thanks for walking this path with us. :locked_with_key::fire: Impossible Futures

I’m starting to wonder if @makkomaster himself is an AI :sweat_smile::laughing::cold_face::exploding_head:

2 Likes

hey @makkomaster sorry, I wasn’t sure I should say something … but I think it’s only fair for everyone to be on at least a similar page …

i understand you have great plans and good intentions and everything … but as of now this is just anythingllm with the logo exchanged for AntAI, and with a pretty small model running centrally, hosted by you, if I didn’t get anything completely wrong

as a comparison, here’s the original anythingllm interface (just for others in here who don’t know it)

there’s nothing wrong with building on platforms and moving from there, but I think it’s a bit unfair to queeni to present yourselves in the light you do …

…queeni really tries to put Autonomi at its heart: they want to store chats on the network, and they seem to easily allow local Ollama models as well as being multi-platform natively … (they ofc don’t invent AI themselves either and are using a framework too … but their version 0 already integrates with autonomi …)

llama3.2 is a 3B-parameter model (2 GB in size) - just to put this into perspective - I’m running llama3.3 locally, which sounds similar but is a 70B (billion) parameter model a bit more than 40 GB in size.
…bigger is not always better … and smaller models run significantly faster … but especially when you drop below ~8 GB, the quality of the responses really starts to suffer (my experience ofc … will probably change over time, and my experience is limited too)

I don’t want to discourage you - as I said - starting somewhere with functionality is great - and I wish you luck with everything you do :slight_smile:
hearing a bit more about your plan forward from this starting point would be great. how do you want to evolve - local models? how do you plan to connect those? you’re running the service on your domain; is this meant to move onto autonomi? the service uses vector databases, an api for models, and a whole lot of logic that doesn’t run client-side as of now … it would e.g. be possible to create an official fork of anythingllm, use their interface (which is multi-platform), and modify the backend logic to store/retrieve certain elements to/from the network … and on decentralized AI: do you already have an idea of how to connect gpus running around the globe, and how do you prevent gpu runners from extracting private user data?

3 Likes

I’ve been wondering how to publicly show that Queeni is truly using decentralized AI - something like a transparent audit or verification. I’d love to hear how you’re approaching that problem on your side and maybe we can exchange ideas.

Will you provide a REST API that Queeni can connect to and use your decentralized AI platform?

Right now, Queeni can already work with Decentralized AI (Corcel), since they follow the same API standard as OpenAI - no gatekeeping, no drama, just good old JSON. :grinning_face_with_smiling_eyes:

4 Likes

Hi @riddim, as I mentioned last night during our mini Q&A, it’s truly an honor to receive questions like this from someone like you. Your insight and attention to detail push us to be better, and I appreciate that.

This also gives me a great opportunity to clarify things and shed some light on our project for everyone else who may have similar questions.

And just to be clear: I’m sending all my love and respect to the other amazing projects in this space. We’re all working toward the same goal: building something meaningful, secure, and community-driven.

Thanks a lot for your thoughtful message and I genuinely appreciate your honesty and interest in understanding our vision.

You’re absolutely right in raising the comparison to anythingLLM, and I want to clarify that our goal is not to rebrand anythingLLM, but to build a fully independent solution from scratch with Autonomi at the core of its architecture.

To break it down:

Yes, AnythingLLM Inspired – But Not Forked
We understand that any project needs a starting framework to move fast and test ideas, just as many projects (including those using Corcel) rely on third-party backends. But our intention is not to rely on third-party services indefinitely.

AnythingLLM serves as a functional and visual reference at this very early stage, and we are using their codebase for reference only. Our system will be rewritten from scratch to allow full control over the backend and its integration with the Autonomi network.

Real Integration with Autonomi – Not Just Branding
We’re currently building a client-server architecture where both services (frontend and backend) connect directly to the Autonomi SDK, but they’re not yet hosted within Autonomi itself.

That’s part of our roadmap:

  • Chat messages (“Chats”) will be stored on Autonomi’s decentralized network.

  • Authentication will be fully delegated to Autonomi’s identity layer: if a user is registered on Autonomi, they will be authenticated and authorized via the network.

  • Training data and user documents will also be stored inside Autonomi’s storage system, creating transparency, security, and economic value for node operators.

Yes, LLaMA 3.2 Is Small – It’s a Starting Point
We’re aware that LLaMA 3.2 is a smaller model (3B), and we definitely understand the limitations. It’s being used now to test functionality, speed, and chat flow. Our roadmap includes support for local model deployment, model switching, and running larger-scale models like LLaMA 3.3+ both locally and via distributed compute.

Vision for Distributed Compute (GPU Runners)
That’s a great question and an important one. Our plan includes:

  • A permissioned, opt-in GPU runner system, where users can contribute compute and receive ANT tokens in return.

  • Compute tasks that are split, sandboxed, and limited in scope to mitigate data leakage or abuse, especially for sensitive chat data, which will be encrypted or tokenized before reaching remote GPUs (see the sketch below).

  • Autonomi’s proof and trust layers, which are being considered for validating and securely distributing tasks.
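To illustrate the tokenization idea in the middle item above, here is a minimal sketch that scrubs one class of sensitive data (email addresses) from a prompt before it is dispatched to a remote runner, and restores it in the reply locally. The regex and helper names are illustrative placeholders, not AntAI’s actual pipeline.

```python
# Sketch: tokenize sensitive spans before a prompt reaches a remote GPU,
# and restore them client-side afterwards. Illustrative only; a real
# pipeline would need to cover far more than email addresses.
import re
import uuid

def tokenize_sensitive(text: str):
    """Replace email addresses with opaque placeholders and return the
    scrubbed text plus the mapping needed to restore the reply locally."""
    mapping = {}
    def repl(match):
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token
    scrubbed = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", repl, text)
    return scrubbed, mapping

def detokenize(reply: str, mapping: dict) -> str:
    """Swap the placeholders back for the original values."""
    for token, original in mapping.items():
        reply = reply.replace(token, original)
    return reply
```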

In Summary
We’re not trying to present something we’re not: this is a real project, actively being built by a small, dedicated team.

We’re not in competition with any community member (including queeni). In fact, we deeply respect what others are doing, and I give some of my votes to them as well.

Our end goal is not to run things on our own servers. It’s to become part of the Autonomi network, fully decentralized, with the right balance of privacy, utility, and community ownership.

I appreciate you taking the time to write your message, and even more for giving us the space to explain where we’re headed. You’re absolutely right to ask these questions; it keeps everyone honest and helps projects like ours stay focused and clear.

Feel free to reach out with any more questions. I’m always happy to answer.

4 Likes

Thanks for the thoughtful response — really appreciate your tone and approach.

Yes, we’re working on different layers of the ecosystem. You’re building an AI model and infrastructure, which is awesome and much needed. Queeni, on the other hand, is focused on helping people use AI in their daily lives — like a personal assistant who understands what you mean and actually does something about it.

The idea is to give users the freedom to choose which model they want Queeni to work with, based on their own preferences — privacy, speed, cost, control, etc. Whether it’s OpenAI, a local model, or something running on Autonomi in the future — Queeni should adapt to the user, not the other way around.

So I’d say: not competition — complementary. :light_bulb:

If your model becomes compatible with the OpenAI API spec and supports Function calling, then Queeni could absolutely use it out of the box. That’s the beauty of Queeni — it’s designed to be flexible and modular. As long as the AI can understand structured tasks and trigger functions, Queeni can connect to it and turn that into real-world actions for the user.
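For concreteness, “compatible with the OpenAI API spec” means any client that can override the base URL would work unchanged. Here is a minimal sketch using the official `openai` Python package; the base URL and the `set_reminder` function are placeholders invented for illustration.

```python
# Sketch: pointing an OpenAI-spec client at a non-OpenAI backend.
# The base_url is a placeholder; any endpoint implementing the
# /chat/completions route and the `tools` field works the same way.
from openai import OpenAI

client = OpenAI(base_url="https://example-decentralized-ai/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "set_reminder",  # hypothetical Queeni-style action
        "description": "Create a reminder for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string"},
                "when": {"type": "string", "description": "ISO 8601 time"},
            },
            "required": ["text", "when"],
        },
    },
}]

resp = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Remind me to stretch at 15:00."}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # the structured call a client would execute
```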

5 Likes