Update 30th January, 2025

Absolutely. The progress has been rapid.

I can imagine in 6 more months things will have moved on a long way further still, with apps, wallets, test native tokens etc. Doesn’t seem unlikely.

Your work is much appreciated… even if I don’t have the time to dabble at the moment it’s great to see apps and the ability to publish on the network.

Has there been any write up of how dynamic pricing works?

I understand it was added to fix the issues of low payouts during testing, but if it’s staying for launch, it’d be great to understand it to consider implications for the economics of the network.

If pricing isn’t based on node fullness, what is it based on? How will changes in demand / supply impact dynamic pricing to ensure pricing signals are clear to node operators and uploaders?

11 Likes

We are in a great place, tons of progress, the last year has been very impressive.

The hiccup is that, due to different opinions on what launch entails, all that progress does not match up to what many thought launch would be.

If none of us had launch in our heads, most would be yelling from the rooftops how well the team have done.

18 Likes

If I remember right a definitive launch was abandoned way back and what we have now was being called a soft launch, wasn’t it?

3 Likes

I think people hold our team to excessively high standards and strict expectations. Do they apply the same high standards to Bitcoin and Ethereum?

The most astonishing thing is that we can store decentralized data permanently with a single payment (a small amount of ETH and ANT). Isn’t this the ultimate goal that all crypto projects aspire to? It took 20 years to implement this, and now we are testing the results—so why is there so much negativity? I don’t understand.

No other project has even come close to enabling the permanent storage of decentralized data with privacy. We are witnessing the launch of the project that is closest to achieving this holy grail.

It would be better to just honestly shout “to the moon.”

19 Likes

Show me a software project that is trying to do something complex that didn’t miss deadlines? Very few, if any. One thing a deadline does is push everyone up a few gears to try their best to get it over the line. Not something everyone wants to be involved in because of the stress, but some love it.

Keep going Maidsafe, the trajectory sounds like it is in the right direction. Well done for getting us this far.

10 Likes

1 TB = 1 atto might have been interpreted as “let’s make you rich quick”, but it was more to say ‘let’s upload data to the Network as quickly as possible, without distractions’. Talking about upload costs is probably a distraction…

It’s because you don’t have the luxury of time: OpenAI has 300 million users.

Imagine millions of users whose AI is using the internet for them, while they themselves only interact with their AI (your message of privacy and security won’t matter anymore). If Computer Use and Operator are not a hint…

With uploading, time should not be wasted; with costs to upload, some might be hesitant. Overall it’s a redistribution of wealth. Let’s not lose sight that the focus should be uploading…

2 Likes

Yes! My sentiments exactly.

2 Likes

Staged launch was the term being used. But even the team has recently been calling the current testing “beta testing”. So I am not sure if the staged launch was more just for marketing and setting milestones in the beta testing program.

In any case we are still on a staged launch, with the TGE being a major milestone and the next major milestone/launch being persistent data.

I liken it to bootstrapping the network rather than a big bang that can be unpredictable.

5 Likes

If the problem is hard and the dev team says it will take 6 months, double it and add 50%. That is software. :wink:

4 Likes

Ain’t gonna happen, no way, no how.

The market for agentic AI mesh frameworks is exploding now, and it demands distributed agent frameworks with topically specialized LLMs delivering service to the agents at low cost. That ain’t Google.

The market for LLM creation is rapidly fragmenting as a result, with LLMs served up from lower-cost colo locations to reduce risk.

IMO the time is right for Autonomi to step into the distributed AI agentic framework fray: to offer distributed LLMs serving agentic frameworks, with agent co-dependencies, delivering what end users want (answers to their prompts) from a mesh of LLMs running on Autonomi on home network computers…

DeepSeek is just the tip of the opportunity. Take a look at BitNet (CN and MS) and the 1.58-bit matrix multiplication they are using: ternary math, a +1, 0, -1 weight matrix used to do relative cell value transformations. One does not need an NVIDIA A100 to generate the same type of response accuracy directly from integer arithmetic.
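To illustrate why (a minimal sketch, not BitNet’s actual kernel): when the weights are constrained to {-1, 0, +1}, a matrix-vector product collapses into additions and subtractions of the activations, so the expensive floating-point multiplies largely disappear.

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product with weights restricted to {-1, 0, +1}.

    Each output element is just the sum of activations where the weight
    is +1 minus the sum where it is -1 -- no multiplications needed.
    """
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

# Toy example: quantise a small float weight matrix to ternary values
# (roughly the absmean-style rounding the BitNet b1.58 paper describes, simplified).
rng = np.random.default_rng(0)
W_float = rng.normal(size=(4, 8))
W_ternary = np.clip(np.round(W_float / np.abs(W_float).mean()), -1, 1).astype(np.int8)

x = rng.normal(size=8).astype(np.float32)
print(ternary_matvec(W_ternary, x))        # add/subtract only
print(W_ternary.astype(np.float32) @ x)    # reference result via ordinary matmul
```

Real inference kernels pack the ternary weights and vectorise this, but the point stands: commodity home hardware can do the bulk of the work.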

Users don’t really care where the composite response came from or how the result was assembled (from several agents in sequence, aggregated and delivered by one agent to the end user). So that is the future Autonomi opportunity, which may be “The killer App”.

10 Likes

I’m unconvinced it was a positive marketing program either. It just led to eye rolls and confusion.

It’s history now though. Let’s hope TGE goes according to plan. In the long term, no one will care about launch hiccups, as long as it launches successfully.

10 Likes

I can certainly understand the frustration, and also the internal stresses I can see in many of the team. It’s also linked with many of the team doing a 9-5 type day as well as those grinding like crazy. So a mix of all sorts of feelings and hassles, but my conclusion is that it’s all moving forward, and moving forward at a steady pace.

There is a load I would love to get done as fast as possible, but I cannot get the energy right now. So I peck away at bits and pieces.

However, my feeling is that we need to push, and we need to launch and stay on the launch track. I think there is a lot of honest hopefulness at play as well, and I am glad for that, albeit it causes grief for sure.

In short I can see and totally understand all sides of this one, I get to look on mostly these days, but all I see is a massive group of people who all want to succeed and that is the most important thing.

36 Likes

Yes just my thought :grin:

1 Like

I’m enjoying some of the tweets going out, but have a question about this one: x.com

It states:

Everything that’s ever been public online is potential training data for AI models.

… and then asks whether we should be able to opt out of our data being used for AI training.

My question is this; won’t public data be public forever on Autonomi and therefore available for anyone to train AI models with if they want?

How could someone ‘opt out’ of their forum posts on Autonomi being used for AI training if they want it visible to all users, and therefore bots and scrapers etc?

I can understand why Autonomi may lead to fewer leaks of private data that end up getting into training data etc, but I can’t see the advantage for public data, and an explanation would be appreciated :slight_smile:

1 Like

IMHO, if you make the data public, what the public does with the data is their business and no longer yours.

6 Likes

On the web, where domains are owned by people/companies, the web site has rules that can be set, and in theory the AI scrapers are supposed to honour those rules, so certain data can, in theory, be excluded.
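The usual mechanism is a robots.txt file at the site root. For example, a site owner who wanted to keep OpenAI’s crawler (GPTBot) out entirely while allowing everything else could publish something like:

```
# https://example.com/robots.txt
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
```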

Of course that assumes the scrapers honour the rules LOLOLOLOLOLOL

And some may think that Autonomi is owned by people/companies and divided up into domains that people own. LOL, nope, Autonomi is owned by nobody and by everybody on earth. Thus no such rules can exist for public data, whether scrapers follow the rules or not.

Private data of course is viewable only by the uploader (and apps they run) and anyone they give the datamap to. If the datamap is given away, then it may end up public at some stage if someone uploads the datamap somewhere.
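To make the datamap idea concrete (a simplified Python sketch, not the actual self-encryption scheme Autonomi uses): data is split into chunks, each chunk is encrypted and stored on the network under its content address, and the datamap records those addresses plus the keys needed to decrypt and reassemble them. The encrypted chunks can sit in public view, but without the datamap they are unreadable.

```python
import hashlib
from dataclasses import dataclass

CHUNK_SIZE = 1024  # illustrative only; real chunk sizes differ

@dataclass
class ChunkRef:
    address: str   # content address the encrypted chunk is stored under
    key: bytes     # key needed to decrypt this chunk

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Stand-in cipher for illustration; the real network uses proper encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def upload(data: bytes, network: dict) -> list[ChunkRef]:
    """Split, encrypt and store chunks; return the datamap."""
    datamap = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).digest()             # content-derived key
        encrypted = xor_cipher(chunk, key)
        address = hashlib.sha256(encrypted).hexdigest()  # content address
        network[address] = encrypted                     # visible to everyone
        datamap.append(ChunkRef(address, key))           # useless without this
    return datamap

def download(datamap: list[ChunkRef], network: dict) -> bytes:
    """Anyone holding the datamap can reassemble the original data."""
    return b"".join(xor_cipher(network[ref.address], ref.key) for ref in datamap)

network: dict[str, bytes] = {}                  # stand-in for the public chunk store
dm = upload(b"hello autonomi " * 200, network)
assert download(dm, network) == b"hello autonomi " * 200
```

Handing someone the datamap (or uploading it anywhere readable) is therefore equivalent to handing over the data itself.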

2 Likes

I’m still unclear on how Autonomi would be better than the current Internet in preventing scrapers accessing public information.

Can you clarify, or are you suggesting it won’t be as no rules can exist to protect public data from AI scrapers?

If there is no advantage, do you understand the angle from the tweets Autonomi is putting out around this issue?

For public data it has no protection. It is not better. If you were looking for that in my post, then that is why you were unclear; my post was saying it is not better, but that the internet today has some (pseudo) protections, unlike Autonomi.

The only consideration is how to find public files on Autonomi, and that might make it difficult for scrapers. Once found, though, there are no protections.

1 Like

If a person uploads data as public, could they later decide to make it private? Of course, if someone else has already copied the data, then the cat’s out of the bag.

1 Like

The datamap being uploaded is what makes it public. Since it’s persistent data, you cannot remove it from public.

What may happen is that there will be apps that collate public files, and a file could be removed from their indexing. The file is still public, but perhaps not so easy to find.

2 Likes