I feel that the personal AI feature is the right thinking from a launch / marketing perspective. It seems like a perfect fit: relevant and cutting edge. It will spark a lot of attention. As Happybeing has mentioned, it also must be done right; there needs to be some discussion. But I’m very excited!
Armageddon! Sounds good!
Part of what I am working with is fine-tuning to particular APIs, so no hallucination-type stuff there. Initially, I expect a screen to show you exactly what will happen and what you will spend, which you have to agree to. As time goes on and no errors are apparent, we can move it forward.
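To make that concrete, here is a rough sketch of the kind of confirmation gate I mean (all names are hypothetical, nothing here is real MaidSafe code): the assistant can only propose an action as plain data, and nothing runs until the user has seen exactly what will happen, what it will cost, and explicitly agreed.

```rust
// Hypothetical sketch, not actual MaidSafe code: every action the assistant
// proposes is plain data that must pass explicit user approval before it runs.
use std::io::{self, Write};

/// A fully described pending action: what will happen and what it will cost.
struct ProposedAction {
    description: String, // human-readable summary shown to the user
    cost_snt: u64,       // exact spend, known before anything executes
}

/// Show the user exactly what will happen and ask for explicit agreement.
fn confirm(action: &ProposedAction) -> bool {
    println!("The assistant wants to: {}", action.description);
    println!("This will spend: {} SNT", action.cost_snt);
    print!("Proceed? [y/N] ");
    io::stdout().flush().unwrap();
    let mut answer = String::new();
    io::stdin().read_line(&mut answer).unwrap();
    answer.trim().eq_ignore_ascii_case("y")
}

fn main() {
    let action = ProposedAction {
        description: "upload report.pdf to your vault".to_string(),
        cost_snt: 3,
    };
    if confirm(&action) {
        println!("Executing: {}", action.description); // real action would run here
    } else {
        println!("Cancelled; nothing was done.");
    }
}
```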
I am not sure we should give it a name yet
I would like it to be an informal decentralised partnership of sorts. Kinda like: if SAFE users and community like the simpleX users and community, and the solution works flawlessly between them, then I think some distance is good. Sort of exploring what a real decentralised partnership would be, perhaps?
Yes, this is a time we have never seen in SAFE, and it feels good, very good.
It’s important for us (I think) that we recognise AI is happening (has happened) and is here, but the corporate world cannot control it or be those who tune / align it. The approach I am looking at is purely local AI in your safe vault. This does not speak with other AIs or any global AI.
So a secure local enclave with the power of the LLMs we see now, but tuned to your data alone and using the APIs of apps you choose (with sensible defaults).
This local AI will never leak data out, by design, and is specifically controlled and aligned to your needs and wishes. This gives the individual the power to be creative and efficient, but hopefully protects us from a global human alignment with a global AI. So it keeps us out of the matrix, if you like.
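As a rough illustration of “never leaks by design” (hypothetical types, just a sketch of the idea, not the real architecture): the assistant is built holding only a local model and the vault, so there is simply no code path to any remote service.

```rust
// Hypothetical sketch: the assistant owns only local resources, so there is
// no code path through which vault data could reach a remote service.

/// Stand-in for an on-device language model loaded from local storage.
struct LocalModel;

impl LocalModel {
    fn generate(&self, prompt: &str) -> String {
        // A real implementation would run inference locally here.
        format!("(local inference over: {prompt})")
    }
}

/// Stand-in for the user's encrypted vault on disk.
struct Vault;

impl Vault {
    fn search(&self, query: &str) -> Vec<String> {
        vec![format!("vault note matching '{query}'")]
    }
}

/// Holds a model and a vault handle and nothing else: no socket, no HTTP
/// client, no global AI endpoint. Leaking is impossible by construction.
struct LocalAssistant {
    model: LocalModel,
    vault: Vault,
}

impl LocalAssistant {
    fn answer(&self, question: &str) -> String {
        let context = self.vault.search(question).join("\n");
        self.model.generate(&format!("{context}\n\nQ: {question}"))
    }
}

fn main() {
    let ai = LocalAssistant { model: LocalModel, vault: Vault };
    println!("{}", ai.answer("my written opinions on XXX"));
}
```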
This is what I would expect: if I make use of an assistant, human or otherwise, signing off on something that I have not verified to be correct is more my problem than theirs.
Of course I may decide that certain stuff is not important enough to check and errors slip through, but again the buck stops with me.
I agree this should be done but don’t believe it is adequate for various reasons.
People see what they want and expect to see, so you have to be very careful when checking, which takes focus and time. So I think checking is inherently error prone, and doing so effectively may be more time consuming than doing the operation yourself.
Another problem is trusting Armageddon to do what it is showing you it is going to do. That might be solved by taking the action out of its hands with careful design, but there’s a risk that the implementation may overlook things like this, because such risk-averse design is a difficult task and a rare skill.
P.S. saying the buck stops with me is true, but we’re not just talking about you. Blaming the user when you give them tools that put them at risk is not a solution, and the buck really stops with those providing the tools.
EDIT: A big risk with human-like LLMs is that people tend to trust them.
Yes, this is where I agree with a lot of what appears to be skepticism on your part. I don’t think the tech is the issue; it is the sales pitch and some folks’ expectations.
The push to make it sound human is wrong.
AI, Artificial Intelligence: it is all in the name. Few people consider Artificial Flowers to be the real deal.
Perhaps in time it may be True Intelligence.
I’m not attributing this risk to how it is being marketed but to what we already know is an inherent risk of human-like software, intelligent or not.
People in general treat software with a human-like user interface as if it were human, project human qualities onto it, and are wildly over-trusting.
MaidSafe can put large warning labels all over such a product and people will still do this because it’s a powerful psychological effect.
I don’t know if it’s possible to design such a user interface that avoids this, but I think that should be a key objective and a deal breaker if not successful.
The more I think about this proposal the riskier I think it will be, but as always I’m happy to wait and see, and to help if I can.
One of the risks is time. This is unknown, personal LLMs untried, and could take a long time or not be achievable.
It also could be deployed with serious unanticipated negative consequences, so I think it’s important to be skeptical even though I would love this to be real and successful.
This is a good approach. However, well-designed software is something I am not sure we have ever seen. Our approach to it is not good enough that somebody who has never used a computer can use it. I am not a fan of software as we know it, really, and I have this deep dislike of directories and files as an interface to knowledge.
However, with an LLM you can do things that human-created interfaces would find difficult. Some examples:
you
Send Mark 100 SNT
LLM
I can do that, but Mark has sent you a bill for 10 SNT. Do you want to send him that amount, or do you want to send the 100?
you
Send Alison a message saying I will be at the dinner tomorrow night
LLM
I can, but Alison has requested dinner the following night. Do you want me to ask her to reschedule, or do you want to accept for the following night as she expects?
you
find out my written opinions on XXX
LLM
Here is what you have said in summary
LLM
Would you like the original sources highlighted for the above answer?
you
Can you write an introduction letter to YY about collaborating on SAFE messaging and can you include the API spec and an example app showing YY running on SAFE
LLM
Yes, here it is
— conversation continues through iterative design
you
What’s the world saying about superconductors at room temp ambient pressure
LLM
Summarises the pros and cons etc.
you
filter using cited sources and my trusted sources
LLM
Here are the responses from sources you have told me you trust, plus responses citing peer-reviewed papers, with links to those papers
you
how does this relate to my own vault’s data so far
LLM
It contradicts the following xxxxxx
and agrees with the following yyy
Would you like me to update your references and add in the new findings?
(last one would use a search which the search company could see though)
All of the above can be a mix of voice, text etc. but the interface is no more than a textbox.
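To sketch how the first example above could work under the hood (hypothetical names again, not a real API): the assistant turns the request into a structured action, cross-checks it against vault data, and surfaces any mismatch rather than acting silently.

```rust
// Hypothetical sketch of the "Send Mark 100 SNT" exchange above: the assistant
// cross-checks the request against vault data and surfaces any mismatch
// instead of acting silently.

/// A bill found in the user's vault.
struct Invoice {
    from: String,
    amount_snt: u64,
}

/// Stand-in vault lookup: does this contact have an outstanding bill?
fn outstanding_invoice<'a>(vault: &'a [Invoice], contact: &str) -> Option<&'a Invoice> {
    vault.iter().find(|i| i.from == contact)
}

fn propose_payment(vault: &[Invoice], contact: &str, amount_snt: u64) {
    match outstanding_invoice(vault, contact) {
        Some(inv) if inv.amount_snt != amount_snt => println!(
            "I can do that, but {contact} sent you a bill for {} SNT. \
             Send that amount, or the {amount_snt} you asked for?",
            inv.amount_snt
        ),
        _ => println!("Ready to send {amount_snt} SNT to {contact}. Confirm?"),
    }
}

fn main() {
    let vault = vec![Invoice { from: "Mark".to_string(), amount_snt: 10 }];
    propose_payment(&vault, "Mark", 100);
}
```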
I am keeping out of that part and focussing on what’s real right now and likely to be real very soon. Not a cop-out, and I probably am more concerned by LLMs than most, but I will use them for us, to protect us where we can.
Very nice to see how far this project has come!
I have to say, this AI seems a bit like feature creep. I suspect the internet will be lousy with AI assistants soon, they’ll probably be built into computer OSes, and I don’t see why the SAFE Network would need its own.
Maybe it would be better to just focus on developing a good API so any AIs that exist in the future can easily interface with it.
I understand that both the AI assistant and the chatbot would be additional options with the possibility for the user to override them?
I think the “don’t complicate” principle and the default simplicity in SafeNet development are the best signpost for the future of the project, so I would generally not be in a hurry to adopt this artificial “shotgun” trend. I think that humanity is already lost enough in current technology, and that it is simplicity that will prevail when the dust settles and emotions have cooled. In my own projects I consciously do not get carried away by the current AI trend.
However, “never say never”: if you believe that the use of AI will not complicate and prolong the implementation time of SafeNet, but actually help it, then I trust in your wisdom and the decisions of MaidSafe.
Control over a LAN, or designated IDs for IoT / smart home etc., would be epic.
And in that response we see the problem I highlighted with human attitudes towards a human-like user interface. All those examples are possible, but all are also potentially errors.
Saying you’re staying out of possibly the most important risk question, or accepting all those as if they are correct, is a very worrying approach IMO.
How can users judge which things to be suspicious about when they naturally want to trust such a program, because it seems human, and almost always gets it right? This may be soluble, but only if it is acknowledged and taken seriously.
I think ignoring those kinds of questions is dangerous for users and the project.
I am not seeing it, I suspect crossed wires.
This is not a human-like computer interface at all, though; it’s data manipulation via a console.
Agreed, errors are possible.
I am staying out of the philosophy areas and focussing on what’s possible.
I am accepting those in this manner:
- If they are wrong, then they are provably wrong
- If they are provably wrong then folk won’t use it
In the above examples they are all provable:
- Show me the message saying 10 SNT
- Show me the diary where it’s the night after tomorrow
and so on (a rough sketch of this below).
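Here is what “provable” could mean in practice (hypothetical types, a sketch of the idea only): every statement the assistant makes carries a reference back to the vault item it came from, so “show me the message saying 10 SNT” is a single lookup.

```rust
// Hypothetical sketch of "provably wrong": every statement the assistant
// makes carries a reference back to the vault item it came from, so the
// user can always ask to see the source.

/// An answer fragment paired with the id of the vault item it came from.
struct Cited {
    claim: String,
    source_id: String, // e.g. a message id or data address in the vault
}

/// Stand-in vault: look up the original content behind a citation.
fn show_source(vault: &[(String, String)], source_id: &str) -> Option<String> {
    vault
        .iter()
        .find(|(id, _)| id.as_str() == source_id)
        .map(|(_, content)| content.clone())
}

fn main() {
    // Stand-in vault of (id, content) pairs.
    let vault = vec![(
        "msg-42".to_string(),
        "Invoice from Mark: 10 SNT".to_string(),
    )];

    let answer = Cited {
        claim: "Mark billed you 10 SNT".to_string(),
        source_id: "msg-42".to_string(),
    };

    println!("{} [source: {}]", answer.claim, answer.source_id);
    // "Show me the message saying 10 SNT" is then a single lookup:
    if let Some(original) = show_source(&vault, &answer.source_id) {
        println!("Original: {original}");
    }
}
```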
I am not getting the “we think it’s human” vibe from here, Mark. I have zero doubts it’s not human, and I don’t really think the average person does think it’s human.
Then onto the what-if scenarios: what if they trust AI (Google), what if AI manipulates them (Amazon, eBay etc.), what if AI passes off false info as real (Twitter et al.). In all of these cases humans can be persuaded/manipulated etc., and my core point here is this:
- Why are they manipulated?
– Perhaps because of ease of use, the simple way, laziness, lack of understanding and so much more.
More importantly they trust what the gov and corporations want them to trust.
So then you have local private LLMs driven by people and aligned to individuals by individuals.
And there we are, as far as I see it. We use the power, but try to control its reach, and don’t become its fodder or, more importantly, the fodder of the corporations who align it. Either way it’s here, and we either do something or sit back and … well, we know how that goes, look at today. The world’s screwed up and folk are confused.
We need a different approach.
You may be correct. If I may interject with a “but”: if we don’t embrace the latest tech, the risk is being left behind.
Old school on the surface, no matter the vast amount of groundbreaking work that went into the foundation.
My fear is also this.
We and devs build loads of apps, tons of them. LLMs get more viral (Google is now including them, Bing has, many many apps do). What LLM input is doing is much of the old apps’ work: redoing images, making movies, data science and much, much more.
Would all that work on apps right now (say it takes 5 years to build a big app ecosystem) be wasted? Could we provide the functionality quicker, and to more people, using a much simpler interface that is more aligned to human interaction (speaking in English or whatever mother tongue)?
These things have been on my mind a lot recently, and the LLM area is going at breakneck speed. I have been running and using local LLMs for several months now, and impressive is not even close. Now some projects are looking at integrating actual APIs and using all those apps to provide answers. Not the apps’ interface, but the apps’ logic.
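As a sketch of what “the apps’ logic, not the apps’ interface” might look like (hypothetical names, not any real project’s API): apps register their logic as named, typed functions, and the local assistant calls them on the user’s behalf.

```rust
// Hypothetical sketch: apps expose their logic (not their UI) as named,
// typed functions that the local assistant can call on the user's behalf.
use std::collections::HashMap;

/// A piece of app logic the assistant may invoke: takes a plain-text
/// argument, returns a plain-text result.
type Tool = fn(&str) -> String;

/// Stand-in for one app's logic, registered without any of its interface.
fn resize_image(arg: &str) -> String {
    format!("resized {arg} to 800x600") // a real app would do the work here
}

fn summarise_notes(arg: &str) -> String {
    format!("summary of notes about {arg}")
}

fn main() {
    // The registry the assistant consults when the user asks for something.
    let mut tools: HashMap<&str, Tool> = HashMap::new();
    tools.insert("resize_image", resize_image);
    tools.insert("summarise_notes", summarise_notes);

    // The assistant maps a request to a tool name plus an argument
    // (a real system would get this mapping from the LLM itself).
    let (tool_name, arg) = ("resize_image", "holiday.png");
    if let Some(tool) = tools.get(tool_name) {
        println!("{}", tool(arg));
    }
}
```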
So there is likely a way here to get to market much faster with products we cannot imagine.
Or we build a browser and youtube replacement and so on and hope LLMs die.
How much trust have we (humans) placed in software that doesn’t interact like a human? We trust the humans who programmed the software. Like you’ve said, these LLMs are limited to their programming and are not AI. So I guess the question is: do you trust MaidSafe to programme an unbiased ‘AI’ without motive?
I agree. LLMs are doing the opposite of vanishing; more like replacing things like Google, fast. So now: build a better AI, do it right, do it differently. Personal AI is way different; there’s no going back now. Things have changed since this project started. Accept the change, but do it the right way, and build something special. This is not feature creep, this is now a necessity. Just my thoughts.
If you had to recommend one to try?
When I first tried ChatGPT late last year, it became clear: adopt LLMs right now (or rather the day before) or get left behind. Many people will have a hard time understanding that the world changed overnight when LLMs came out. Things will develop at a very fast pace in the coming years. There is no turning back now; adapt or get left in the dust. Excellent move implementing AI into Safe, it is the right way forward.
It isn’t about thinking it is human, but about having a response towards it as if it were human.
When I say “human-like UI” I mean that it appears to listen and respond as a human would. And as the first such user interface, Eliza, demonstrated, humans readily behave as if they are interacting with a human even when they know that it is just software.
I don’t think you see the problem here. It was one of the main points made in the article which I posted, and the reason I thought that you hadn’t read it when you replied to that is because you didn’t seem to address the issues it raised. And I think that is still true.
I’m continuing to highlight the issues that article raised, in this new context, and you don’t seem to understand what I’m saying.
So that’s why I commented about your response above. Is there anyone on the team who has reservations about, let’s call it an LLM UI, because of the issues I’ve raised and which are explained in detail in the article?
I’m not talking about philosophy here, and I don’t understand why you think that’s what I’ve been doing.