Real alternatives to LLMs for AI and maybe AGI

Many big companies and ‘experts’ are pushing the idea that LLMs are the future of AI (at least in the medium term) … but that’s not necessarily true. In fact, LLMs may be merely a fad and not the best way forward to better AI, and in particular toward AGI. Instead, LLMs might be best suited only as tools for creativity and initial exploration.

One particular project (AIGO) is an example of a working non-LLM intelligence (one with a working memory). It has been developed over 20 years - it works now, and in many respects it is qualitatively better than LLMs.

An introductory example presentation of AIGO as a personal AI assistant:

About the founder of AIGO, Peter Voss (via Gemini)

Peter Voss is a prominent figure in the field of Artificial General Intelligence (AGI). Here’s a summary of his work:

  • Pioneering AGI: Voss is credited with coining the term “Artificial General Intelligence” [4]. His goal is to create AI that can think, learn, and reason like humans [4].
  • Focus on Cognitive Aspects: Voss’ approach to AGI is different from the mainstream. He emphasizes understanding intelligence and building systems based on cognitive architectures, rather than just using massive datasets for training [3, 4].
  • Leading Aigo.ai: Currently, Voss is the CEO and Chief Scientist at Aigo.ai. This company is developing what they call a “hyper-personalized chatbot with a brain” for enterprise clients [4].
  • Critic of Current Techniques: Voss has expressed reservations about the limitations of popular AI techniques like transformer architectures, arguing for a deeper understanding of how these models work [2, 3].

Here are some resources to learn more about Peter Voss’ work on AI:


Almost everything below about AIGO I’ve pulled from this link:

Here is an example of a better system for talking with humans - one with a real memory. It is shown being used in text-chat mode, but adding a voice interface is just a matter of system cost.

NOTE: I have seen AIGO’s hyperfast language parsing and hyperfast knowledge graph in action in DEBUG mode. It genuinely breaks sentences down completely and merges new facts into a knowledge graph. It can scale to millions of facts: a million real-world common-sense facts, a million facts about an industry or subject, and then a million facts about you - and it still interacts with sub-second responses. It learns and remembers you.
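To make the “facts merged into a knowledge graph” idea concrete, here is a rough toy sketch of my own (AIGO’s internals are closed, so this is only an illustration of the general idea, not their implementation): parsed statements become subject–relation–object triples that get merged into a graph and can then be queried.

```python
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # (subject, relation) -> set of objects, e.g. ("tina", "wants") -> {"cat"}
        self.triples = defaultdict(set)

    def add_fact(self, subject, relation, obj):
        """Merge a new fact (subject-relation-object triple) into the graph."""
        self.triples[(subject, relation)].add(obj)

    def query(self, subject, relation):
        """Return everything known about (subject, relation)."""
        return self.triples.get((subject, relation), set())


kg = KnowledgeGraph()
# Pretend a language parser has already broken these sentences into triples:
#   "Tina is a person."  -> ("tina", "is_a", "person")
#   "Tina wants a dog."  -> ("tina", "wants", "dog")
#   "Tina wants a cat."  -> ("tina", "wants", "cat")
kg.add_fact("tina", "is_a", "person")
kg.add_fact("tina", "wants", "dog")
kg.add_fact("tina", "wants", "cat")

print(kg.query("tina", "wants"))  # {'dog', 'cat'}
```

Obviously the real system’s parsing, ontology and retrieval are far more sophisticated - the point is just that facts land in an explicit, queryable store rather than being baked into model weights.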

Aigo is a real company that has been improving its AI for 20 years. The system works and has several commercial customers paying millions each year. The company would be doing even better with better strategy, marketing and business development. Their technology is top notch and works.

The marketing and video demonstrations undersell what the system can do and is already doing. In a smaller-scale test they performed, 419 natural-language sentences were fed in; the system understood them and placed the facts into a knowledge graph alongside other basic common-sense world knowledge. The system was then interrogated with over 700 questions and was 89% correct, vs 35% for Claude 2 and under 1% for GPT-4.


Some videos from the NextBigFuture link above:

Examples of AIGO working as an agent to manage funds from a bank account – it could be used to manage funds from an :ant: account:

An example of a website using AIGO to help customers:

The ‘Aigo’ project has, over the past 20 years, produced a number of steadily improving versions and sold commercial versions generating millions of dollars per year in revenue.

It uses hyperfast knowledge graphs and best-of-breed, hyperfast language parsing.

The benchmark (conducted at the end of August 2023) tests the ability to learn novel facts and answer questions about them. AIGO was compared against GPT-4 (8,000-token context window) and Claude 2 (100,000-token context window). The AIGO system was pretrained with only a rudimentary real-world ontology of a few thousand general concepts such as person, animal, red, and small.

GPT-4 and Claude 2 were used in their standard form and were not constrained in any way.

The test fed 419 natural-language statements to each of the three systems. These were simple facts, some of which related to each other (e.g., Tina wants a dog and a cat. Actually, Tina only wants a cat). Finally, 737 questions were asked and the answers scored. The responses were evaluated against a reasonable human standard: if a response pertains to the topic, answers correctly based on the correct source of information, and is grammatically sound, it is considered correct.

The AIGO system scored 88.89%.
Claude 2 only managed 35.33%.
GPT-4 was unable to perform the test, scoring less than 1%.
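That “Actually, Tina only wants a cat” case in the test above illustrates why an updatable memory matters: a correcting statement has to replace earlier beliefs rather than sit alongside them in a growing context window. Here is another stand-alone toy sketch of my own (again, not AIGO’s actual mechanism) showing the idea of fact revision in a simple triple store:

```python
from collections import defaultdict

# Toy illustration only: a correcting statement like
# "Actually, Tina only wants a cat" should *replace* earlier beliefs
# for that (subject, relation) rather than accumulate alongside them.
kg = defaultdict(set)  # (subject, relation) -> set of objects

def add_fact(subject, relation, obj):
    kg[(subject, relation)].add(obj)

def revise_fact(subject, relation, obj):
    # Overwrite everything previously believed about (subject, relation).
    kg[(subject, relation)] = {obj}

add_fact("tina", "wants", "dog")     # "Tina wants a dog and a cat."
add_fact("tina", "wants", "cat")
revise_fact("tina", "wants", "cat")  # "Actually, Tina only wants a cat."

print(kg[("tina", "wants")])  # {'cat'} - the stale 'dog' fact is gone
```

An LLM with no external store has to re-derive this kind of correction from the raw conversation text every time, which is presumably part of why the scores above diverge so sharply.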

Longer, but IMO, definitely watch this one - the founder Peter Voss gives a talk about AIGO and answers some questions:

Aigo’s human-like cognition allows it to:

  • Remember what was said and utilize this in future conversations
  • Understand context and complex sentences
  • Use Reasoning to disambiguate and answer questions
  • Learn new facts and skills interactively, in real time
  • Have ongoing, meaningful conversations
  • Hyper-personalize experiences based on user’s history, preferences and goals

Two relevant whitepapers:
Why We Don’t Have AGI Yet

Concepts is All You Need: A More Direct Path to AGI

@happybeing AIGO addresses some of your solid points about the problems with LLMs, so I hope to hear your opinion on it.

@dirvine Peter Voss reminds me a lot of you - he’s clearly one of those deep thinkers who’s been working in the background for years on a new path forward.

I wonder about a partnership with AIGO as they appear to have more advanced AI (and perhaps more accurate to say a more genuine approach to AI) already working as agents. Using Autonomi to secure the data that AIGO gathers on people it works with seems like it would be a great way to give AIGO users a guarantee of their security. I think they have or are working on a blockchain solution (maybe already partnered with another project in our space), but of course :ant: would be much better.

13 Likes

This is excellent and resonates well. I love that he says

  • Generative alone is not the way
  • Backpropagation limits learning to historical info, and I also believe this is why hallucinations are so common.
  • Open-ended MUST be the way forward.

All in all, he certainly has thoughts similar to mine. I need to find out much more about this one. :muscle: nice find chap, nice find

EDIT
Bit worrying that I cannot find any info on how this works, not even in the whitepapers. I was hoping for open-ended neuroevolution or spiking NNs on neuromorphic hardware, etc.

12 Likes

Yes, I believe it’s closed source. He did say in the video though that he’s happy to have private conversations with people about it (probably via NDA).

I will look around a bit more to see if I can dig up anything.

Edit: I haven’t been able to find anything, but I suspect AIGO is just using GPUs and/or TPUs, as I can’t find anything relating Peter Voss or AIGO to neuromorphic computing or spiking neural nets. He’s also a general AI researcher and not specifically into hardware development.

Given that, I expect the hardware requirements for AIGO would be rather large/expensive and not something a home user could run at this time. Not that neuromorphic computing or spiking neural nets are really available to home users either, but if they did become so, one might reasonably expect them to be fairly manageable cost- and power-wise.

2 Likes

Another LLM alternative is verses.ai, which is building AI based on active inference.

They’ve got a bunch of published papers and some of the most cited neuroscientists working for them, as they’re trying to make more bio-inspired AI. They haven’t really shown much of interest yet, though, but they do have a roadmap: Research Roadmap

2 Likes

I am now following Peter on X and have queried about AIGO hardware.

https://twitter.com/peterevoss?lang=en

2 Likes

@dirvine

I was wrong! It just uses ‘standard’ CPUs, according to Peter Voss:

Links to the meetups he mentioned in the tweet ‘to get high level details’ – edit: I had a look through and didn’t see much, though there are upcoming events – but of course not in Scotland :laughing::

4 Likes

@dirvine, I noticed you replied to Peter Voss on X. His email, peter(A.T.) aigo.ai, was posted in the meetup groups he linked in his tweet to me - which is all public, so it’s probably okay to message him that way if he didn’t catch the tweet.

2 Likes

Will Autonomi in time allow for terminal-like computers free of many of the hardware backdoors? I know it’s early days, but as bandwidth increases couldn’t we have terminals that store nothing when powered off?

1 Like

A terminal is still hardware, so if there is a backdoor, then not much can be done about that. Perhaps though you meant software backdoors, in which case, in the future it may be possible to run an Autonomi :ant: OS in a browser sandbox.

1 Like

I’m thinking less hardware means fewer places for backdoors or accidentally weak designs.

My guess is this was designed into the hardware and only just discovered by researchers. Apple doesn’t allow us to have our own keys, and here they screw up encryption on the hardware itself.

1 Like

New substack article by Peter Voss:

I get what he’s saying - I don’t fully agree, but I really think he just needs to open source his company’s AI and find another way to profit.

At this rate they are going to lose any potential competitive advantage - either he will be wrong and LLMs can eventually become AGIs, or some other group will figure out the secret sauce he claims to have - and I want to believe him … but eventually, you have to put up or shut up.

Just my two bits.

3 Likes