AI News - from general AI to LLMs. All posters welcome

At the risk of getting yet another smack for being “political” its obvious that a general economic crash has been the plan of the “elites” for some time.
Massive unemployment, massive inflation and the only ones to benefit will be the self-chosen tribe and their arse licking petit bourgeois poodles.
Just as in Germany in the 20s, a certain segment wlll massively buy up the distressed assets.

“Kristalnacht” was only the German people taking back what had been stolen from them.

This pattern will repeat- much sooner than you think.
Especially if the Yanks continue losing aircraft bases and personnel at the present rate.

1 Like

Apparently this works on existing hardware (or so the AI tells me). However, the AI also gives this caveat:

The Tufts breakthrough is a powerful proof-of-concept for a specific type of problem. It shows that if your task has clear, logical rules (like many in robotics, logistics, or finance), you can achieve massive gains by hardcoding those rules.

However, as industry analysts were quick to point out, this is a different approach from building a flexible, general-purpose AI. For many real-world problems, the rules are not clear, the data is messy, and you can’t hand-code a planner for every situation. In those cases, a general-purpose framework—even a less efficient one—is the only viable option.

So, the 100x efficiency is real, but it’s an achievement for solving structured puzzles with hardcoded rules.
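Not having seen the Tufts code, here is a minimal sketch of what "hardcoding the rules" can look like: when a task's rules are explicit (here the classic two-jug water puzzle as a stand-in for a structured planning problem), a tiny symbolic search solves it exactly, with no training at all.

```python
# Toy illustration of rule-based symbolic planning (my example, not
# the Tufts system): BFS over puzzle states using hand-coded rules.
from collections import deque

def solve(cap_a, cap_b, goal):
    """Find a sequence of jug states reaching `goal` litres in either jug."""
    start = (0, 0)
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        (a, b), path = queue.popleft()
        if goal in (a, b):
            return path
        # Hardcoded transition rules: fill, empty, or pour between jugs.
        moves = [
            (cap_a, b), (a, cap_b),                              # fill
            (0, b), (a, 0),                                      # empty
            (a - min(a, cap_b - b), b + min(a, cap_b - b)),      # pour a->b
            (a + min(b, cap_a - a), b - min(b, cap_a - a)),      # pour b->a
        ]
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [state]))
    return None

print(solve(4, 3, 2))  # a short sequence of states ending with 2 litres
```

No learning happens anywhere in that loop, which is exactly why it is fast and exact - and exactly why it only works when the rules can be written down.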

2 Likes

This is cool :smiling_face_with_sunglasses:

1 Like

Hard coding rules = humans doing coding :rofl:

So if you want efficient and reliable code… :thinking:

Examining the Claude source code reveals places where this has been done to ensure the LLM outputs a correct result. It reveals a lot more which is, um, interesting.

Ironically, LLMs will be strongest where the “rules” are reliable and well known, and less reliable where things are unclear because their training data won’t give rise to clear rules.
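Without having seen the actual Claude internals, the pattern described above usually looks something like this: wrap the model call in a hand-coded validator and retry until the output satisfies the rule. `generate` here is a stand-in for any LLM call, stubbed for demonstration.

```python
# Hedged sketch of "hardcoded rules around an LLM" (generic pattern,
# not any vendor's real code): enforce a rule - valid JSON - in plain code.
import json

def checked_json(generate, prompt, retries=3):
    """Call the model, then apply a hand-coded check; retry on failure."""
    for attempt in range(retries):
        raw = generate(prompt)
        try:
            return json.loads(raw)          # the hard rule: must parse
        except json.JSONDecodeError:
            prompt = prompt + "\nReturn valid JSON only."
    raise ValueError("model never produced valid JSON")

# Stub model: fails once, then complies.
replies = iter(['not json', '{"answer": 42}'])
result = checked_json(lambda p: next(replies), "Give the answer as JSON")
print(result)  # {'answer': 42}
```

The validator is ordinary human-written code, which is the irony the post above is pointing at.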

2 Likes

The “breakthrough” of the Tufts University team makes their AI more like a human as we also have hard-coded rules - emotions and other preset neuronal regions of the brain. And this method allows us to learn much faster - just as the Tufts team found with this neuro-symbolic method.

Also, I think that AIs themselves are going to be used to create these hard-coded rules, yielding perhaps hundreds or thousands of new models, each more specialized to a particular field or area of work.

This method limits the creativity of the model, but brings much greater focus for it as well.

1 Like

That is cool John. A “step” toward a complete exo-suit - pardon the pun! I’m going to hold out for the full iron-man suit with in-built AI butler - Jarvis :laughing:

But for those with difficulty walking, this exo-leg booster should be a great tool.

2 Likes

I don’t agree that there’s an equivalent to hard coding software in humans. You can make any analogy but it’s way out of line IMO. I’m not going to argue the point though.

1 Like

… and now you can get free, instant web hosting for agents:

… of course this has already been possible via GitHub too, just with extra steps and not anonymously.

2 Likes

Interesting research on a memristor built from tungsten, graphene and a ceramic that can operate at very high temperatures. Memristors may be an ideal candidate for AI applications, and high-temperature operation may allow much more efficient cooling - even radiative cooling as used in satellites - so this tech could be a great solution for space-based AI data centers.
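A quick sanity check on the radiative-cooling point (my own back-of-envelope arithmetic, not from the research): by the Stefan-Boltzmann law, radiated power grows as T⁴, so hardware that can run hot is dramatically easier to cool with a passive radiator in vacuum.

```python
# Stefan-Boltzmann estimate: power radiated per square metre of radiator.
# The 900 K figure is an illustrative "high-temp electronics" assumption.
SIGMA = 5.670e-8                      # W / (m^2 K^4)

def radiated_watts_per_m2(temp_k, emissivity=0.9):
    return emissivity * SIGMA * temp_k ** 4

cool = radiated_watts_per_m2(300)     # near conventional silicon limits
hot = radiated_watts_per_m2(900)      # hypothetical high-temp regime
print(f"{cool:.0f} W/m^2 vs {hot:.0f} W/m^2, ratio {hot / cool:.0f}x")
```

Tripling the operating temperature buys an 81x (3⁴) increase in heat rejected per unit radiator area, which is why high-temp parts matter so much off-planet.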

3 Likes

Small models are getting as good as large models in terms of tool use. This could make them useful for a variety of local purposes.
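For anyone wondering what "tool use" involves mechanically, here is a generic sketch of the host-side loop (my illustration, not any particular model's API): the model emits a structured call, the host dispatches it to a local function, and the result goes back to the model.

```python
# Minimal tool-dispatch sketch. The model's output is assumed (for
# illustration) to be a JSON object naming a tool and its arguments.
import json

TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def dispatch(model_output):
    """Parse a call like {"tool": "add", "args": [2, 3]} and run it locally."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](*call["args"])

print(dispatch('{"tool": "add", "args": [2, 3]}'))  # 5
```

The hard part for small models is emitting that structured call reliably - which is exactly the capability the post says is catching up.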

In other news:

Anthropic banning openclaw as they don’t have enough compute.

I find this really interesting because I use mostly Chinese models and have little issue with timeouts due to compute bottlenecks - it does happen, but not often. Given that the Chinese appear to be much more positive about AI, and that China likely has inferior hardware, I don’t really understand why the US is running into hardware constraints when the Chinese are not.

1 Like

Err, what if that is NOT the case and they have chosen to show you only a subset of their capabilities? How very Iranian of them…

Sorry Tyler, your US exceptionalism is showing again…

1 Like

My good friend … everyone knows the best chips are still made in Taiwan (which some consider China) … so don’t tell me I’m talking about American exceptionalism! :wink:

1 Like

Bollocks - the best chips are made at Mario’s, just across the road. £4/bag - expensive but, like Nvidia, probably worth it.

The process of the peaceful re-integration of the island of Formosa back into the Central Kingdom just took a massive step forwards this weekend. :slight_smile:

The resistance is smiling, the East is Red.

1 Like


IMO, what I see here trend-wise is that top models are going to get much bigger. What we can see from the data is that scaling really works - shouldn’t really be a surprise (humans have ~10¹⁵ synapses) but this also means that running a top model on a home server isn’t happening any time soon, unless you are a billionaire and can afford a rack.
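Some rough numbers behind the "not on a home server" claim (my back-of-envelope arithmetic, assuming a dense 1-trillion-parameter model stored at 16-bit precision):

```python
# Memory footprint estimate for a hypothetical frontier-scale model.
params = 1_000_000_000_000                # 1 trillion parameters (assumed)
bytes_per_param = 2                       # fp16 / bf16
weights_gb = params * bytes_per_param / 1e9
gpus_needed = weights_gb / 24             # vs a 24 GB consumer GPU
print(f"~{weights_gb:.0f} GB of weights alone, ~{gpus_needed:.0f} consumer GPUs")
```

That is roughly 2 TB just to hold the weights - before activations, KV cache, or any redundancy - so even aggressive quantization only narrows, not closes, the gap to home hardware.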

Who knew that the actress Milla Jovovich was a coder?

3 Likes

The MIT CompreSSM article is great. One caveat though:

“It’s the stepping stone to then extend to other architectures that people are using in industry today.”

If they are able to translate this work for industry use, it should greatly speed the development of models … or, put another way, compress the training time of models, allowing much more training in the same period and hence more powerful, more refined models.
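For readers new to the architecture family the article discusses, here is the simplest possible state-space recurrence (a generic scalar SSM, not the MIT CompreSSM method itself): a hidden state h is updated once per input token with fixed coefficients.

```python
# Toy scalar state-space model: h' = a*h + b*u, y = c*h'.
# Coefficients a, b, c are arbitrary illustrative choices.
def ssm_step(h, u, a=0.9, b=0.5, c=1.0):
    """One recurrence step: update hidden state h with input u, read out y."""
    h = a * h + b * u
    return h, c * h

h, outputs = 0.0, []
for u in [1.0, 0.0, 0.0, 0.0]:        # an impulse input
    h, y = ssm_step(h, u)
    outputs.append(round(y, 4))
print(outputs)  # [0.5, 0.45, 0.405, 0.3645] - the impulse decays geometrically
```

Real SSM layers use matrices instead of scalars and clever parameterizations for long sequences, but the per-token update above is the core loop that any training-time compression would be accelerating.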

1 Like