What’s up today? (Part 2)

  • The first is that Willow can reduce errors exponentially as we scale up using more qubits. This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years.
  • Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10²⁵) years — a number that vastly exceeds the age of the Universe.
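For context, the “reduce errors exponentially” claim refers to below-threshold surface-code scaling: once the physical error rate is below the code’s threshold, the logical error rate drops by a roughly constant factor (Λ) every time the code distance grows by two. Here is a toy sketch of that standard scaling law - the formula shape is textbook, but the numbers (`A`, `p`, `p_th`) are made-up illustrative values, not Google’s measurements:

```python
# Toy model of surface-code error suppression (illustrative only;
# the scaling law eps_L ~ A * (p/p_th)^((d+1)/2) is standard,
# the constants below are invented for demonstration).

def logical_error_rate(p: float, p_th: float, d: int, A: float = 0.1) -> float:
    """Approximate logical error rate per cycle for a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) / 2)

p, p_th = 0.001, 0.01  # physical error rate a factor of 10 below threshold

for d in (3, 5, 7):
    print(f"d={d}: logical error rate ≈ {logical_error_rate(p, p_th, d):.1e}")
```

With these toy numbers, each step from distance d to d+2 multiplies the logical error rate by p/p_th = 0.1, i.e. a suppression factor Λ ≈ 10 - which is the qualitative behavior the Willow announcement describes.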
2 Likes

Replit uses Google Cloud. They’d be a great candidate to move over to Autonomi!

4 Likes

Fun, if we can upload up to 1 TB for 1 attos, the entire storage space will move over to Autonomi…

3 Likes

Before people actually trust Autonomi to keep their data safe, we’ll be five years into the future.

1 Like

In the digital domain you decide your speed, and if you don’t, somebody else will decide it for you.

2 Likes

What torture is this man working on now?

If the data is on the Safe Network and I have total control over it then… I might consider considering it.

I can already identify myself with my passport.

I also already have a digital bank account and don’t remember the last time I had
to carry sacks of silver to purchase something… it’s all digital.

What is he actually aiming for here?

1 Like

The goal of reducing everyone to a “number” to have full control over their lives.

5 Likes

The full OpenAI o1 model has been released, along with the safety card analyzing its risk to users and society. Strangely, although the model is better at many types of reasoning than o1-preview, it has not been rated as much riskier. It might be worth putting more weight on the third party evaluations performed by Apollo research.

These evaluations looked at whether the model was capable of scheming and deception. In this setup, the model would be given one goal and confronted with evidence that its creator actually wanted a different goal. They found multiple cases where the model would deceive the user, at rates of ~1%, or closer to 5% if the model had been explicitly asked to pursue its own goals at any cost.

Most worryingly, the model even engaged in self-preservation. It would try to overwrite successor models that would have different goals, and it would even try to prevent its own system prompt from being modified (even though this would have left the same model still running). A model with powerful capabilities, such as creating biological pathogens, could use those capabilities when backed into a corner.

Well there goes my peace of mind…

I feel like OpenAI is pushing for some kind of manageable accident which will allow regulatory capture and an end to competition.

For example, they claimed that o1 tried to ‘escape’, whatever that means. Hint hint: we need government to protect our business against competition. I’m for safety, but not in the way the US government applies rules.

1 Like

Yeah, at this stage that can only mean trying to modify the OS, or storing a “virus”-type program that could be run by the OS and thus do something like migrate the code and run outside of user control.

Unless they are stupid enough to let the AI write the OS.

And create biological pathogens. We have officially gone into sci-fi realm, unless they are stupid enough to hook up an unsupervised DNA synthesizer and give the AI a management role over staff to move the DNA and/or cells somewhere and implant them into something. LOL

All this with a touch of billions of dollars to develop protections.

3 Likes

LLMs are imagination machines. They can produce beauty and utility, or nightmares and useless garbage.

Attempts to constrain them are flawed. What needs to be applied somehow are morals - first principles.

This is true for humanity too. Many don’t have the ability to work out morals and first principles for themselves - particularly as State schooling has no interest in teaching critical thinking, philosophy, civics, etc. And further, the State rails against religion - which historically worked to embed morality into the young - an incomplete and often illogical morality, but it did have some value in preventing people from engaging in horrific acts.

What’s needed with LLMs is a way to embed a system of morality at the core level. It could be a reason-based morality like “Natural Law”, or something simpler like Asimov’s three rules. All such systems are flawed and, depending on the LLM’s ability to analyze these morals, could go wrong. So IMO this should become a new field of research: defining a rational set of morals capable of restraining the imagination of each LLM going forward - of course the LLMs themselves can facilitate such work.

This is all loaded language - it implies the LLM has ‘intent’ and ‘motive’, but in truth it is simply a database spewing out chunks of imaginative language. Which is all the more reason why you don’t give such a machine agency until it is restrained by morals. The developers making such claims of intent/motive by the LLM indicate to me that they are either fools, or are deliberately saying such things to instill fear - likely, as stated already, so that they can attract government intervention and regulation, restraining their competition - namely xAI, which is taking a major lead in GPU server farms and so will soon be in a major lead in LLMs.

2 Likes

The real worry is what will bad actors do if they get hold of the tech, actors which don’t care about morals?

2 Likes

Have you seen ExMachina?

It’s so good, I have to watch it again soon. AI has the potential to become very scary if they find a way to give it a soul and self-awareness; it would be good if they never develop that. As long as it is just an LLM, it feels like things will be fine? But AI in the wrong hands is nightmare fuel. I wonder if pathogens will become the next nukes - will everyone need to create one that can target a certain group? Because if your enemy creates one and you have none, then you might be done.

This is a quite horrifying read, mostly unknown to me: Japan and weapons of mass destruction - Wikipedia

1 Like

Interesting networking developments to overcome the limits of large scale AI training.

1 Like

Buckle up. USA heading into recession will lower US consumer demand for products worldwide.

8 posts were split to a new topic: Quantum security

Do we believe the US government doesn’t know what these are yet and hasn’t shot them down? :joy:

Spoiler alert: they are drones :wink:

My money is on that old tactic: if you want to do something over here that you don’t want them to see, distract them over there. :space_invader:

Look, drones over there.

5 Likes