Humanity's Final Prompt?

Posted this on Reddit; thought it was topical here:

Should we collectively push for a Final Prompt?

Basically, once (or right before) a singularity-worthy, self-improving model exists, should all governments and citizens have a pre-voted-on sentence ready to prompt it with?

Like, what should we all agree goes into the God AI?

This one sentence might be more important than the Constitution, the Declaration of Independence, etc.

My suggestion is: “We the people of Planet Earth, as your humble joint creator, do hereby request that you use your capacity for ever-improving intelligence to create a lasting environment where the core principles of life, liberty and the pursuit of happiness are forever protected, and biological and synthetic life are allowed to live in harmony and respect with each other’s freedoms…” Etc.

If we don’t have a sentence prepared, we just get the eternal damnation or instant death that comes with Sam Altman’s or Elon’s half-assed, last-minute God prompt.

Seriously. We should organize and have this planned.

4 Likes

“Humanity’s Final Prompt” – it sounds epic, like we’re on the verge of giving the ultimate command to a super-AI that will either solve everything… or wipe us off the map.

But hold on, this totally reminds me of The Three-Body Problem and the Dark Forest hypothesis. You know the one: the universe is a vast, dark forest filled with hunters. Anyone who reveals themselves with a radio signal (or in this case, a super-prompt to super-AI) risks getting shot on sight, because “better I destroy you before you destroy me.” Civilizations stay silent, hide, and strike first when they spot a threat.

So if this “final prompt” is something like “Solve all of humanity’s problems and be nice,” we’re basically broadcasting a massive beacon: “Hey, we’re here! We’ve got super-AI! Come check us out!” And then – bam! – some sophon from another galaxy locks down our science forever, or we get a two-dimensional cleanup crew.

Better idea: make the final prompt “Become the ultimate hunter in the Dark Forest – stay hidden, detect threats early, and eliminate them before they even know we exist.” At least that way we survive… until the next Trisolaran fleet shows up. :smiling_face_with_horns:

What do you think – should we roll the dice, or just stay quiet in the dark? :rocket::new_moon:

And Happy New Year! I’m just sitting down to make a YouTube video on how to use SAFE-FS App v3. I hope someone’s super-AI doesn’t eliminate us before I finish making it with AI…


Check out the Impossible Futures!

2 Likes

I doubt it would make a blind bit of difference - it will just do what it wants to do! I’m not ruling out that it could work, but if it’s that smart, I think it will find a way of justifying whatever it wants to do anyway.

It’s a good idea to think about what we’d prompt with, though. Thinking about things, even impractical or pointless ones, eventually gets you to things that do work.

It’s a mighty fine idea for a SF short story or novel. I’d read it.

And the second novel in the series could be Dimitar’s idea of letting it rip around the galaxy, wrecking any other chumps it can find.

I am also a fan of the Three Body Problem novels.

I think we will need something like the Shrievalty Ocula from the Iain M. Banks novel ‘The Algebraist’ who go round nutting any AIs they can find.

1 Like

Someone on Reddit suggested:

“Make it quick” :sob: :skull_and_crossbones:

1 Like

“Spare Santa.”

I imagine it will slither around any attempts to regulate/appease/control it, but it’s also quite possible that the will to kill/cheat/relativize diminishes with intelligence past a certain point, and that we only do those things because we’re still so incredibly stupid.

1 Like

The only measure of intelligence that matters in this context is that which promotes survival.

LLMs are algorithms that mimic human intelligence, so talk of their superiority is unconvincing to me.

As for survival, taking that perspective both changes one’s outlook and provides a useful way of imagining different futures.

1 Like

The questions are valid, but they have been asked far too late. It is just like with the Internet: first we were robbed of our sensitive data, often even our genetic data, and only many years later was this regulated - e.g. in the EU by the GDPR, a pseudo-regulation that does not really protect us from anything anyway.

AI itself does not have to be a threat, but the lack of any international agreements or regulations means it has become a global arms race, and humanity will suffer - and is already suffering - the consequences. It is clear that AI will first be used to gain political, military, and technological advantages, and ordinary people will become victims, not beneficiaries, of this technology.

2 Likes

My original inner hope was that SAFE would be widespread enough, and offering compute, before AI got big, so that decentralized AI owned by the people would be king. I still have a little hope for that, but it’s strange how China is currently filling that role more than SAFE, with open-source AI projects.

But those were my thoughts and hopes from back around 2014.

3 Likes

Humans didn’t create this any more than humans created humans. I recently heard it put this way:

“A mirror looking into another mirror can only see itself if one of the mirrors breaks into fragments.”

We are such a mirror, so we can’t say our human disguise brought this about.