April 18, 2024

Thoughts on AI "escaping the box"

There's no reason to fear uncontrollable AI killing all humans because how could it escape the computer? We could always just turn it off.

Focus on the real risks, if any, like misinformation, bioweapons, and bias.

I hear that a lot.

"AI" is just electrical signals zipping around in some wires. It's not attached to hands, feet, or wheels (unless we give it those things). It doesn't have any way to interact with the real world. On top of that, AI requires energy. How would it get energy if it kills all people? We're the ones running the power plants.

Rewind 200,000 or so years.

Human intelligence is just electrical signals zipping around in some meat. It's not attached to gills or wings. Sure, it's attached to hands. But its hands are soft and squishy. It has no dangerous appendages and barely any protective armor. It's too big to manipulate things at the cellular or atomic level, so it can't make a harmful chemical to hurt us. It's squishy and slow and doesn't stand a chance against something with claws and teeth. Are you seriously worried that they might start flapping their tongues at each other? We can always just claw out their throats.

Could we have seen the danger that humans' capacity for intelligence posed to their ancient predators? Could an outside observer have predicted that lions would lose all agency over their future?

I want to take a brief aside here to talk about that word "agency". Maybe that's the more important thing. I often hear "kill all humans" as the threat. But that's just one way humans could lose agency over their future; the more general loss is agency itself. Once we've lost agency and can't shape our own future, what's the point of humanity?

Back to it. Could an outside observer have seen the danger of humans' capabilities? What was it about that pivotal moment a few hundred thousand years ago? What were the key attributes that caused lions to lose their agency to humans? And are there any similarities between what was going on back then and what's going on now with AI?

What was responsible for humans taking away agency from lions?

  • Communication (between individuals/groups and across time to descendants)
  • Matter manipulation

These things are intertwined in a feedback loop.

You can manipulate matter, and that lets you manipulate more matter. Create a shovel, dig more dirt. Dig more dirt, build safer house. Build safer house, have more descendants. Have more descendants, create more shovels.

You can communicate, and now you don't have to discover how to create a shovel. You can start digging dirt from day 1.

I don't mention biological improvements to the brain because 1) every animal has that capability, and 2) it's obviously not very important, considering how little the biological part of our intelligence has improved in the past 200 years compared to how much our collective intelligence has grown in that same time.

Where does AI stand? It's not hard to see that AI would win the communication game. But manipulating matter seems like a bottleneck. And that's a hard requirement. Anything humans care about involves matter. I don't care how "intelligent" something is if it can't interact with the world.

But how do humans interact with the world? There's nothing about our bodies or brains that, without hindsight, would lead someone to predict that we could dig up enough uranium and purify it to the extent required for a nuclear bomb.

TODO: It's like I'm trying to convince someone that something bad will happen but I can't point to why or what or how. I know that's inherent in this. That's the whole point. I'm not as smart as a self-improving AI. But how can I know that Kasparov will win, not knowing what moves he'll make? In that case, I'd know based on priors. His win ratio. His social accolades. What are those priors in terms of AI?

What could you do with just communication?

How bad could things get before matter manipulation?

Let's imagine humans could just communicate. How fast could intelligence have grown? How much of our intelligence gain is attributable to manipulating matter?

Humans would have been limited because the two worked in a feedback loop. Matter manipulation led to improved communication – more descendants, faster communication, healthier diet, better "processing power" both individually and collectively.

But the reason humans needed that feedback loop (and the reason machines don't) is that humans couldn't improve the capacity or capability of their brains. They couldn't modify and improve their own algorithm. The intelligence gains humans made came from a collective intelligence – collective over both time and space. We retain the knowledge of our ancestors in the form of books. We build communities and libraries and the internet. Our gains come from that communication and coordination. We accomplish goals as a collective intelligence.

Machines don't really need communication nor matter manipulation to improve. They can modify their own algorithm. They don't need to manipulate their environment to increase their intelligence.

What limits will machines have? They won't be able to run physical experiments. Will that stop them from learning how to manipulate the world? No single person has run every experiment needed to produce an atomic weapon. It got produced by people communicating. That communication took place in a form that AI has access to. Experiments aren't needed to learn facts about the real world. Einstein developed his theory, I imagine, without ever performing an experiment. He formed his knowledge by pure communication.