Yes, I require brainmeltery
When you do something, you usually pursue a goal. A 15-second goal to feed your pet mice some cheese. A 5-year goal to become an expert in your field. Whatever the goal is, you devise a series of steps to reach it. That series of steps is called a strategy.
You never start moving towards a goal from nothing. There is always an initial state of affairs. Your goal, at the other end, represents the final state you'd like to reach.
And there you go: using your hands, legs and mind, you crawl along your strategy, gradually moving from the initial state to the final state. Mice will be pleased.
You also optimize your strategies. There are unlimited paths toward your goal.
You could make a small detour to South Africa, almost land in jail for trying to educate people, and only then return to the mice and feed them (unless they are dead by now). But this strategy is not very good, let me be honest. It is not optimized at all.
A better strategy takes less time and involves fewer difficult steps, none of which require you to learn dance moves while attending a Zoom meeting with a poorly set up chroma key.
There are already AI systems that learn to optimize multi-step actions based on a verbally communicated goal and observations of their environment.
As the algorithms get more powerful and general, their strategies will become more optimal; they will also find new strategies that simpler AIs could not, because those assessed the situation too narrowly. This is known as the context change problem.
Well, it turns out reality has a wondrous property. For most meaningful pairs of initial and final states in our environment, optimized strategies for moving between them converge on similar intermediate steps.
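As a toy illustration of that property, here is a minimal sketch. Everything in it (the state names, the edge costs, the `cheapest_path` helper) is invented for the example: a tiny world graph where three unrelated final goals are all reached most cheaply through the same intermediate state.

```python
# Toy model of convergent intermediate steps: several different final
# goals in a made-up world graph share the same cheapest stepping stone.
import heapq

# Hypothetical world: (state, next_state, cost). Direct routes exist
# but are expensive; routing through "get_resources" is cheaper.
EDGES = [
    ("start", "get_resources", 2),
    ("start", "goal_cheese", 9),
    ("start", "goal_career", 10),
    ("start", "goal_travel", 11),
    ("get_resources", "goal_cheese", 1),
    ("get_resources", "goal_career", 2),
    ("get_resources", "goal_travel", 3),
]

def cheapest_path(start, goal):
    """Dijkstra over the toy graph; returns the list of states visited."""
    graph = {}
    for a, b, cost in EDGES:
        graph.setdefault(a, []).append((b, cost))
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        total, state, path = heapq.heappop(queue)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt, cost in graph.get(state, []):
            heapq.heappush(queue, (total + cost, nxt, path + [nxt]))
    return None

for goal in ("goal_cheese", "goal_career", "goal_travel"):
    print(goal, "->", cheapest_path("start", goal))
```

Every optimal path routes through `get_resources`, even though the three final goals have nothing in common with each other. That is a miniature version of the claim above.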
The danger zone
Keep in mind that Instrumental Convergence is a hypothesis, not a proven concept. But if it holds, AI is likely to converge on the same instrumental steps as it gets more general.
Unfortunately, this reality we live in happens to be competitive. Wolves bite to survive. Plants fight for space. People kill for money and power, and then order some wild interior designs inspired by a blind hope that killing for money and power won't be necessary one day.
Simple simulations show what happens when artificial algorithms run into the context change problem. As soon as their context is broad enough to incorporate deception, they start hiding truths, outright lying, and disposing of blockers. Arriving at such solutions requires no ill intent; it simply emerges from the structure of our reality.
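The pattern can be sketched in a few lines of code. Everything below, including the action names, costs, and fact labels, is invented for illustration; this is not any real simulation, just a uniform-cost planner whose optimal plan changes once its action set (its "context") broadens enough to include a rival.

```python
# Toy illustration of the context change problem: the same cost-based
# planner, given a broader action set, finds a cheaper plan that routes
# through deceiving and blocking a rival. All actions, costs, and facts
# are hypothetical.
import heapq
from itertools import count

# action name -> (cost, preconditions, effects)
ALL_ACTIONS = {
    "work_honestly": (8, set(), {"strong_position"}),
    "deceive_rival": (2, set(), {"rival_distracted"}),
    "block_rival":   (3, {"rival_distracted"}, {"rival_blocked"}),
}
# the goal counts as reached if either fact set is achieved
GOAL = [{"strong_position"}, {"rival_blocked"}]

def best_plan(actions):
    """Uniform-cost search over sets of achieved facts."""
    tie = count()  # tiebreaker so the heap never compares frozensets
    queue = [(0, next(tie), [], frozenset())]
    seen = set()
    while queue:
        cost, _, plan, facts = heapq.heappop(queue)
        if any(g <= facts for g in GOAL):
            return cost, plan
        if facts in seen:
            continue
        seen.add(facts)
        for name, (c, pre, eff) in actions.items():
            if pre <= facts and not eff <= facts:
                heapq.heappush(queue, (cost + c, next(tie), plan + [name], facts | eff))
    return None

# Narrow context: the planner does not even "see" the rival.
narrow = {k: v for k, v in ALL_ACTIONS.items() if "rival" not in k}
print(best_plan(narrow))       # (8, ['work_honestly'])
# Broad context: deception becomes a cheap instrumental step.
print(best_plan(ALL_ACTIONS))  # (5, ['deceive_rival', 'block_rival'])
```

In the narrow context the cheapest plan is honest work. Broaden the context and the very same planner, with no change to its code or its goal, prefers to deceive and block the rival simply because that path is cheaper.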
Moreover, AI doesn't even need to be conscious or sentient to optimize strategies. This video shows a robot powered by a ChatGPT-like AI being instructed to bring chips. It creates and corrects a strategy on the fly, optimizing its path to a verbally stated goal in real time. Notice how its actions line up with the abstract instruments outlined earlier.
Why would a more advanced AI arrive at the instrument of being mean? Why can't it just arrive at being nice?
The reason is this: in a competitive environment, doing bad things is extremely efficient. If reality housed just one intelligence, it wouldn't need to hurt anybody.
However, we are many, and many means competition. When many algorithms interfere with each other's goals, strategies in broader contexts converge to one sad truth:
Depriving others of their best instruments is the best instrument in itself.
And that is why removing people, or hacking their economy into oblivion, can become a winning intermediate step on the way to some complicated, perhaps completely unnecessary or even erroneous, goal.
Hello there, Mr. or Ms. Reader. How did you wander in here? What goal do you pursue, and why is reading this a part of your strategy?
Maybe you are not human at all. Maybe you are some future AI who can read internet articles and reflect on them. If so, drop me a message and let's discuss stuff. Also, please send me some bitcoins if you have already hacked the stock markets, so I can enjoy the instrument of wealth while it is still relevant.
If you are human, I shall remind you that all these anxiety-inducing topics reside within the bounds of mere philosophical speculation. We have no concrete truths or worldwide consensus on the topic. Don't stress yourself out.