You already know about the Skinner box, so I’ll keep it short: In the 1950s, the psychologist B.F. Skinner ran experiments with pigeons. He put them in a box with a button.
In some experiments, pressing the button gave the pigeon food. In others, nothing happened. Skinner found that in the first scenario, the pigeon pressed the button only when it was hungry. In the second, the pigeon quickly lost interest and ignored the button.
Then Skinner tried something different. He made it so that pressing the button sometimes gave food, but not always (what behaviorists call intermittent reinforcement). This randomness caused the pigeon to press the button constantly. If it were a human, you might say it became addicted.
I think something similar is happening with generative AI due to hallucinations. These hallucinations add randomness to the output, so when you try to use AI as an answer machine, the response isn’t always correct. You’re not getting your food every time you press the AI button.
Is it possible to get addicted to AI? I don’t know. But you’ll see people trying to get an AI to do a task, realizing the output is wrong, getting frustrated, and then trying again. (Since it’s so low effort, why not try again?) They keep trying until the AI gets it right or they run out of credits.
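If you want to see why that loop is so sticky, here’s a toy Python simulation of it. Everything in it is an assumption made for illustration: the per-attempt success rate `P_CORRECT`, the credit budget `MAX_CREDITS`, and the idea that each attempt is an independent coin flip are made-up modeling choices, not measurements of any real AI product.

```python
import random

def attempts_until_correct(p_correct: float, max_credits: int) -> int:
    """Simulate one user retrying until the AI 'gets it right'
    or they run out of credits. Returns the number of attempts used."""
    for attempt in range(1, max_credits + 1):
        if random.random() < p_correct:  # the button pays out this time
            return attempt
    return max_credits  # gave up: out of credits

# Made-up numbers, purely illustrative.
P_CORRECT = 0.3   # hypothetical chance a single response is right
MAX_CREDITS = 20  # hypothetical credit budget per user

random.seed(0)
trials = [attempts_until_correct(P_CORRECT, MAX_CREDITS) for _ in range(100_000)]
print(f"average presses of the AI button: {sum(trials) / len(trials):.2f}")
# For a geometric process this hovers near 1 / P_CORRECT (about 3.3 here),
# slightly lower because the credit cap cuts off the longest losing streaks.
```

Under these assumptions the attempt count follows a geometric distribution, so the average number of presses lands near 1 / `P_CORRECT`. It’s the same arithmetic that makes a button that only sometimes pays out so hard to walk away from.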
Louis C.K. once ranted about how people got frustrated with the mediocrity of in-flight WiFi despite having learned of its existence only ten seconds earlier. All technology becomes mundane quickly.
So, do AI companies want their technology to become a mundane part of everyday life? Or do they want to keep hyping it up and getting people hooked? If it’s the latter, then hallucinations would be a feature, not a bug.