...how much of that saved time using AI is also dedicated to learning how it did what it did?
I recently finished reading Mockingbird by Walter Tevis. I love science fiction, and this book caught my attention in a local bookstore. Even though it was published back in 1980 and isn’t Tevis’s most popular work, it presents a hypothesis about the future that feels quite real and inevitable: the risk of technology creating a kind of illiteracy, and a grim scenario where nobody is able to fix the robots that do everything for us.
To give some context, here’s the premise in a bit more detail. The story takes place in a future where robots have taken over all the jobs humans don’t like doing, especially anything that requires even a basic level of thinking. Humanity is left to focus on pleasure and comfort. But that “optimize for humans” instruction, taken to the extreme, produces a society where humans are numb, devoted to an individualistic way of living that avoids any kind of stress. Over time, nobody even knows how to read anymore because, in theory, there’s no need. Meanwhile, the world is full of buggy robots, and no one is capable of fixing or even correcting them. In this future, robots aren’t evil. They’re just abandoned projects, slaves to their own programming.
For me, that’s such an interesting twist. A lot of science fiction swings between being either very optimistic or very pessimistic about technological progress, but fewer stories focus on a more practical and insightful problem: technology is never perfect, and it always comes with a maintenance cost. If we forget how to understand and maintain what we build, we can end up in a spiral of self-inflicted harm.
Humans forgetting what they invented isn’t new; history is full of examples (the recipe for Roman concrete, for instance, was lost for centuries). What feels alarming now is the pace at which it could happen in a world where more and more of our tools are black boxes, especially with artificial intelligence. AI also creates a stronger incentive to stop acquiring the knowledge needed to understand and adjust the technology itself, which brings us uncomfortably close to a scenario like Mockingbird.
That scenario is still a long shot, of course, and it leans toward a pessimism I’m not a fan of. But it did trigger some questions about how I use AI day to day. It’s an amazing assistant for speeding up projects and exploring ideas, but how much of that saved time is also dedicated to learning how it did what it did? How does scale change the way we apply AI, and the risks we accept? Those are such interesting challenges to think about! And I guess as long as some of us keep reading, learning, and thinking, we will be fine.