What is your position on AI? Are you for or against it?
I’ve heard this question a couple of times already. Part of me wants to push back on the need to see everything in black and white, or on the supposed inevitability of new technology; another part of me has started asking how we apply morality to the use of AI.
Technology has no morality of its own; moral debates arise from how we use it, and from my perspective they’re always about assessing future risks that could affect the greater good. By “greater good” I mean a common benefit for society. So the question of what is good or bad in AI comes down to the risks we’re willing to take.
A few examples:
- AI is a great coding assistant; it lowers the barrier to entry, allowing more people to build technology. But we risk eroding deeper understanding of software development, which could lead to less creative approaches and stunt career growth, since we still need specialists.
- AI is good at refining raw written ideas or summarizing long texts; it frees up time spent on repetitive tasks. But we risk replacing human writing, making the internet more dead than it already is, and contaminating our collective knowledge with poor-quality, repetitive content.
- AI is good at finding patterns in data; this can be very beneficial across fields, helping unlock new advances and discoveries. But we risk losing the drive to develop our own methods for answering questions.
As these examples show, the common pattern is the risk of displacing human input. Our brains are still, in many ways, faster and smarter than LLMs, and our capacity for creative thought proves it.
Our ability to function as a society and to keep pushing the boundaries of our curiosity is directly tied to our ability to think creatively. In a worst-case future where our own technology fulfills our need to think, humanity will be lost. So the question I now ask myself every time I use AI or help develop new AI technology is whether this next project is a step closer to, or further from, that scenario.