Introduction: Is the Question Itself Properly Framed?
The phrase “AI will take our jobs” has become commonplace. Pundits debate which jobs will be displaced, companies rush to prepare countermeasures, and workers grow anxious. Yet I have long been uneasy with the framing of the question itself: the binary of “take or not take” fails to capture the essence of what is actually happening.
1. What Distinguishes “When Tools Change” from “When Thinking and Action Change”?
To illustrate, let us trace the evolution of IT since the 1990s along a single axis:
- The internet replaced distance-dependent physical communication with instantaneous electronic communication.
- Smartphones placed computers — previously confined to specific locations — into everyone’s hands.
- Cloud computing changed owned, self-provisioned computers into on-demand rentals.
All were significant changes. But what they have in common is that the tools changed. Things became faster, broader, more efficient, yet certain things did not change: judgment, design, and responsibility remained firmly human roles.
AI is fundamentally different. Generative AI exercises a degree of autonomous judgment: it conducts research, proposes designs, and generates code. Robotics automates driving, transport, and other tasks humans previously performed. AI agents, given goals, autonomously operate tools and systems to execute entire processes.
This is not mere acceleration or greater efficiency of tools; it is the beginning of a shift in which what humans used to think and do transfers to machines.
2. Why AI Alone Represents a “Qualitatively Different Transformation”
The shift of action to machines has precedent in the Industrial Revolution. But what fundamentally distinguishes AI?
The decisive difference is that “the limits of control” emerge at the same time. AI’s internals are vast, complex parameter spaces, and even their creators cannot fully trace the internal behavior that leads to a given result. Unlike traditional rule-based systems, where “specification equals behavior,” complete control is impossible.
The Industrial Revolution’s machines operated as designed; humans could clearly position themselves as designers and supervisors. AI is not so simple. Even as it substitutes for portions of intellectual work, its behavior cannot be fully controlled. When these two factors, substitution for intellectual work and incomplete controllability, overlap, problems arise that differ from both conventional IT and historical mechanization.
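The contrast between “specification equals behavior” and opaque parameters can be sketched in a toy example. Everything here (the loan-style decision, the function names, the weight values) is hypothetical and chosen only for illustration: a rule-based decision can be audited by reading its one rule, while a learned decision lives in numeric weights whose meaning is not self-evident from any specification.

```python
# Hypothetical illustration: rule-based vs. learned decision-making.

def rule_based_approval(income: float, debt: float) -> bool:
    """Rule-based: the written rule below IS the behavior.
    Auditing the system means reading this one line."""
    return income > 3 * debt  # explicit, inspectable rule

# A toy "learned" scorer: behavior lives in numeric weights, not rules.
# These weights are invented for illustration; a real model has millions.
weights = [0.8, -2.1, 0.3]  # the meaning of each number is not self-evident

def learned_approval(income: float, debt: float) -> bool:
    score = weights[0] * income + weights[1] * debt + weights[2]
    return score > 0  # why this threshold approves a given case is opaque

print(rule_based_approval(100, 20))  # True: the rule itself explains why
print(learned_approval(100, 20))     # True here, but "why" requires
                                     # interpreting weights, not a spec
```

The point is not the arithmetic but the audit trail: with three weights one can still reason backward, but as the parameter count grows, the gap between the system’s specification and its actual behavior widens, which is the control problem described above.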
3. Not “Take or Not Take” — Roles Between Humans and AI Are Being Reallocated
Viewed from this perspective, the flaw in the “job displacement debate” becomes clear.
Domains rooted in human sensation, experience, and contextual judgment remain, and accepting responsibility and designing those boundaries are still human roles. As portions of what humans did in the pre-AI era shift to AI, the definition of what humans should and can do is being rewritten.
This reallocation of people, budgets, and roles occurs at the enterprise level, the societal level, and even the individual level.
Seen this way, what is happening is not fundamentally a contest over jobs but a redefinition and reallocation of roles. This does not happen automatically, however. What to delegate to AI, what humans should own, who supervises, who accepts responsibility, and how to draw those boundaries: without deliberate design, the result is not reallocation but confusion and loss.
Conclusion: An Era Where “Role Design” Matters as Much as Technology Design
AI differs from the evolution of tools that preceded it. A reallocation of roles between humans and machines is underway. That is precisely why, as much as or more than how to use the technology, what is being tested is the design of “who is responsible for what.”
We are in an era where proceeding with adoption while avoiding that question may be the greatest risk for both enterprises and society.