The Rise of Ikigai Risk: AI's Challenge to Human Purpose
Preserving Meaningful Work in an AI-Driven World
One of the more underappreciated risks of AI adoption in both education and the workplace is known as I-Risk, or Ikigai Risk. I-Risk refers to the potential loss of human purpose in an AI-augmented world, a subtle but profound challenge.
“Ikigai” is a Japanese concept representing one's reason for being, and it encompasses what you love, what the world needs, what you are good at, and what you can get paid for.
In an AI-augmented world, it can be tempting to take shortcuts or lose sight of one or more pieces of this meaningful template. These temptations stem from a desire to increase efficiency and productivity, often in the name of increasing revenues. But what happens when there are no corners left to cut? Over the long term, how does a corporation continue to function when artificial intelligence begins to erode employee purpose?
There is clear demand for AI proficiency among business leaders and decision-makers. But “AI skills” are not yet clearly defined, and they do not always include the all-important soft skills that the World Economic Forum argues will be crucial to maintaining a high-functioning workplace in an AI-dominated world.
As AI reshapes workplaces, the challenge is not just about adopting the tools but about ensuring they are used in ways that preserve purpose, creativity, and long-term growth.
Education sectors have begun to confront this challenge, yet corporations continue to prioritize efficiency gains over a commitment to meaningful work. Two recent studies highlight how this focus on short-term productivity threatens long-term workforce vitality. An MIT study and BCG research present evidence that AI implementation can undermine both job satisfaction and skill retention - two core elements of human purpose in the workplace.
The MIT Study: Productivity vs. Purpose
MIT researcher Aidan Toner-Rodgers recently published a study of 1,018 scientists using AI in the field of materials science innovation at a large U.S. firm. The study tracked productivity and worker satisfaction and drew two noteworthy conclusions. Andrew Maynard at The Future of Being Human offers a helpful take on this study.
First, productivity soared. Scientists discovered 44% more materials and increased patent filings by 39%. The firm also saw a 17% rise in downstream product innovation and data showed that “the output of top researchers nearly doubles.”
However, worker satisfaction plunged. Specifically, 82% of scientists reported feeling less fulfilled or satisfied by their work as a result of using the AI. The core finding revealed that as AI took over more creative aspects of their work, scientists questioned the value of their enhanced outputs.
Mark Daley, author of Noetic Engines, summed up the conundrum in this way:
“Why? Because AI isn't just augmenting human creativity – it's replacing it. The study found that artificial intelligence now handles 57% of "idea generation" tasks, traditionally the most intellectually rewarding part of scientific work. Instead of dreaming up new possibilities, scientists may find themselves relegated to testing AI's ideas in the lab, reduced to what one might grimly call highly educated lab technicians.”
This created a fundamental paradox: despite producing more work, the scientists felt less connected to their achievements. The study highlighted how increased efficiency through AI can lead to a decrease in personal investment and meaning in the work itself.
The MIT Study and Ikigai
The MIT study reveals a direct threat to two core elements of Ikigai - what you love and what you're good at. When scientists surrendered creative aspects of their work to AI, they lost connection to the passion that drove their research. The productivity gains came at the cost of personal investment and creative ownership, suggesting that when AI takes over the meaningful parts of work, increased output becomes a hollow achievement.
What happens when corporations reach for short-term productivity gains at the expense of long-term meaning?
The BCG Study: The Exoskeleton Effect
Another study released in September reveals a different type of threat to meaningful work. The Boston Consulting Group released the results of an experiment conducted with a group of consultants that concluded, “GenAI doesn’t just increase productivity. It expands capabilities.”
In the study, employees at a large consulting firm were given access to GenAI systems to assist in the completion of a set of data science tasks, while another group of workers was asked to complete the same tasks without GenAI assistance.
Consultants with AI showed a marked improvement in productivity, efficiency, and capabilities.
BCG cleverly termed AI an "exoskeleton" that extends human production capabilities beyond a worker's existing skillset. With GenAI at your side, they argue, you can access information and production methods that would normally be out of your reach.
But the study also revealed a concerning trend in skill retention. The study administered a final assessment that tested knowledge across all three tasks. The key finding was that participants who completed specific tasks (like coding) with GenAI assistance showed no better understanding of those tasks in the final assessment than participants who had never performed them at all. Essentially, they retained little to no skills in an area where they had only recently performed at a (relatively) high level.
This suggests that while AI can enhance immediate productivity, it may compromise long-term skill development and understanding.
The BCG Study and Ikigai
The BCG findings further emphasize the risk of an Ikigai crisis. The "exoskeleton effect" shows how AI assistance can mask an erosion of core competencies. It can also produce false positives, feeding a form of confirmation bias that leads workers to believe they are more skilled than they actually are.
Furthermore, if employees can't retain skills from AI-assisted tasks, it could undermine another pillar of Ikigai - expertise and mastery.
This creates a troubling scenario where workers:
Lose confidence in their abilities
Become dependent on AI for basic tasks
Cannot distinguish between quality and mediocre outputs
Miss opportunities for genuine skill development
The Combined Ikigai Threat
Together, these studies paint a picture of dual erosion:
Loss of satisfaction and purpose (MIT Study)
Deterioration of skills and expertise (BCG Study)
This combination attacks the balance that Ikigai represents - the intersection of what you love, what you're good at, what the world needs, and what you can be paid for. When workers lose both their sense of purpose AND their ability to develop genuine expertise, the entire framework of meaningful work begins to collapse.
Curious about how your organization can address Ikigai Risk? Let’s talk. I offer workshops and consulting to help companies integrate AI thoughtfully while preserving purpose and long-term growth.
Durable Skills in the Workplace
The World Economic Forum's Future of Jobs Report reinforces these concerns. The organization published a set of ‘durable skills’ that lean heavily on soft skills as critical differentiators in an AI-augmented workplace. The belief, especially among forward-thinking educators, is that soft skills help workers meaningfully discern when, how, and why to use AI - a decision-making process that remains a complex web many of us are still navigating.
The path forward requires organizations to:
Recognize the distinction between productivity tools and meaning-making activities.
Develop frameworks that emphasize metacognition and creativity.
Create training programs that balance AI utilization with skill retention.
Foster environments where "generative thinking," a term coined by Nick Potkalitsky, thrives.
Protect and nurture employee Ikigai through purposeful work design.
The corporate world stands at a crossroads. The temptation to pursue immediate productivity gains through AI implementation must be balanced against the preservation of human capability and purpose. Organizations that fail to address Ikigai risk face a future workforce of "aimless drones" lacking the creativity and critical thinking skills essential for innovation and growth.
The solution lies not in resistance to AI adoption but in thoughtful integration that expands human potential while preserving purpose. This requires a fundamental shift in how organizations view productivity, success, and human development in this era.
Corporations face a choice: short-term productivity or long-term workforce health. The right balance requires thoughtful frameworks and intentional integration. If your organization is ready to address Ikigai Risk and preserve purpose in the age of AI, let’s start the conversation.
Interested in exploring how to balance AI adoption with workforce vitality? Contact me to schedule a consultation or workshop tailored to your industry.
Must corporations change course? In a word, yes.
For perhaps the first time, for-profit companies may have to define, in the very near term, how their purpose includes doing good in the world: scaling back AI deployments where needed and accepting inefficiencies as the cost of doing future business. Talent will flow to companies that help it flourish; where AI erodes flourishing, companies can expect resentment and acrimony.