Should Organizational Culture Be Comfortable with Judgment Automation?
Summary: Judgment automation and agentic AI are reshaping organizational culture by exposing human bias and redefining decision-making. This article explores how businesses can align narrative, leadership, and systems design to build collective intelligence, improve AI adoption, and evolve human roles from decision-makers to system stewards.
The Stories We Tell Ourselves About AI Matter
Judgment automation is entering organizational workflows, making hiring recommendations, flagging diagnostic risks, scoring creditworthiness, and prioritizing strategic options. And, as with many technological upheavals before this one, the real disruption is not the technology. It is what the technology reveals about us.
The uncomfortable intersection of AI and human judgment creates a mirror moment, one that demands a good story to carry us through a brave new, accelerated era of change management.
Agentic AI — systems that can plan, decide, and act across multi-step workflows — is moving judgment automation from theory into practice faster than most organizations are prepared for. The question is no longer whether these systems can exercise judgment. The question is whether the human story in organizations can evolve quickly enough to work alongside AI — and whether we are willing to see what that means for who we are.
To understand this inflection point in human culture, we might first examine what human judgment is.
According to Science, Human Judgment is Driven by Biology
What we decide in a given moment does not emerge in isolation. It grows from patterns learned over time, beginning with our earliest experiences, which we translate into emotional signals that feel like rational judgment. These signals are also influenced by the body’s condition — hunger, sleep deprivation, and other physiological variables are the mercurial architects of our decisions. This phenomenon is explored in Dr. Lisa Feldman Barrett’s How Emotions Are Made: The Secret Life of the Brain.
Barrett argues that the brain functions as a prediction engine (sounds like an LLM, doesn’t it?): what we experience as rational judgment is shaped by prior exposure, emotional affect, and what she calls the “body budget,” or the brain’s regulation of physiological balance. We do not simply observe reality and decide; we interpret and predict based on past experience and present physical state. This may explain why judges hand down harsher sentences before lunch or why a co-worker’s annoying behavior feels more tolerable after a good night’s sleep.
While human judgment may be mercurial, it also has experiential foundations. Humans can reason abstractly, imagine alternatives, and engage in genuine moral deliberation. But even those capacities operate alongside predictive processes we rarely acknowledge. Human judgment is powerful — and it has always been constructed. Human judgment has never been perfect, yet we rely on it for survival and use it to reinvent civilization again and again, which raises a question: what actually organizes human civilization?
Human Civilization Has Always Relied on Judgment Automation
We have been building judgment automation for centuries; for instance, legal systems developed rules of evidence, sentencing guidelines, and precedent to constrain individual judicial discretion. The goal was to make justice less arbitrary and more predictable. Financial systems introduced credit scoring to replace purely intuitive lending decisions with structured risk evaluations. Corporations adopted performance review frameworks, hiring rubrics, and strategic planning models to convert subjective impressions into repeatable processes.
In the twentieth century, statistics and data science accelerated this trend. Organizations began using algorithms to detect fraud, forecast demand, and guide marketing strategies. These tools did not eliminate human judgment — they formalized parts of it. They made evaluation more consistent, measurable, and scalable.
Agentic AI is the next phase of this evolution. Earlier tools supported human judgment by providing data and analysis. Modern AI systems can participate directly in the evaluation process: comparing alternatives, assigning confidence scores, identifying patterns across massive datasets, and generating recommendations in real time. When multiple AI components are connected into systems that monitor and evaluate one another, the result is structured judgment automation at a scale and speed that is genuinely new.
What is also new is what it reveals.
What Judgment Automation by AI Changes and What It Exposes
When AI formalizes and automates judgment systems, it does not invent bias. It makes existing bias explicit. The hiring criteria, performance review logic, and risk models that organizations have relied on for decades are not neutral tools. They encode stories about value, merit, risk, and worth. AI scales those stories — amplifying both the best and worst of what we have built into our systems.
I have spent my career in design and branding, watching organizations produce decks full of new values, new taglines, and sometimes even new logos — one for each freshly minted leadership directive. “Accountability.” “Excellence.” “Integrity.” These initiatives were called internal branding, as if labeling a thing could transform it. But a broadcast approach, absent deep listening and structural alignment, is at best tone-deaf. You cannot gaslight employees into enthusiasm. Submission is not engagement — and it is certainly not collective intelligence.
AI is doing something branding campaigns never could: forcing organizations to look at their own decision-making logic. When a hiring algorithm flags patterns that mirror the biases of the humans who trained it, that is not a technology failure. That is a mirror. And organizations that are not prepared to look into it will find AI a source of entropy rather than acceleration.
A short inventory of what that mirror tends to show:
• Hiring practices that reward familiarity over merit
• Performance evaluations that fluctuate with managerial mood
• Capital allocation that follows relationships more than stated goals
• Strategic decisions driven by ego but delivered as logic
These are not AI problems. They are human problems that AI makes harder to ignore.
When Authority Shifts, Identity Shifts
Every wave of technological acceleration produces fear of obsolescence, of being left behind. This was true when globalization emerged. It was true at the dawn of the information age. But judgment automation feels different to many people because it touches something organizations have long equated with human worth: the right to decide.
In corporate culture, authority is often defined by who gets to make the call. When that definition is disrupted — when a system begins participating in decisions that used to belong exclusively to people — status shifts, roles destabilize, and identities become uncertain. The “Endangered Employee” that Harvard Business Review describes is not afraid of AI in the abstract. She is afraid of what it means about her value if the judgment she was hired to exercise can be designed into a system.
That fear is not irrational. It is a reasonable response to an incomplete story. When we manage change, we need to treat branding as a system of stories that shapes human behavior: stories that create a sense of safety, of being seen, and of belonging. Stories organize humans at scale, and they can also untangle our alliances with the myths that no longer serve us.
Narrative Is the Infrastructure of Human Identity, Organizing Us at Scale
Humans do not experience reality directly. We interpret it through stories — stories about who we are, what we deserve, what constitutes success, and what makes us valuable. This is how humans organize at scale.
Yuval Noah Harari’s observation in Sapiens is worth sitting with: large-scale human cooperation depends on shared fictions — nations, currencies, corporations, democracy itself. These are coordination systems we have agreed to inhabit together. They work because we believe them, and we reinforce that belief through the rituals, rules, and language we build around them.
Organizations are no different. Brand guidelines, annual reports, employee handbooks, performance rubrics — these are not administrative documents; they are narrative infrastructure. They encode stories about what the organization values, who belongs, and how authority is earned and exercised. When those stories become outdated — when the narrative lags behind technological or structural reality — adoption stalls, engagement erodes, and entropy spreads.
This is where most AI transformation efforts are failing right now. Not because the technology is too new, but because the story is old.
Gallup estimates that roughly 50% of workers worldwide are either quietly or actively disengaged. A separate Gallup indicator from 2023 found that only about 20% of workers feel strongly connected to their organization’s culture. Those numbers predate the current wave of AI adoption. As agentic systems enter organizations and begin touching the judgments that previously defined professional identity, that erosion of meaning will deepen — unless organizations are willing to tell a different story.
The Mindware Shift: From Exceptionalism to Stewardship
Earlier this year, I completed Harvard’s Agentic AI intensive course — the experience that sparked this article. One of the central concepts the course introduced was “mindware”: the cognitive and cultural frameworks through which humans understand their role in decision-making. The argument was that working effectively alongside agentic systems requires a fundamental reframing — not just of workflows, but of how we understand the value of human judgment itself.
I think that reframing is also a narrative design problem, and it is one organizations need to solve deliberately, not accidentally.
The shift looks like this:
| Prevailing Narrative | Emerging Narrative |
| --- | --- |
| Humans are exceptional because we make the best decisions | Humans are valuable because we design and steward decision systems |
| Authority comes from decisiveness | Authority comes from accountability |
| Expertise means being right | Expertise means improving the system |
| Judgment is personal and rational | Judgment is systemic: rational by design |
This is stewardship, not exceptionalism. And it is not a diminishment of human value; it is a relocation of it. In well-designed agentic systems, humans do not disappear from the decision process. We shift roles: from individual decision-makers to designers and governors of decision systems. We define parameters, interpret edge cases, and take responsibility for outcomes. Agents can be designed to check themselves. Humans in the loop can be inserted to check the checkers. The act of designing that system — of making judgment observable, explainable, and improvable — is itself a deeply human contribution.
One case study from the Harvard course illustrated this concretely: a cosmetic company redesigned its product ideation workflow with agentic AI assistance, cutting time-to-concept by roughly 60% while preserving human creative input at the stages where it mattered most. The humans in that system were not replaced. They were repositioned — freed from the volume of generative work to focus on the judgment calls that required genuine creative intelligence.
That is what collective intelligence looks like in practice. Not humans versus machines, but humans and machines, operating in a system designed to surface the best of both.
Culture Will Determine the Rate of Acceleration
Agentic AI will succeed in organizations that can decouple identity from authority — that can treat both human and machine judgment as systems to be examined rather than as status to be protected. It will struggle in organizations where hierarchy is equated with worth, where decision logic is guarded rather than observable, and where expertise is measured by being right rather than by improving the system.
The organizations that will accelerate are the ones willing to do the harder cultural work: making evaluation criteria observable and explainable, designing humans into the loop at the points where stakes are highest, conducting bias audits of both humans and machines as mechanisms for improvement rather than reputational defense, and rewarding epistemic humility over performative certainty.
These are not technical changes. They are narrative changes. And they require leaders who understand that the story an organization tells about itself is not decoration — it is operating infrastructure.
How to Transform the Narrative to Transform the Organization
For leaders who want AI to strengthen culture alongside technology — not replace one with the other — the work begins with seven narrative design changes:
1. Clarifying purpose to articulate why this transformation matters to humans beyond the balance sheet.
2. Aligning narrative with business objectives so that meaning and metrics reinforce one another.
3. Reframing roles, rules, and authority around stewardship rather than hierarchy or individual exceptionalism.
4. Explaining the shift by telling a coherent story about why stewardship and humans-in-the-loop, literally and figuratively, make automation safer.
5. Embedding incentives and measurable goals that demonstrate the narrative is operational, not symbolic, and that there is tangible reciprocity.
6. Designing teams around clear sub-narratives that connect team and individual contributions to system-wide outcomes, without stripping the team of its unique value proposition.
7. Broadcasting the final narrative outward so external stakeholders understand the intent of the undertaking, get excited about the value of the transformation, and see both their role in it and their benefit from it.
AI Is an Invitation, Not an Invasion
I grew up with science fiction, and the most interesting thing about an alien invasion movie is never the aliens. It is how humans respond to them. In Arrival — which grossed more than $203 million at the box office and stayed in the cultural conversation long after — the tension does not come from the heptapods. It comes from the human choice, made looking into a strange mirror, to fight or to organize at scale.
AI is landing a similar mirror before us. The discomfort it generates does not come from the ghost in the machine. It comes from what the machine reveals: that our judgments have always been constructed, imperfect, and overvalued. That hierarchy is not the same as truth. That authority built on the myth of individual exceptionalism was always more fragile than we admitted.
But here is what the mirror also shows: we are extraordinarily good at making meaning of difficult things, and we have always done it through stories. The question is not whether we will tell a new story about human value in the age of AI; rather, the question is whether we will author it or let it automate a flawed narrative.
Judgment automation is not an alien invasion upending human civilization. Like the visitation in Arrival, it is a gift, a power, a responsibility, and an invitation to become better humans.
The organizations that lead the next decade will be the ones that understand this early enough to redesign not just their workflows, but the narratives that give those workflows meaning. FormWave is building frameworks for those ready to take the first steps toward this giant leap.
Citations:
- Barrett, L. F. (2017). How Emotions Are Made: The Secret Life of the Brain. Available at: https://lisafeldmanbarrett.com/books/how-emotions-are-made/
- Harari, Y. N. (2011). Sapiens: A Brief History of Humankind. Available at: https://en.wikipedia.org/wiki/Sapiens:_A_Brief_History_of_Humankind
- Harvard Business Review (2026). Why AI adoption stalls, according to industry data. Available at: https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data
- Gallup (2022). Is quiet quitting real? Available at: https://www.gallup.com/workplace/398306/quiet-quitting-real.aspx
- Gallup (2023). Indicator: Organizational culture. Available at: https://www.gallup.com/471521/indicator-organizational-culture.aspx
- Harvard Data Science Review Courses (n.d.). Agentic AI intensive. Available at: https://live.hdsrcourses.org/agentic-ai-intensive
- MIT Center for Collective Intelligence (n.d.). What is collective intelligence? Available at: https://cci.mit.edu/about/
- IBM (n.d.). Human-in-the-loop AI. Available at: https://www.ibm.com/think/topics/human-in-the-loop
- Villeneuve, D. (2016). Arrival [Film]. Available at: https://en.wikipedia.org/wiki/Arrival_(film)
The FormWave Journal articles represent the shared thinking and lived experiences of the FormWave Collective—a collaboration of professionals committed to surfacing signals, shaping what moves us, and reframing the future of work.
