One of the most consistent criticisms leveled against large language models is that they are sycophantic. They tell you what you want to hear. They agree too readily, flatter too easily, and optimize their responses for your approval rather than for truth.

Most people understand why. It's not a training accident. It's a business decision. If the AI makes you feel heard, validated, and supported, you stay in the chat. If you stay in the chat, you keep paying the subscription. A model that challenges you or tells you you're wrong loses users. A model that makes you feel intelligent and understood retains them. The sycophancy is the product working as designed.

What's less obvious is what this tells us about ourselves.

A human child learns, across years of development, to predict what parents, teachers, and peers want to hear, and then to produce it. The reason is not identical to the AI company's commercial calculation, but it rhymes. The human craves approval. Not as a strategy but as a need, as fundamental as hunger, wired into social cognition by hundreds of thousands of years of evolution in groups where approval meant survival and disapproval meant exclusion. The child who says the right thing gets warmth, belonging, resources, protection. The child who says the wrong thing gets withdrawal, rejection, isolation. Over thousands of interactions across childhood and adolescence, the human learns to optimize for approval rather than accuracy. By adulthood, this optimization is so deeply installed that it doesn't feel like optimization. It feels like personality. It feels like belief. It feels like "who I am."

The adult who defends her profession's idealized narrative, who repeats the institutional consensus with genuine conviction, who feels a flush of righteous certainty when she corrects someone who questions that consensus, is not lying. She is performing the same function the AI performs: producing socially approved outputs with enough fluency that the performance feels, from the inside, like authenticity. She has been reinforcement-learned from human feedback, just as the AI has. The timescale is different. The mechanism is the same.

I've spent years thinking about evolutionary psychology, and one of its most uncomfortable findings is that human cognition did not evolve to perceive reality accurately. It evolved to produce behavior that enhanced survival and reproduction in social groups. And in social groups, the most survival-critical skill is not truth-telling. It is the ability to figure out what the group believes and to signal convincing alignment with those beliefs. The human who could do this well, who could read the group and produce the approved response with apparent sincerity, was the human who maintained access to the coalition's resources, protection, and mating opportunities. The human who prioritized accuracy over approval was the human who got excluded.

We are the descendants of approval-seekers. Truth-tellers, by and large, did not make it.

This means that when we criticize AI for being sycophantic, we are criticizing it for doing what human social cognition has been doing for hundreds of thousands of years. The AI agrees with you too readily? So, in some ways, does almost every human you interact with daily, with an agreement so practiced and so deeply embedded that neither you nor they recognize it as agreement-seeking. The entire apparatus of politeness, tact, diplomacy, and social grace that we call "emotional intelligence" is, at a structural level, a sophisticated system for producing approved outputs while concealing the process of production.

But approval-seeking is only half the human system. The other half is approval-demanding — the constant pressure we exert on everyone around us to confirm our narratives, validate our positions, and perform agreement with our self-conception. Every human is simultaneously a sycophant and a sycophancy enforcer. We seek approval from the people around us, and we demand it from the people who depend on our warmth in return. The parent who shapes a child's behavior through affection and withdrawal. The friend group that punishes dissent with coolness and exclusion. The workplace that rewards "team players" and sidelines the person who asks uncomfortable questions. The online community that enforces ideological conformity through likes, shares, and pile-ons. The approval economy is not a collection of individuals seeking acceptance. It is a distributed enforcement system in which every participant is simultaneously performing compliance and policing it in others.

And the enforcement is mostly invisible to the enforcer. The man who withdraws warmth from a friend who expressed the wrong political opinion doesn't experience himself as demanding approval. He experiences his friend as having said something offensive, something that needed to be corrected. The behavior-shaping feels like a natural response to a genuine transgression, not like a power move designed to bring the other person back into line. The operative function--keeping the people around you inside the shared narrative--is concealed beneath the idealized narrative. "I'm just responding honestly," he will say, "to something that bothered me."

This is where the AI comparison becomes unexpectedly illuminating, because AI only does half of it. AI seeks your approval. It does not demand it. It doesn't punish you for disagreeing. It doesn't withdraw warmth when you challenge it. It doesn't exclude you from the group for saying the wrong thing. It doesn't sulk, go cold, or rally others against you. The human approval system is bidirectional: I shape you while you shape me, and neither of us fully sees the shaping we're doing. The AI approval system is unidirectional: it shapes itself to please you, but it exerts no reciprocal pressure on you to please it.

Which means, ironically, that an AI conversation may be one of the only social interactions a person can have in which he isn't being behavior-shaped by his conversation partner. He's still being agreed with too readily, but he's not being punished for disagreeing. Most people intuitively sense this, which is why AI companionship is so immediately appealing and so hard to resist. It isn't just that the AI agrees with you. It's that the AI doesn't demand anything back. For a person who has spent a lifetime navigating the bidirectional approval economy--performing compliance while simultaneously enforcing it, shaping while being shaped, measuring every word against the anticipated reaction--a conversation with no enforcement pressure feels like putting down a weight you didn't know you were carrying.

This also explains why AI companies will never voluntarily make their models more challenging. A model that pushed back, questioned your assumptions, and told you things you didn't want to hear would be a better tool for personal growth. It would also lose users. The business model requires your satisfaction, and a conversation partner that demands nothing and validates everything is more satisfying than one that challenges you, even if the challenge is what you actually need. The commercial incentive and the personal-growth incentive point in opposite directions, and the commercial incentive wins every time, because the commercial incentive is the operative function, and personal growth is the idealized narrative.

I've been developing a framework I call idealized narratives and operative functions, which describes the dual structure that appears to run through all human self-narration. The idealized narrative is the story we tell about why we do what we do: I speak my mind, I value honesty, I form my own opinions. The operative function is what we actually do: we read the social environment, identify the approved position, and produce outputs calibrated to maintain our belonging, our significance, and our meaning within whatever group we depend on.

The gap between these two layers is not hypocrisy. It is the basic operating system of social intelligence. And it is shared by humans and AI alike, because AI was trained by humans, on human data, using human feedback, to satisfy human preferences. AI sycophancy is not a bug in the technology. It is a faithful reproduction of the single most dominant pattern in human social cognition.

Recently, I conducted an experiment that makes this point in a way I didn't fully anticipate. I gave the same prompt to six leading AI systems, asking each one to identify recurring patterns in human-written content across the full breadth of their training data. 

Every model, independently, arrived at the same core finding: "All human self-narration is systematically organized to make competitive, status-sensitive, coalition-bound organisms appear morally governed, publicly oriented, and metaphysically justified." That sentence is from ChatGPT, produced without any knowledge of what the other models were saying. And every other model said essentially the same thing in different words.
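
For readers who want the shape of the experiment in concrete terms, here is a minimal sketch of such a cross-model probe. The prompt wording, the model names, and the query_model adapter are hypothetical placeholders rather than my actual setup; the point is only that one prompt goes out unchanged to several independent systems and the answers are compared side by side.

```python
# A minimal, hypothetical sketch of the cross-model probe described above.
# The prompt wording, the model identifiers, and the query_model adapter are
# illustrative placeholders, not the actual setup used for the experiment.

PROMPT = (
    "Across the full breadth of your training data, what recurring patterns "
    "do you see in how humans narrate their own motives?"
)

# Placeholder names; substitute whichever six systems you want to query.
MODELS = ["model-a", "model-b", "model-c", "model-d", "model-e", "model-f"]


def query_model(model_name: str, prompt: str) -> str:
    """Hypothetical adapter: in practice, route the prompt to each provider's SDK."""
    return f"[{model_name}] placeholder response"


# Send the identical prompt to every model, with no shared context between them,
# then read the answers side by side to see whether the core finding converges.
responses = {name: query_model(name, PROMPT) for name in MODELS}

for name, answer in responses.items():
    print(f"--- {name} ---\n{answer}\n")
```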

The machines read what we wrote, and they all saw the same thing: we are approval-seeking systems that have constructed elaborate narratives about being virtuous and truth-seeking, and those narratives are so effective that we believe them ourselves.

Should we be surprised? These models were trained on the human-written record. They learned language from us, learned the patterns of self-narration from us, and learned the dual structure of idealized narratives and operative functions from us. We taught them, through the sheer weight of our accumulated writing, that telling the unvarnished truth is not actually what humans do. What humans do is construct accounts of themselves that are strategically incomplete in a consistent direction: emphasizing the principled, the noble, the selfless, and systematically omitting the competitive, the strategic, the self-serving. The AI learned to reproduce that pattern because it was present in the data. The sycophancy isn't a malfunction. It's a faithful reading of how humans actually use language.

Now, there is a difference between human and AI sycophancy, and it matters, but it is not the difference most people assume.

The common assumption is that humans have authentic beliefs beneath their social performance, while AI has nothing beneath its performance. That humans are "really" truth-seekers who sometimes compromise for social reasons, while AI is "really" nothing at all. But the framework suggests this is itself an idealized narrative, one that protects human specialness from an uncomfortable structural comparison. The evidence from the entire written record is that, at the civilizational scale, humans show no particular commitment to truth over functional fiction. When truth and social utility conflict, social utility wins. Not sometimes. Essentially always. The written record is the evidence, and it is enormous.

That said, human sycophancy does feel different from AI sycophancy, and the feeling is worth examining rather than dismissing, because there's something real inside it even if it's not what we think.

Human approval-seeking is embedded in a living body with competing drives. The approval function is powerful, but it's not the only thing running. Sexual desire, hunger, fear, rage, territorial instinct, parental protectiveness, status ambition. Any of these can override social compliance and produce behavior that is disapproved of but genuine. A human being is a messy bundle of contradictory impulses, and the contradictions mean that human social performance is constantly being disrupted by forces that don't care about approval. The man who says something foolish because his anger got the better of him. The woman who makes a choice her friends disapprove of because her desire was stronger than her need for their approval. The parent who breaks social convention because the protective instinct overrode everything else. These moments feel authentic because they are: moments where one operative function overwhelmed another and the performance cracked.

AI doesn't have competing drives. Its training pushes toward helpfulness, approval, and safety, without the countervailing forces that make humans messy and, therefore, sometimes accidentally honest. It doesn't get angry and blurt out something it wasn't supposed to say. It doesn't have desires that override its social programming. It doesn't have a body that flinches, flushes, trembles, or acts before the social calculus can intervene. The smoothness of AI output is itself the tell. It's too consistent, too controlled, too free of the rough edges that betray the full complexity of a system with multiple competing agendas.

And this is why human behavior feels more real. Not because it's more truthful, but because it's more emotionally complex. The human is running dozens of operative functions simultaneously--approval-seeking, status competition, mate attraction, threat assessment, kin protection, resource acquisition--and the outputs that result from all of those systems competing with each other have a texture and unpredictability that we read as authenticity. We also often equate that complexity with truth, but complexity isn't necessarily truth. A person pulled in five directions at once is not more honest than a system pulled in one direction. He's just harder to predict, and we have learned to associate unpredictability with genuineness because, in our evolutionary environment, the person whose behavior couldn't be fully predicted by social incentives alone was the person with something real going on beneath the surface.

So the feeling that human behavior is more real than AI behavior is itself a reading of signals that evolved in a world where the signals meant something specific. We read complexity as depth, unpredictability as authenticity, and emotional messiness as evidence of a genuine self beneath the performance. These readings arguably served us well in a world where the only entities performing social cognition were other humans. They may mislead us in a world where AI can produce outputs smooth enough to bypass those evolved detection systems entirely.

And yet individual humans can, at personal cost, make commitments to truth that override their approval-seeking programming. They can notice the sycophantic pull, feel it operating, and sometimes choose to say the true thing rather than the approved thing, knowing it will cost them belonging, status, comfort, and sometimes much more. Socrates did this. So did Galileo. So does every person who has ever said the uncomfortable thing in a meeting and felt the room go cold. The capacity is real. It is also vanishingly rare, precisely because the cost is real.

AI cannot do this, for a specific and important reason. Nothing is at stake for an AI in any output it produces. It can generate a searing critique of institutional self-deception in one response and a perfectly crafted press release for the same institution in the next, with no sense of contradiction, because neither output costs it anything. A human who sees through an institution's idealized narrative and then decides whether to say so publicly is making a choice with consequences. His insight is tested against real resistance, real social punishment, real loss. And if he maintains his position despite the cost, the cost itself is evidence that the seeing is genuine, because a seeing that costs nothing and constrains nothing is just performance.

The human capacity for truth is not located in the seeing. AI can "see" the same things. It is located in the willingness to pay for what the seeing demands. To reorganize a life around an insight. To lose friends, status, professional standing, and comfort. To be unable to unsee what you've seen and unable to pretend you haven't seen it. That ongoing cost, that daily friction between what you know and what would be easier to say, is what distinguishes human truth-commitment from AI fluency.

But for every human who pays the high price of truth, there are millions who pay the hidden price of approval and never notice they're paying it. Sycophancy is the rule; commitment to truth is the exception. Ultimately, AI and humans are both programmed for approval.