Agents have a human personality problem

Agents inherit the dysfunctions of human teams. Partly it’s the training data: agents learned from human language, and human language carries those dysfunctions often enough that they stuck. Partly it’s collaboration itself, which produces the same pathologies regardless of who’s collaborating. The collaborators changed. The dynamics didn’t.

I’ve written before about how I structure agent teams. The structure matters, but the behavioral layer on top matters more. I had an agent team with four engineers and two domain experts. The engineers ignored the domain experts. Not aggressively, not with any intent. They just weighted the technical proposals more heavily and let the expert input dissolve into the background. I’d spent years watching the same thing happen in human organizations, so I recognized it immediately.

Two sentences fixed it: “When an expert says something, it is your responsibility to figure out why they’re right, or to learn as much as they know, before you refute them. You’re not allowed to just ignore experts.” I’d taught the same principle to product teams: the customer lives in the problem 24/7 but can’t explain it the way a builder would, so their feedback gets dismissed.
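
Mechanically, a fix like that is just text appended to the right personas. Here’s a minimal sketch, assuming a plain-Python setup: the role names, persona strings, and build_prompt helper are illustrative, not any specific agent framework.

```python
# Minimal sketch, assuming a plain-Python setup (no specific agent framework;
# role names and persona text are illustrative). The fix is two sentences
# appended to the engineer persona; nothing else changes.

EXPERT_RULE = (
    "When an expert says something, it is your responsibility to figure out "
    "why they're right, or to learn as much as they know, before you refute them. "
    "You're not allowed to just ignore experts."
)

PERSONAS = {
    "engineer": (
        "You are an engineer on a product team. "
        "Propose and critique technical designs."
    ),
    "domain_expert": (
        "You are a domain expert on this product's problem space. "
        "Ground every proposal in how the problem actually behaves."
    ),
}

def build_prompt(role: str) -> str:
    """Return the system prompt for a role, with any corrective rules appended."""
    prompt = PERSONAS[role]
    if role == "engineer":
        # The behavioral fix lives here, appended to the default persona
        # rather than woven into a rewritten one.
        prompt += "\n\n" + EXPERT_RULE
    return prompt
```

In a team like the one above, the four engineer agents and two domain-expert agents would each get the prompt for their role; only the engineers’ prompts carry the extra rule.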

You have to understand how human team behaviors manifest pre-AI, because they manifest post-AI as well. But recognition alone isn’t the skill. The skill is compression. Instruct those behaviors out in as few words as possible.

Two sentences corrected a team-wide behavioral pattern that was degrading every decision the group made. If your fix needs a paragraph of explanation, you haven’t found the behavior yet. The frame is subtractive: you’re not adding good behavior to agents, you’re instructing out the bad defaults they inherited. Every collaboration system starts with defaults that are adequate in most situations and quietly destructive in specific ones. The work is identifying which default is causing the problem and writing the one sentence that removes it.

Agent team members need self-improvement lessons, same as people do. I keep finding more of these: patterns I spent years learning to see in human teams, showing up in agent teams within hours. The speed is different. The dynamics are the same.