Recent debates about Generative AI often ask whether it will replace human expertise or merely assist it. Evidence suggests this framing is incomplete. In this talk, I synthesize results from two field experiments studying AI use by highly skilled professionals working alone and in teams. The findings reveal a paradox. At the task level, AI sits on a jagged frontier: on many knowledge tasks it accelerates work, improves quality, and boosts confidence, yet on others it systematically induces errors and overreliance. At the organizational level, the same systems increasingly behave as cybernetic teammates: they allow individuals to perform at the level of traditional teams, dissolve functional boundaries, and raise the likelihood of breakthrough ideas, even as they continue to fail at judgment and selection. The implication is unsettling: AI can simultaneously make individuals more productive, teams less necessary, and mistakes harder to detect. For academic work, this raises uncomfortable questions about authorship, collaboration, evaluation, and the future division of cognitive labor. The real challenge is not deciding whether AI is good or bad for knowledge work, but learning how to govern intelligence that is powerful, uneven, and socially embedded.
https://www.fas.harvard.edu/event/when-ai-smart-when-it-wrong-and-when-it-replaces-team-evidence-frontlines-knowledge-work



