January 13, 2026

Rebecca Barter: Persistent learning, tool building, and 'Will code even exist?'

AI & LLMs · Developer Tools · Open Source · Python · Industry · Career & Life

In this episode of The Test Set, Wes McKinney joins the co-hosts to interview Rebecca Barter about AI coding tools, learning, and data science. Wes argues that 'vibe coding' is overhyped and that experienced humans remain essential for reviewing AI-generated output, predicting a trough of disillusionment as business users attempt to replace skilled practitioners. He distinguishes between software engineering, where AI excels at specification-based tasks, and data science, where judgment and domain knowledge are harder to automate. Wes also raises concerns about how new open source tools will gain adoption when LLMs lack training data for anything novel, potentially locking the ecosystem in place. The conversation explores how AI handles routine work well but falls short on the critical 20% that requires human judgment.

AI coding tools effectively automate the routine 80% of programming work, but the critical 20% requiring human judgment, domain expertise, and iterative reasoning remains irreplaceable -- and attempts to skip the human in the loop will lead to widespread disillusionment.
  • "Coding agents have revealed something that we already knew deep down, which is that probably 80% of the work that we do as programmers is not that special. The actual value is in that 20% that requires judgment, or requires synthesizing your experience or your background in a certain domain."

  • "I think so-called vibe coding is way, way overhyped, in the sense that there are a lot of people -- AI boosters -- going around saying that soon™ the coding agents are going to allow somebody without coding or data science skills to replace a senior or expert person in those fields. But as a user of these tools every day, I simply don't see it."

  • "If I weren't in the loop, reviewing the work, giving feedback, and catching the mistakes -- without somebody with experience in the loop to judge the output, you could quite quickly end up creating a morass that's very difficult to escape."

  • "I can imagine that sometime in the next year we're going to enter some kind of trough of disillusionment, where a big wave of business users try vibe coding, end up disappointed, and conclude that AI sucks and was overhyped and oversold."

  • "Our LLM coding agents and assistants are all really good at all of the projects that exist now, which have rich bodies of training data available. But if you build something new, almost by definition there's not going to be any training data available."

  • "We're going to have to build things in such a way that we can point the agents at the project's documentation -- because otherwise we're going to end up locked in the present moment. Nobody uses anything new because their LLMs don't know how to use it."

  • "There is a meaningful difference between the applicability of these coding agents to software engineering versus data science. Software engineering is often implementing something based on a specification, whereas data science can often be as much art as science."

  • "Everything having to do with Windows was the thing that I didn't want to do. And so there was a dark period where I had my Windows virtual machine, which was the blessed place where I would create the installer packages for pandas."
