The Nascent Art of AI-Assisted Programming

Posted on Sun 30 March 2025 in Programming

In his recent conversation with ThePrimeagen (Michael Paulson), Lex Fridman made some interesting points about the emerging landscape of AI-assisted programming. What struck me most was Lex's insistence that developers should cultivate AI collaboration skills now rather than waiting for perfect tools to emerge. This perspective resonates with my own experience so far.

Just as understanding computer architecture makes you a better high-level programmer—helping you anticipate memory usage, performance bottlenecks, and implementation details beneath the abstraction—learning to work with today's AI assistants prepares you for tomorrow's more sophisticated tools. The fundamental skill isn't just using the technology but developing an intuition for its capabilities, limitations, and optimal applications.

This parallels the evolution we've seen with previous programming paradigms. Those who embraced object-oriented programming early gained insights that remained valuable even as languages and frameworks evolved. Similarly, developers who master today's AI collaboration techniques—effective prompting, output verification, integration strategies—are building transferable skills that will remain relevant even as the underlying AI models and related tooling mature.

Developing Through the Present Limitations

Working with today's AI coding assistants requires patience and new approaches to problem-solving. The promise of instant productivity often collides with reality when the generated code contains syntax errors, undefined references, and logical gaps. What initially seems like a time-saver often has to be repaid in debugging effort. But you can view this as a learning opportunity—a chance to develop your ability to quickly identify and correct the AI's misconceptions.

To successfully use the new tools, I try to keep in mind what the AI assistant does and doesn't understand. I'm learning to recognize patterns in AI errors and developing efficient workflows to verify and correct the output. I don't see this as wasted effort—it cultivates diagnostic skills that transfer across projects and will probably remain valuable as AI tools evolve.

Think of these current limitations not as roadblocks but as training grounds. The ability to quickly spot inconsistencies, infer the AI's misunderstandings, and guide it toward better solutions is becoming a critical professional skill. You can let these tools frustrate you, or you can learn to leverage them effectively.

Adapting to Evolving AI Capabilities

As AI systems improve in the coming months and years, we'll need to continuously adjust our collaboration strategies. Soon, these assistants will better understand project context through deeper integration with codebases—remembering architectural decisions across files and sessions.

One important skill will be knowing what kinds of errors to look for. Early adopters who learn to work within today's constraints will have developed an intuition for the critical test cases by the time capabilities expand. They'll let the AI take the lead on boilerplate code, then ruthlessly test its suggestions for critical functionality.

Learning the New Debugging Workflow

One of the most transferable skills you can develop is navigating the new debugging/testing paradigm that AI-assisted coding introduces. Every piece of AI-generated code requires a mental workflow like this:

  1. Understanding what the code attempts to do
  2. Finding its shortcomings
  3. Fixing it while preserving any correct parts

This mental context-switching between creation and criticism can feel more draining than writing code from scratch. But over time, you start developing a more systematic approach to code evaluation—a skill that enhances your abilities whether you're reviewing code from AI, colleagues, or your past self.
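
To make this concrete, here's a small hypothetical example (not from any real assistant) of the kind of flaw this workflow catches, with the review steps marked in comments:

    # Step 1: understand the intent -- a sliding-window average.
    # A hypothetical AI-generated helper: plausible at first read, subtly wrong.
    def moving_average(values, window):
        averages = []
        for i in range(len(values)):
            chunk = values[i:i + window]          # step 2: trailing chunks are
            averages.append(sum(chunk) / window)  # shorter, yet still divided
        return averages                           # by `window`, skewing results

    # Step 3: fix the flaw while preserving the correct structure --
    # iterate only over full windows.
    def moving_average(values, window):
        averages = []
        for i in range(len(values) - window + 1):
            chunk = values[i:i + window]
            averages.append(sum(chunk) / window)
        return averages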

This challenge can also be viewed as an opportunity to strengthen your technical fundamentals. Faced with AI's misunderstandings of language features or design patterns, you're forced to deepen your own understanding. This creates a virtuous cycle where debugging AI-generated code reinforces your mastery of programming principles.

Experimenting with New Programming Paradigms

One way we discover bugs is through testing, manual or automated. With development accelerating and a growing share of code written by someone other than yourself (or not by a human at all), high-quality automated regression tests become more important than ever.

As we become more comfortable with AI assistance, we'll likely start to discover entirely new ways of approaching development tasks. Instead of treating AI as a code generator that requires human debugging, we might explore flipping the workflow entirely. Consider experimenting with having AI generate code based on your executable specifications—detailed test cases that define correct behavior.
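
For instance, a minimal executable specification might look like the following (assuming pytest, plus a hypothetical slugify function in a slug module that the AI is asked to implement):

    # Executable specification: written before any implementation exists.
    # `slug` and `slugify` are hypothetical names; the AI's job is to
    # produce an implementation that makes these tests pass.
    from slug import slugify

    def test_lowercases_and_joins_words():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("C++ & Rust!") == "c-rust"

    def test_collapses_repeated_separators():
        assert slugify("a  --  b") == "a-b"

    def test_empty_input_yields_empty_slug():
        assert slugify("") == ""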

At first sight it may look like humans will handle high-level functionality and debugging while the AI takes care of the "middle part". But high-level functionality can itself be defined in terms of tests, and the better those tests are, the less time is spent debugging. The middle part then tends to become everything below the high-level specs.

In this collaborative model, you focus your efforts where you add unique value: defining system architecture, crafting precise specifications, and establishing robust test criteria. The AI handles the mechanical translation between your requirements and functioning code, refining its output through successive test cycles until the specifications are met.

This test-driven approach transforms the debugging process from reactive error-fixing into proactive quality control. It lets you operate at your highest level of abstraction while the AI handles implementation details.
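
A minimal sketch of such a cycle, assuming hypothetical generate_code and write_code hooks that the caller supplies around an AI assistant's API and the project layout (only the pytest invocation is a real command):

    import subprocess

    MAX_ROUNDS = 5

    def run_tests():
        """Run the spec suite; return (passed, combined output)."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def refine_until_green(generate_code, write_code):
        """Generate an implementation, run the specs, feed failures back.

        `generate_code(feedback)` and `write_code(source)` are hypothetical
        hooks around an AI assistant and the codebase.
        """
        feedback = ""
        for _ in range(MAX_ROUNDS):
            write_code(generate_code(feedback))
            passed, output = run_tests()
            if passed:
                return True
            feedback = output  # the failing-test report becomes the next prompt
        return False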

What if AI gets really good at gaming our tests without producing the results we really want? Well, we'll have to come up with better tests! This can become another virtuous cycle and take software quality to a new level.
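
One direction for better tests is property-based testing: instead of a handful of hand-picked examples, you assert invariants over many generated inputs, which are much harder to game. A quick sketch using the hypothesis library (reusing the hypothetical slugify from above):

    from hypothesis import given, strategies as st
    from slug import slugify  # the hypothetical function from the spec above

    # Properties must hold for arbitrary inputs, so an implementation
    # can't simply hard-code the answers to a few known examples.
    @given(st.text())
    def test_slug_never_contains_uppercase_or_spaces(s):
        slug = slugify(s)
        assert slug == slug.lower()
        assert " " not in slug

    @given(st.text())
    def test_slugify_is_idempotent(s):
        assert slugify(slugify(s)) == slugify(s)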

Summary

Regardless of which specific development models emerge, one thing is clear: programmers must develop new skills to collaborate effectively with AI assistants. Chief among them is the ability to efficiently verify generated code.

What has your experience with AI-generated code been like? Tell everyone in the comments.