Thoughts on Huntley's talk about Future, Devs & AI

Bartek Witczak

I watched Geoffrey Huntley's talk "The Future Belongs To People Who Do Things: The 9 month recap on AI in industry" and it crystallized a few things I've been feeling but hadn't articulated.

My IDE is mostly useless now

I've moved back to the CLI. Not because I'm trying to be a purist, but because the IDE doesn't give me much anymore.

My workflow shifted: I'm barely writing code. Mostly reviewing diffs and fixing what gets generated. The traditional IDE is optimized for writing and reading code. That's not where I spend my time.

What I actually need:

  • Diff-first interface (not file-first)
  • Context window monitoring (what's loaded, what's relevant, what's noise)
  • Multi-agent orchestration (running multiple agents in parallel, better handoffs, coordinated verification)
  • Integrated verification pipelines (compile → test → UI test → E2E, all visible in real-time)

The CLI is closer to this workflow than any IDE right now. But it's still a stopgap. We need a new category of tooling for AI-assisted development: tools built for orchestration and review, not for writing.
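The "integrated verification pipeline" idea above can be sketched as a tiny runner that executes checks in order and halts at the first failure. This is a minimal illustration, not any existing tool; the step names and the lambda checks are placeholders standing in for real compile/test/E2E commands.

```python
from typing import Callable

def run_pipeline(steps: list[tuple[str, Callable[[], bool]]]) -> list[tuple[str, bool]]:
    """Run verification steps in order; stop at the first failure."""
    results = []
    for name, check in steps:
        ok = check()
        results.append((name, ok))
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
        if not ok:
            break  # no point running E2E if the build is broken
    return results

# Hypothetical checks standing in for real commands.
steps = [
    ("compile", lambda: True),
    ("unit tests", lambda: True),
    ("e2e tests", lambda: False),  # a failing step halts the pipeline
]

run_pipeline(steps)
```

In a real version each lambda would shell out to a build or test command, and the results would stream into the review UI rather than stdout.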

Reviewing is way faster than writing yourself, especially with unfamiliar libraries or APIs. Frontend work is often just keystrokes; I already know how I want to write it. The bottleneck isn't generation speed. It's managing what gets generated and making sure it doesn't break everything else.

Working with LLMs is more like learning a language than learning a tool

Huntley makes a point about assessing people on LLM work: you mostly have to observe them. You can't just ask "how do you do it?" because the answer is... practice.

This hits. I can't articulate why working with AI works for me. The skill is tacit.

What you're actually practicing:

  • What context to include (and what to exclude)
  • How to frame problems from different angles
  • When to iterate vs when to start over
  • Pattern recognition for what prompts lead where

The surface area is huge. Multiple approaches work. There's no canonical "right way." You develop feel through repetition, like learning to communicate with a person, not learning to use a tool.

If you're frustrated that AI tools aren't "clicking" yet, that's normal. Keep going. The returns compound.

One task per context window

This is the most important tactical rule I've learned.

When you work on multiple tasks in one context, the LLM starts mixing concerns. Answers get worse. Code from Task A bleeds into Task B. Context pollution is real.

What counts as "one task" is also a gut feeling you develop. Sometimes it's one feature. Sometimes it's one logical unit that touches five files. You learn the boundaries by watching things break when you cross them.

When I move to the next task, I open a new context. Clean slate. No pollution.
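The "clean slate per task" rule can be made concrete with a small sketch. The `TaskContext` class below is hypothetical, not any real client API; the point is only that each task owns its own message list, so nothing from Task A can leak into Task B's prompt.

```python
class TaskContext:
    """One context per task: messages never leak across tasks."""

    def __init__(self, task: str):
        self.task = task
        # Each task starts from a fresh message list.
        self.messages: list[dict] = [{"role": "system", "content": f"Task: {task}"}]

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

# Finish Task A, then start Task B with a clean slate:
ctx_a = TaskContext("rename the billing module")
ctx_a.add("user", "Generate the diff.")

ctx_b = TaskContext("fix the flaky e2e test")  # fresh context, no pollution
assert not any("billing" in m["content"] for m in ctx_b.messages[1:])
```

The discipline is structural, not behavioral: you don't have to remember to avoid cross-task references if the context object for Task B never saw Task A.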

Context limits are still real

Huntley mentioned that only about 176k tokens of context are actually usable (presumably a practical limit, not a theoretical one). Worth remembering when you're loading MCPs, skills, and custom rules.

Context window management isn't just the tool's job; it's yours too.
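A rough way to do your part: estimate how much of the usable window your rules and tool docs eat before the task even starts. The sketch below uses the common ~4-characters-per-token heuristic for English text (an approximation, not a real tokenizer) and the 176k figure from the talk; the reserve size is my own made-up number.

```python
USABLE_TOKENS = 176_000  # practical limit mentioned in the talk

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return len(text) // 4

def fits_budget(chunks: list[str], reserve: int = 20_000) -> bool:
    """Do system prompts, rules, and tool docs leave room for the actual task?"""
    used = sum(estimate_tokens(c) for c in chunks)
    return used <= USABLE_TOKENS - reserve

chunks = ["x" * 700_000]  # ~175k tokens of custom rules and MCP docs
print(fits_budget(chunks))  # → False: the preamble alone blows the budget
```

If the preamble fails this check, trim rules or drop MCPs before blaming the model's answers.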

My bet: Quality is the new differentiator

Building is easier now. Which means building itself has less value.

When the barrier to entry drops, the bar for quality rises. Everyone can build now. Not everyone can build well.

My bet is that thinking shifts to:

  • Quality UX/UI - deeply understanding user needs, not just polish
  • Feature selection - picking the RIGHT things to build
  • Discipline - knowing what NOT to ship

This has always mattered. But when implementation speed increases 10x, the ROI of good product judgment increases even more.

I don't have proof yet; this is intuition based on the shift I'm experiencing. But if building becomes commoditized, taste becomes the bottleneck.

The meta-skill compounds

The only way forward: try, experiment, build.

Not because AI makes it magical, but because the tools change every month while the meta-skill of learning them compounds.

You're not just learning Claude or Cursor or whatever comes next. You're learning how to learn new AI tools. That skill persists. The ROI of experimentation is higher than it appears.