[{"content":"This is Part 1 of a series that takes a GNOME application from an empty directory to acceptance into GNOME Circle. Each post is self-contained, but the series follows a single arc — and a real app — through every stage of the journey.\nWhy GNOME\nIf you\u0026rsquo;re building a desktop Linux application in 2026, you\u0026rsquo;ve got choices. KDE Plasma has Kirigami. Elementary has Granite. You can reach for Electron, Tauri, or a dozen other cross-platform toolkits and call it a day.\nI chose GNOME because of what it feels like to use. The GNOME desktop provides a clean, distraction-free approach to computing. Its consistency rivals early macOS — there\u0026rsquo;s a feeling that everything is in its place. When I started building Moments, my photo management app, that consistency carried over into the development experience in a way I didn\u0026rsquo;t expect. The GNOME Human Interface Guidelines aren\u0026rsquo;t suggestions — they\u0026rsquo;re a design language. Follow them and your app inherits a coherent visual identity, consistent interaction patterns, and accessibility support you didn\u0026rsquo;t have to design yourself. Your header bar looks right. Your adaptive layout works the way users expect. Your keyboard navigation is correct.\nThat matters more than it sounds like it should. Users develop muscle memory around their environment. An app that respects that muscle memory earns trust immediately. An app that doesn\u0026rsquo;t — even if it\u0026rsquo;s technically superior — creates friction. GNOME gives you a head start on eliminating that friction, but only if you actually use the toolkit the way it\u0026rsquo;s meant to be used.\nThe ecosystem is also unusually healthy for an open source desktop project. Flathub provides distribution. GNOME Circle provides recognition, infrastructure, and community. The tooling — Builder, Workbench, Cambalache — is actively maintained and improving. 
And the community, while not enormous, is engaged and welcoming in a way that larger ecosystems sometimes aren\u0026rsquo;t.\nWhy Rust\nGNOME has traditionally been a C ecosystem. GTK is written in C. GLib is written in C. The GObject type system — which underpins everything — is C\u0026rsquo;s answer to object-oriented programming, complete with manual reference counting and a macro-heavy convention system that works remarkably well once you\u0026rsquo;ve internalised it.\nYou don\u0026rsquo;t need to write C to build GNOME apps. The GObject Introspection system generates bindings for dozens of languages, and three in particular have strong GNOME stories: Python with PyGObject, Vala (a language designed specifically for GObject development), and Rust with gtk-rs.\nI\u0026rsquo;m writing this series in Rust, and I think it\u0026rsquo;s the right default choice for new GNOME apps in 2026.\nThe ecosystem has moved. Look at recent GNOME Circle apps — Fractal, Switcheroo, Hieroglyphic — the Rust cohort is growing fast. When you hit a problem building a Rust/GTK app, there\u0026rsquo;s now a meaningful body of real-world code to reference. That wasn\u0026rsquo;t true even two years ago.\nThe gtk-rs bindings are mature. The gtk4-rs and libadwaita-rs crates provide safe, idiomatic Rust wrappers around the full GTK4 and libadwaita API surface. They handle reference counting, type casting, and signal connections in a way that feels natural in Rust. These aren\u0026rsquo;t thin C bindings with unsafe blocks everywhere — they\u0026rsquo;re a genuine Rust API for building GTK applications.\nRust\u0026rsquo;s strengths align with desktop app needs. Memory safety without garbage collection. Fearless concurrency. A type system that catches entire categories of bugs at compile time. 
These aren\u0026rsquo;t just theoretical benefits — they show up in practice when you\u0026rsquo;re managing complex UI state, handling async operations, and trying to ship software that doesn\u0026rsquo;t crash.\nThe tooling works. rust-analyzer provides excellent IDE support. Cargo handles dependency management. The Flatpak SDK includes a full Rust toolchain. You can get from cargo init to a running Flatpak application without fighting your tools.\nThere are tradeoffs. Rust has a steeper learning curve than Python, and the GObject subclassing pattern in Rust adds a layer of complexity that doesn\u0026rsquo;t exist in Python or Vala. Compile times are slower. Iteration speed — especially building inside Flatpak — is meaningfully worse than Python\u0026rsquo;s edit-and-run loop. These are real costs, and I won\u0026rsquo;t pretend otherwise. But the learning curve is a curve, not a wall, and this series will walk through every part of it.\nSetting up the development environment\nBefore you write a single line of Rust or GTK code, you need three things: GNOME Builder, the Flatpak SDK, and a working understanding of why we\u0026rsquo;re using Flatpak from day one.\nWhy Flatpak from the start\nMost development guides have you install GTK libraries on your host system, write code against them, and worry about packaging later. This works — until it doesn\u0026rsquo;t. The failure mode is specific and painful: you develop against GTK 4.18 on your host, everything works, you go to package the app for Flathub six months later, and discover that the GNOME SDK version you need to target ships GTK 4.16, and three APIs you depend on don\u0026rsquo;t exist.\nBuilding inside Flatpak from the beginning eliminates this entire class of problem. Your development environment matches your deployment environment. The libraries you link against are the libraries your users will have. The dependencies are explicit and reproducible. 
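In practice, this is what a flatpak-builder manifest pins down in a single file in your repository. As a rough sketch only: the app id, module name, and command below are hypothetical placeholders, and the runtime versions are simply the ones current at time of writing.

```yaml
# Hypothetical flatpak-builder manifest sketch; ids, names, and versions are assumptions.
app-id: org.example.Moments
runtime: org.gnome.Platform
runtime-version: '50'
sdk: org.gnome.Sdk
sdk-extensions:
  - org.freedesktop.Sdk.Extension.rust-stable
command: moments
build-options:
  # Make the Rust extension's toolchain visible during the build
  append-path: /usr/lib/sdk/rust-stable/bin
finish-args:
  - --socket=wayland
  - --socket=fallback-x11
  - --device=dri
modules:
  - name: moments
    buildsystem: meson
    sources:
      - type: dir
        path: .
```

Everything the build needs, from runtime and SDK to toolchain extension and sources, is declared here, which is exactly what makes the environment reproducible.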
When you submit to Flathub, there are no surprises.\nIt\u0026rsquo;s slightly more friction up front. It\u0026rsquo;s dramatically less friction in total.\nInstall Flatpak and Flathub\nIf you\u0026rsquo;re running Fedora, Flatpak is already configured. For other distributions:\n# Install Flatpak (if not present)\nsudo apt install flatpak   # Debian/Ubuntu\nsudo pacman -S flatpak     # Arch\n# Add the Flathub remote\nflatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo\nInstall the GNOME SDK\nThe GNOME SDK provides the full development environment — GTK4, libadwaita, and GLib — inside a Flatpak runtime. You need both the runtime and the SDK, plus the extension that supplies the Rust toolchain:\n# Install the GNOME 50 SDK and runtime\nflatpak install flathub org.gnome.Sdk//50 org.gnome.Platform//50\n# Install the Rust extension for the SDK\nflatpak install flathub org.freedesktop.Sdk.Extension.rust-stable//25.08\nA note on version numbers: the GNOME SDK version (50, at time of writing) corresponds to the GNOME release. The Rust extension version (25.08) corresponds to the freedesktop SDK version that the GNOME SDK is built on. These version numbers will change — check the Flathub runtime page for current versions.\nInstall GNOME Builder\nGNOME Builder is the IDE purpose-built for GNOME development. It understands Meson, Flatpak manifests, and the GNOME SDK. It can build your application inside the Flatpak sandbox, run it, and provide code intelligence — all without you manually configuring build environments.\nflatpak install flathub org.gnome.Builder\nBuilder isn\u0026rsquo;t the only option. You can use VS Code, Neovim, or any editor you prefer — I\u0026rsquo;ll cover that setup in a later post on developer experience. 
But Builder eliminates the most configuration friction for getting started, and it\u0026rsquo;s what I\u0026rsquo;d recommend for your first project.\nVerify the toolchain\nOpen a terminal inside the Flatpak SDK environment to confirm everything is in place:\n# Enter the SDK environment\nflatpak run --command=bash org.gnome.Sdk//50\n# Inside the SDK shell:\nrustc --version\ncargo --version\npkg-config --libs gtk4\npkg-config --libs libadwaita-1\nYou should see a recent stable Rust version and successful pkg-config output for both GTK4 and libadwaita. If any of these fail, the SDK or Rust extension didn\u0026rsquo;t install correctly — reinstall them before continuing.\nA note on host development\nYou can also develop on your host system by installing the GTK4 and libadwaita development packages directly:\n# Fedora\nsudo dnf install gtk4-devel libadwaita-devel\n# Arch\nsudo pacman -S gtk4 libadwaita\n# Ubuntu/Debian (may lag behind on versions)\nsudo apt install libgtk-4-dev libadwaita-1-dev\nThis gives you faster compile times and a tighter edit-compile-run loop, which matters during active development. Many Rust/GTK developers work this way day-to-day, using host-installed libraries for rapid iteration and Flatpak builds for testing and release.\nThe risk is version drift between your host libraries and the Flatpak SDK. If you go this route, check which GTK4 and libadwaita versions your host provides and compare them to the GNOME SDK you\u0026rsquo;re targeting. As long as your host version is equal to or newer than the SDK version, your host can build everything the SDK offers — you just have to take care not to reach for APIs newer than the SDK\u0026rsquo;s. If it\u0026rsquo;s older, you\u0026rsquo;ll hit missing APIs.\nMy recommendation: start with Flatpak-only builds until you\u0026rsquo;ve got a working app, then add host builds as an optimisation once you understand the version boundaries.\nThe tools you will be using\nBefore we start writing code in Part 2, here\u0026rsquo;s a brief orientation to the tools that\u0026rsquo;ll appear throughout this series. 
You don\u0026rsquo;t need deep knowledge of any of them yet — just awareness that they exist and what role they play.\nMeson — The build system. GNOME apps use Meson, not Cargo, as the top-level build system. Cargo still builds your Rust code, but Meson orchestrates everything else: compiling GResources, installing desktop files, generating application metadata, and invoking Cargo as part of the build. Part 3 covers Meson in detail.\nBlueprint — A markup language for defining GTK user interfaces. Blueprint compiles to the XML that GTK\u0026rsquo;s GtkBuilder expects, but it\u0026rsquo;s dramatically more readable. You can also build your entire UI in Rust code without Blueprint — we\u0026rsquo;ll cover both approaches.\nGResource — GNOME\u0026rsquo;s system for bundling assets (UI templates, icons, CSS) into your application binary. Instead of loading files from disk at runtime, GResources get compiled into the binary and accessed by path. It\u0026rsquo;s how GNOME apps ship self-contained.\nAppstream — The metadata standard that app stores (including Flathub) use to display your app\u0026rsquo;s name, description, screenshots, and release notes. You\u0026rsquo;ll need to get this right for Flathub acceptance — Part 9 covers it.\nWorkbench — A playground app for experimenting with GTK widgets, CSS, and Blueprint templates. If you want to test a UI idea without rebuilding your whole app, Workbench is where you do it. Grab it from Flathub.\nWhat we\u0026rsquo;re building\nPart 2 opens by introducing the app we\u0026rsquo;ll build throughout this series. It won\u0026rsquo;t be a toy counter or a TODO list — it\u0026rsquo;ll be something with enough complexity to hit real architectural decisions, real packaging challenges, and real Flathub review feedback.\nEvery post will be self-contained enough that you can follow along building your own app. 
The problems we\u0026rsquo;ll solve — state management, async operations, data persistence, adaptive layouts, packaging — are universal to any non-trivial GTK application. We\u0026rsquo;re documenting the actual experience of taking a GNOME app from zero to Circle, including the parts that aren\u0026rsquo;t in any documentation.\nWhat comes next\nBefore we build a window, we need to understand the type system underneath it. Every widget, every signal, every property binding in GTK is built on GObject — and the way GObject maps to Rust is the single biggest conceptual hurdle for Rust developers coming from other ecosystems.\nIn Part 2, we\u0026rsquo;ll build a simple data model from scratch and use it to understand GObject\u0026rsquo;s inner/outer type pattern, properties, signals, and property bindings. No GTK widgets yet — just the foundation that makes everything else click once we start building the actual app in Part 3.\n","permalink":"https://e156e727.fromthearchitect-dev.pages.dev/posts/gnome-rust-part-1-getting-started/","summary":"\u003cp\u003e\u003cem\u003eThis is Part 1 of a series that takes a GNOME application from an empty directory to acceptance into GNOME Circle. Each post is self-contained, but the series follows a single arc — and a real app — through every stage of the journey.\u003c/em\u003e\u003c/p\u003e\n\u003chr\u003e\n\u003ch2 id=\"why-gnome\"\u003eWhy GNOME\u003c/h2\u003e\n\u003cp\u003eIf you\u0026rsquo;re building a desktop Linux application in 2026, you\u0026rsquo;ve got choices. KDE Plasma has Kirigami. Elementary has Granite. 
You can reach for Electron, Tauri, or a dozen other cross-platform toolkits and call it a day.\u003c/p\u003e","title":"Building GNOME Apps with Rust, Part 1: Getting Started"},{"content":"What Extreme Programming taught us about collaboration — and why it maps perfectly onto working with LLMs\nThe current conversation is missing something\nIf you spend any time in developer circles right now, the conversation about AI coding tools tends to collapse into one of two camps. The first camp believes we are months away from AI replacing software developers entirely — that the role of the human is already vestigial, a temporary inconvenience on the road to full automation. The second camp pushes back hard, arguing that AI is little more than a sophisticated autocomplete — useful for boilerplate, dangerous if trusted, and nowhere near capable of producing anything a senior engineer couldn\u0026rsquo;t do faster with a clear head and a good keyboard shortcut.\nBoth positions are wrong. Not subtly wrong — fundamentally wrong. And the gap between them is where something genuinely interesting is happening.\nThe developers who are getting the most out of AI coding tools are not the ones treating the AI as an oracle to be prompted, nor the ones dismissing it as a toy. They are the ones who have stumbled onto — or deliberately adopted — a different mental model entirely. One that has a name, a history, and a body of practice behind it. One that the software industry actually worked out decades ago, in a different context, for different reasons.\nIt\u0026rsquo;s called pair programming. And it turns out it maps onto working with an LLM almost perfectly.\nA brief primer on XP pair programming\nExtreme Programming — XP — emerged in the late 1990s as a reaction to the bloated, process-heavy software development methodologies of the time. It was opinionated, practical, and in many ways ahead of its time. 
Among its practices, pair programming stood out as the one that generated the most scepticism from people who had never tried it, and the most loyalty from people who had.\nThe premise is simple: two developers, one keyboard, one screen. But the premise is also misleading, because it implies the value is in the typing — that you are getting two people\u0026rsquo;s worth of keystrokes for the price of one. That is not what pair programming is about. In fact, that framing completely misses the point.\nXP described two roles in a pairing session. The driver holds the keyboard and writes the code. The navigator holds the broader picture — watching for mistakes, thinking about what comes next, asking the question the driver is too focused to ask. Neither role is superior. Neither is passive. The navigator is not merely watching. The navigator is thinking, out loud, in dialogue with the driver. The value is in that continuous conversation — the shared context that builds between two people working through a problem together.\nWhat pair programming actually produces is a tighter feedback loop than solo development. Mistakes get caught earlier. Dead ends get identified faster. The solution that emerges has been stress-tested in real time by two different minds approaching the problem from slightly different angles. The code is better not because two people wrote it, but because two people thought about it simultaneously.\nThis is why studies on pair programming consistently show that while it does take more developer hours per feature, it produces significantly fewer defects and requires less rework. The investment pays for itself. The conversation is the work.\nThe AI as pair programmer\nHere is where the analogy earns its keep.\nWhen most developers describe their workflow with an AI coding tool, they describe something that sounds like issuing instructions. They have a task. They describe it to the AI. The AI produces output. 
They evaluate the output, accept it or reject it, and move on. The human is a reviewer. The AI is a very fast junior developer who never gets tired and never takes offence.\nThat model works. It produces results. But it is leaving an enormous amount of value on the table.\nThe developers getting disproportionate results from AI tools are doing something subtly but importantly different. They are not issuing instructions — they are having a conversation. They bring the problem, not the solution. They share context before asking for code. They push back when something feels wrong, even if they cannot immediately articulate why. They ask the AI to explain its reasoning, then challenge that reasoning. They treat the session as a dialogue, not a transaction.\nIn XP terms, they are navigating. The AI is driving.\nThis is not a metaphor. The navigator role in pair programming is precisely about holding the bigger picture while the driver handles execution. It means knowing where you are trying to go, recognising when the current path is taking you somewhere else, and having the conversation that corrects course before you have written a thousand lines in the wrong direction. That is exactly what a skilled human brings to a session with an AI coding tool.\nThe AI, for its part, brings things that complement the navigator role almost perfectly. Breadth of knowledge across languages, frameworks, and patterns that no single human could match. Speed of execution that removes the friction between idea and implementation. Tireless willingness to explore alternatives, rewrite sections, and try a different approach without frustration or ego. And crucially — no stake in being right. An AI does not defend its previous output. Ask it to reconsider and it will.\nWhat neither party brings alone is sufficient. 
The AI without a thoughtful navigator produces technically correct output that solves the wrong problem, or solves the right problem in a way that does not fit the broader system, or makes decisions that are locally sensible and globally incoherent. The human without the AI\u0026rsquo;s execution speed and breadth spends too much time in the details, loses the thread of the bigger picture, and runs out of energy before the interesting problems get solved.\nTogether, the feedback loop tightens in exactly the way XP pair programming described. The conversation is still the work. It has just moved to a different medium.\nWhat the human loop actually looks like\nIt is easy to describe the partnership in the abstract. It is more useful to describe what it actually looks like in practice — the specific moments where the human contribution is irreplaceable, and where the temptation to disengage is strongest.\nDefining the problem before touching the keyboard\nThe single highest-leverage thing a human brings to an AI pairing session is a clear, well-considered problem definition. Not a feature request. Not a task. A genuine understanding of what you are trying to achieve and why, what constraints matter, and what success looks like.\nThis sounds obvious. It is surprisingly rare. The temptation with fast AI tools is to start immediately — to get something on screen quickly and iterate from there. Sometimes that works. More often, the cost of an underspecified problem shows up three hours later when you have a working implementation of the wrong thing.\nThe navigator\u0026rsquo;s first job is to think before the driver starts moving. Spend time on the problem. Write it down. Share it with the AI not as a prompt but as a briefing. The quality of everything that follows is shaped by the quality of this moment.\nKnowing when to push back\nAI coding tools are confident. They produce output that looks authoritative, is well-structured, and compiles. 
This is both their greatest strength and their most significant risk. Technically correct output that solves the wrong problem is harder to catch than broken code, because nothing obviously fails.\nThe human navigator\u0026rsquo;s most important skill is the ability to look at plausible output and ask whether it is actually right — not syntactically, but conceptually. Does this approach fit the broader architecture? Does this abstraction hold up under the cases we haven\u0026rsquo;t discussed yet? Is this solving the problem we defined, or a simpler adjacent problem that is easier to solve?\nPushing back does not require being certain the AI is wrong. It requires being willing to have the conversation. Ask it to explain its reasoning. Ask whether there is an alternative approach. Ask what the tradeoffs are. The AI will engage with these questions genuinely, and more often than not the dialogue surfaces something important that the initial output missed.\nBringing taste the AI doesn\u0026rsquo;t have\nThere are decisions in software development that are not technical. They are aesthetic, strategic, or deeply contextual — shaped by knowing your users, your constraints, your history, and your values. These decisions do not have correct answers that can be derived from training data.\nWhat belongs in version one and what gets deferred? Which abstraction is clean enough to be worth the indirection? Does this interaction pattern feel right for the people who will use it? Is this the kind of code a contributor joining the project in six months will be able to understand?\nThese are navigator questions. The AI can inform them, offer perspectives, and flag tradeoffs — but it cannot answer them. The human is not in the loop for these decisions because the process requires it. The human is in the loop because the human is the only one who actually knows.\nRecognising when the output is wrong\nThis is the hardest skill to describe and the most important to develop. 
It is the ability to read AI-generated output and feel that something is off — before you can articulate why. Before the tests fail. Before the architecture review. Before the bug report.\nIt is, in essence, experience. The same pattern recognition that a senior engineer develops over years of reading code, debugging systems, and watching abstractions fail in production. AI tools do not compress this. They make it more valuable, because the volume of output has increased while the need for judgment has not decreased.\nThe human who can generate a working implementation in an afternoon and also recognise which parts of it will hurt them in three months is in a fundamentally different position than the one who can only do the first.\nSteering, not accepting\nPerhaps the simplest way to describe the human loop is this: your job is not to evaluate what the AI gives you. Your job is to steer toward what you actually need.\nEvaluation is passive. Steering is active. It means coming to the session with a direction in mind, holding that direction as the work progresses, and continuously asking whether the current path is still heading the right way. It means being willing to say \u0026ldquo;this is good, but it\u0026rsquo;s not quite right\u0026rdquo; and continuing the conversation until it is. It means treating the first output as the beginning of a dialogue, not the end of one.\nThe developers who get the most out of AI tools are not the ones who are best at prompting. They are the ones who are best at knowing what they want — and staying in the conversation until they get it.\nWhat this changes for independent open source development\nOpen source software has always had a talent distribution problem. The ideas are abundant. The developers who want to build are abundant. The time to build is not.\nA motivated independent developer working evenings and weekends on a project they care about has historically been constrained not by ambition or skill, but by hours. 
A genuinely useful, well-architected application — something with multiple backends, proper data persistence, a thoughtful UI, comprehensive documentation, Flatpak packaging, CI/CD, and the hundred other things that separate a hobby project from something people can actually rely on — has traditionally taken years of sustained effort from a solo developer. Many projects never get there. They stall at the interesting-but-incomplete stage, maintained inconsistently, never quite reaching the quality bar that would attract users or contributors.\nThat constraint is loosening.\nThe AI pair programming model compresses the distance between idea and implementation in a way that changes the economics of independent open source development fundamentally. Not because the AI does the work — but because the conversation-driven development loop eliminates the specific kinds of friction that cause projects to stall. The boilerplate that nobody wants to write. The documentation that always gets deferred. The architectural decision that requires holding too many things in your head simultaneously. The test scaffolding that feels important but not urgent. These are exactly the tasks where AI assistance is most effective and most reliable.\nWhat remains — and what the human navigator must still bring — is everything that cannot be generated. The decision about which problem is worth solving. The product sense that shapes a feature into something users will actually understand. The judgment that says this abstraction is clean and that one will hurt you in six months. The taste that knows what polished looks like, because you have used enough polished software to have internalised the standard.\nThis has an important implication. As the execution barrier drops, the bottleneck shifts. The scarce resource in open source software development is no longer time — it is judgment. 
Developers who bring genuine domain knowledge, strong product instincts, and the ability to recognise quality will produce disproportionate results. Developers who treat AI tools as a shortcut around thinking will produce more output, faster, with the same fundamental limitations they had before.\nThe other shift is in portfolio. The \u0026ldquo;one developer, one project\u0026rdquo; pattern that has characterised most independent open source work is giving way to something different. A developer with strong judgment and a productive AI partnership can now maintain multiple substantial projects simultaneously — not by spreading themselves thin, but by changing the nature of the work. The parts of software maintenance that consumed the most time — implementing well-understood features, writing documentation, managing boilerplate, scaffolding tests — are no longer the limiting factor. What remains is the interesting work. The work that required a human anyway.\nFor ecosystems like GNOME, where the quality bar for inclusion is genuinely high and the community of active developers is relatively small, this could be transformative. The gap between \u0026ldquo;interesting idea\u0026rdquo; and \u0026ldquo;production quality app\u0026rdquo; has historically been where most projects died. That gap is narrowing. The question is whether the developers entering the ecosystem with AI-assisted workflows bring the judgment to match the pace — and whether the ecosystem\u0026rsquo;s review and mentorship structures can scale to meet the increased volume of serious submissions that will follow.\nThe opportunity is real. So is the responsibility that comes with it.\nThe risks worth naming honestly\nAny argument for a new way of working that does not acknowledge its failure modes is not a balanced argument — it is advocacy. The AI pair programming model is genuinely powerful. 
It is also genuinely risky, in specific ways that are worth naming clearly.\nThe flood of mediocre output\nThe same forces that allow a thoughtful developer to ship a production-quality application in days also allow a less thoughtful developer to ship something that looks like a production-quality application in days. The difference is not always visible on the surface. The code compiles. The UI renders. The README is comprehensive. The architecture document exists.\nWhat may be missing is the judgment that shaped the decisions underneath. An abstraction that seemed clean to the AI but will not survive contact with real usage. A data model that works for the happy path and fails at the edges. A feature set that was easy to generate but does not reflect what users actually need.\nOpen source ecosystems that rely on community review as their quality filter — Flathub, GNOME Circle, and others — will face increased volume as the execution barrier drops. The risk is not that reviewers will be fooled by AI-generated mediocrity. Experienced reviewers are good at finding the problems underneath a polished surface. The risk is that the volume of submissions outpaces the community\u0026rsquo;s capacity to review them thoughtfully, and that the filter becomes less effective simply because it is overwhelmed.\nThis is not a reason to avoid AI-assisted development. It is a reason for the ecosystem to think ahead about how its quality gates scale.\nUnderstanding what you have built\nThere is a specific failure mode in AI-assisted development that has no real equivalent in traditional solo development. It is possible to arrive at a working implementation without fully understanding it. The code is correct. The tests pass. The feature works. 
But the developer who accepted the output without interrogating it cannot explain why certain decisions were made, cannot predict how the system will behave under conditions that were not discussed, and cannot confidently modify it when requirements change.\nThis is not the AI\u0026rsquo;s failure. It is the navigator\u0026rsquo;s failure — a failure to stay in the conversation long enough to genuinely understand what was built and why. The fix is not to distrust AI output. It is to hold yourself to the same standard of understanding you would apply to code you wrote yourself. If you cannot explain a decision, ask until you can. If an abstraction feels opaque, explore it. The AI will not tire of the conversation. Use that.\nThe expertise illusion\nAI tools are fluent. They produce confident, well-structured output across an enormous range of domains. This fluency can create the impression of expertise where expertise does not exist — in the AI\u0026rsquo;s output and, more dangerously, in the developer\u0026rsquo;s self-assessment.\nA developer who has shipped several AI-assisted projects may have genuine expertise in the problems those projects solved — or they may have accumulated a portfolio of working code without accumulating the underlying understanding that expertise actually represents. The distinction matters when things go wrong. When a system behaves unexpectedly in production. When a security issue emerges in a dependency. When the architecture needs to change in a fundamental way. These are the moments that separate the developer who understands their system from the one who generated it.\nThe partnership model described in this post is specifically designed to develop genuine understanding alongside working software. The navigator who asks why, pushes back on decisions, and steers the conversation toward clarity is building expertise as they build the system. 
The developer who accepts output uncritically is not.\nThe temptation to move on\nFast tools create an appetite for speed. When you can scaffold a feature in an hour that would have taken a day, the temptation is to do ten features instead of one — to keep moving, keep building, keep generating. This is a real risk.\nThe parts of software development that AI does not accelerate — thinking carefully about the problem, sitting with an architectural decision before committing to it, getting feedback from real users before adding the next feature — are the parts that tend to get skipped when the rest of the loop feels fast. The navigator\u0026rsquo;s discipline is not just about what to build. It is about when to stop building and think.\nPace is a tool. Used well, it lets you reach a quality threshold faster than was previously possible. Used poorly, it lets you reach the wrong destination faster than was previously possible.\nThe invitation\nThere is a version of working with AI coding tools that is transactional. You have a task. You describe it. You evaluate the output. You move on. It works, up to a point. It will continue to work, up to a point.\nThere is another version that is something closer to a genuine intellectual partnership. You bring the problem, the context, the taste, and the judgment. The AI brings the breadth, the speed, and the tireless willingness to explore. Together you have a conversation — the kind of conversation that pair programming has always held up as the ideal — and the work that emerges from that conversation is better than either party could produce alone.\nThe shift between these two versions is not about tools. It is not about prompting techniques or context window sizes or which model you are using. It is about how you show up to the session. Whether you come with a direction or just a task. Whether you interrogate the output or accept it. 
Whether you are willing to stay in the conversation until you genuinely understand what has been built and why.\nThe XP community figured out decades ago that the most productive unit in software development is not the individual developer working alone — it is two people thinking together. That insight did not age. It just found a new form.\nStop prompting. Start partnering. The results will surprise you.\n","permalink":"https://e156e727.fromthearchitect-dev.pages.dev/posts/ai_pair_programmer_post/","summary":"\u003cp\u003e\u003cem\u003eWhat Extreme Programming taught us about collaboration — and why it maps perfectly onto working with LLMs\u003c/em\u003e\u003c/p\u003e\n\u003ch2 id=\"the-current-conversation-is-missing-something\"\u003eThe current conversation is missing something\u003c/h2\u003e\n\u003cp\u003eIf you spend any time in developer circles right now, the conversation about AI coding tools tends to collapse into one of two camps. The first camp believes we are months away from AI replacing software developers entirely — that the role of the human is already vestigial, a temporary inconvenience on the road to full automation. The second camp pushes back hard, arguing that AI is little more than a sophisticated autocomplete — useful for boilerplate, dangerous if trusted, and nowhere near capable of producing anything a senior engineer couldn\u0026rsquo;t do faster with a clear head and a good keyboard shortcut.\u003c/p\u003e","title":"The AI Pair Programmer: Why the Human Loop Is About Partnership, Not Review"}]