Manifesto
As AI becomes a standard part of how software gets built, the question of how to use it well is still largely open. We're exploring what it looks like to build with AI in ways that preserve developer understanding, and whether speed and genuine competence can grow together.
In two years, AI has gone from autocomplete to generating multi-file features, drafting architecture documents, writing test plans, and making decisions that used to belong entirely to engineers. The gap between "AI helps me work" and "AI makes decisions I review" is collapsing faster than we can evaluate what we're trading away. Tools have become more capable, and that's genuinely useful. But somewhere in the race toward greater capability, a question got lost: what kind of developers do we become depending on how we use these tools? Answering it, and deciding whether speed and understanding can coexist, starts with being clear about what matters.
Developers have always used abstractions we don't fully understand. We work with compilers, databases, and frameworks whose internals remain opaque. This is fine because we understand what these abstractions do, why they work, and how to debug them when they don't. AI that hides decision-making is different. When you can't explain why a solution solves the problem this way rather than another, you don't understand it well enough to maintain it. The risk isn't that AI writes code or drafts a document or proposes an architecture. The risk is that it makes choices you can't reconstruct, embedding assumptions you can't identify. Regardless of the task, if you don't understand the output well enough to evaluate it without AI assistance, you're accumulating a kind of debt: technical, cognitive, and professional.
But understanding what AI produces isn't enough. We also need to preserve the kind of work that builds understanding in the first place. Some tasks are worth automating: code formatting, boilerplate you've written a hundred times, transforming completed work into reviewable chunks. But the line isn't always clean: writing documentation can force you to notice where your mental model is fuzzy, and refactoring that feels mechanical can build pattern recognition you didn't know you were developing. The question isn't which category a task falls into, but whether doing it yourself still teaches you something. Work that forces you to think through the problem space differently — architectural decisions, test design, problem decomposition, critical review — is where you should remain in the driver's seat. AI can participate, but the thinking shouldn't be delegated. Not because AI can't assist, but because the friction in that work is often the point.
Preserving understanding and judgment requires control over how AI participates in your workflow. AI should be transparent, composable, and opt-in. You choose when and how it participates. Black-box solutions that "just work" but can't be inspected, questioned, or modified miss the point. You should be able to trace what a tool did and why, combine it with other tools in ways the original designers didn't anticipate, and choose not to use it for tasks where human judgment matters more than speed. Tools should augment your capability, not replace your agency.
These principles only matter if we act on them. So we build open-source tools that automate mechanical work while preserving understanding and control. We conduct research on AI's impact through structured experiments and user studies, measuring real workflows and real outcomes, not just velocity metrics. We create space for critical discussion beyond hype and doom, examining actual practices and building collective understanding of how these tools change our work. The industry adopts practices faster than it evaluates their consequences, not out of bad faith, but because the tools are compelling and the pressure to ship is real. "Does this work?" is necessary but insufficient. We also ask: what are we gaining, what are we trading away, and what does this change about how we think, how we learn, how we grow as engineers?
This is a community-driven project. The code, the research agenda, and the conversation are shaped collectively. Propose new tools, challenge our assumptions, contribute to research, share your experience. If your ideas align with these principles, they belong here.
The future of development with AI isn't predetermined. We're building it deliberately, one tool and one conversation at a time.
