
Announcing the Modus Project


Two years ago, AI coding tools were a novelty. Today they're writing multi-file features, proposing architecture, and making judgment calls that used to be entirely yours. That's genuinely impressive. It's also worth pausing on.

The conversation in the industry has mostly been about speed and how much faster teams ship. But that framing misses something: what happens to the developer on the other side of that equation?

Why we started Modus

Here's the thing about abstractions: we've always used them. Compilers, ORMs, cloud infrastructure... most of us couldn't implement these from scratch, and that's fine. We understand what they do well enough to use them well and debug them when they break.

AI assistance can be different. If you've ever merged a chunk of generated code that worked but that you couldn't quite explain, you've touched on the problem. It's not that the code was wrong. It's that the understanding didn't transfer. And understanding gaps have a way of compounding quietly until something breaks in production at 2am.

We're not anti-automation, far from it. Plenty of development work is mechanical and repetitive, and AI taking that off your plate is genuinely useful. But some tasks are worth doing yourself not because they're hard, but because the process of doing them is how you build the mental models that make you a better engineer. Knowing which is which, and staying honest about it, is harder than it sounds.

That's the question at the center of the Modus Project. We're a community building open-source tools, running research studies, and trying to think carefully about what this shift in how we work actually means. If you want the fuller version of what we're about, our manifesto lays it out.

George: our first tool

George is a developer analytics system built for AI-assisted coding workflows.

It lives inside VS Code and tracks how you actually use AI tools (Claude Code, GitHub Copilot, Gemini) across your codebase. Which tools, how often, on which files, and how those patterns shift over time. A dashboard surfaces a view of your workflow you've probably never had before, because nobody's really been tracking this stuff.
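To make the tracking idea concrete, here's a minimal sketch of what a usage event and its dashboard aggregation could look like. These names and shapes are illustrative assumptions, not George's actual data model:

```typescript
// Hypothetical shape of a usage event (not George's real schema).
interface UsageEvent {
  tool: "claude-code" | "copilot" | "gemini";
  file: string;
  timestamp: number; // epoch milliseconds
}

// Roll events up into per-tool, per-file counts — the kind of
// aggregate a dashboard view might surface.
function summarize(events: UsageEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    const key = `${e.tool}:${e.file}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

From aggregates like these, trends over time fall out naturally: bucket the timestamps by week and diff the counts.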

The second half is more interesting: George runs adaptive assessments to test whether you understand the AI-generated code that's made it into your project. It builds a picture of your knowledge over time and adjusts difficulty as it learns what you know. The goal isn't to grade you or make you feel bad; it's to give you an honest signal, so you can make better decisions about where to lean on AI and where to stay in the driver's seat.
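One common way to build "adjusts difficulty as it learns what you know" is an Elo-style running estimate: nudge a mastery score after each answer, and pick the next question near that score so every answer carries signal. This is a sketch of that general technique, not George's actual algorithm; all names here are hypothetical:

```typescript
// Update a running mastery estimate after one assessment answer.
// `mastery` and `questionDifficulty` live on the same arbitrary scale;
// `k` controls how fast the estimate moves (assumed constant).
function updateMastery(
  mastery: number,
  questionDifficulty: number,
  correct: boolean,
  k = 0.1
): number {
  // Logistic model: expected chance of answering correctly given the gap
  // between current mastery and question difficulty.
  const expected = 1 / (1 + Math.exp(questionDifficulty - mastery));
  // Move the estimate toward the observed outcome (1 = correct, 0 = not).
  return mastery + k * ((correct ? 1 : 0) - expected);
}

// Choose the next question at roughly the current estimate: a ~50%
// expected success rate is where an answer is most informative.
function nextDifficulty(mastery: number): number {
  return mastery;
}
```

A correct answer to a question at your estimated level pushes the estimate up a little; a miss pushes it down, and surprising outcomes (an easy miss, a hard success) move it the most.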

George is open source, built on a Rust core with a TypeScript/React layer for the VS Code integration and dashboard. You can find it at github.com/modusproject/george.

What comes next

George is the first tool, not the last. We're already working on others that follow the same principle: automate the mechanical stuff, preserve what builds real understanding. We're also designing research studies to get actual data on how AI tools change the way developers learn and grow over time.

We're doing all of it in the open, because these questions are too important to be answered by any one team's assumptions.

Get involved

Try George and tell us what you think. Propose a tool or a research question. Contribute code. Disagree with something we got wrong.

Everything is happening on our Discord. Come say hi.