Programmers generally report more success coding with AI on greenfield projects. You might think this is just because new projects are smaller and easier for AI to navigate. But an equally important reason is that AI-native projects build a stronger immune system for receiving AI contributions.
Part of the AI immune system is old: source control, comments, tests, linters, observability, and continuous automation. Mature codebases naturally build stronger automated defenses as they deploy more frequently and add more contributors. AI-first codebases must build these defenses even earlier, because the volume of output per individual contributor is unusually high.
The other part of the AI immune system is brand new: rules for AI. To build a healthy AI-first codebase, you’ll need rules that teach the AI to contribute in line with the collective preferences of your human contributors.
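As a rough sketch, these rules often live in a plain-text file checked into the repository that coding agents read before making changes (an AGENTS.md or similar); the file name, paths, and contents below are illustrative, not a prescription:

```markdown
# AGENTS.md (illustrative example)

## Conventions the AI must follow
- Use the existing logger (`src/lib/log.ts`); never add ad-hoc `console.log` calls.
- Every new endpoint needs a unit test and an entry in `docs/api.md`.
- Prefer small, focused pull requests; do not reformat files you didn't otherwise touch.

## Things the AI must never do
- Commit secrets, credentials, or generated `.env` files.
- Edit database migrations that have already shipped.
```

The point is less the specific file format than that the team's tacit preferences get written down where the AI can read them on every contribution.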
At the Zo Computer Company, we believe many of the best practices we've learned from coding with AI can be applied to human-AI-computer collaboration in general.
What does the AI immune system look like when we go beyond code? Are there equivalents of linters, source control, and the rest? We're excited to find the answers.