Three types of AI engineering
“AI engineering” means three completely different things depending on who says it. Let’s untangle them:
Type one: building the models. This is the original “AI engineering”: the field formerly known as machine learning, data science, sometimes operations research. Deep research, deep statistics, data pipelines. These engineers build the foundational algorithms: a recommendation engine trained on purchasing patterns, a large language model consumed via API. It’s science-heavy, math-heavy, and requires understanding things most software engineers never touch.
Type two: engineering with AI as a tool. This is traditional software engineering, but with AI supercharging the craft. The systems are still deterministic: same input, same output. The engineer uses Claude Code, GitHub Copilot, or ChatGPT the way a previous generation used Stack Overflow. The skill here is judgment. Knowing when the AI’s suggestion is good and when it’s garbage. Managing a team of agents. Keeping code quality high when generating code is cheap.
Type three: building products with AI inside them. This is the new discipline. Prompt engineering, token optimization, context management, cost control. The art of embedding a stochastic system, one that gives different outputs for the same input, inside a deterministic application. You’re introducing unpredictability into a system designed for predictability. That’s a fundamentally different engineering challenge.
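To make the type-three challenge concrete, here is a minimal sketch of wrapping a stochastic component behind a deterministic contract: validate the output shape, retry on failure, and fail loudly if the model never complies. The `call_llm` function is a hypothetical stand-in, not any real API; a real client call would go in its place.

```python
import json
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call: same input,
    different outputs. A real API client would go here."""
    # Simulate stochasticity: sometimes valid JSON, sometimes free text.
    if random.random() < 0.5:
        return '{"sentiment": "positive"}'
    return "Sure! The sentiment is positive."

def classify_sentiment(text: str, max_retries: int = 5) -> str:
    """A deterministic contract over a stochastic component:
    the caller gets a validated label or an explicit error,
    never raw, unpredictable model output."""
    for _ in range(max_retries):
        raw = call_llm(f"Classify the sentiment of: {text}")
        try:
            data = json.loads(raw)
            if data.get("sentiment") in {"positive", "negative", "neutral"}:
                return data["sentiment"]
        except json.JSONDecodeError:
            pass  # malformed output: retry
    raise RuntimeError("model never produced a valid response")
```

The point of the sketch is the boundary: everything outside `classify_sentiment` can keep assuming predictability, because the unpredictability is contained, validated, and bounded by a retry budget inside it.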
