March 28, 2026
Programmer vs. Computer Scientist

Computer science is not the same thing as programming. Confusing the two is leading a lot of people to the wrong conclusion about AI.
Right now, there is a growing narrative that because AI can write code, a computer science degree is no longer worth it. Students are reconsidering the major. Parents are questioning the investment.
I think that conclusion is wrong. It confuses a tool with a discipline.
Programming is the act of writing instructions for a computer. It is a craft. It matters. And yes, AI is getting very good at parts of it.
Computer science is something deeper. It is the study of computation, algorithms, systems, complexity, information, security, and what is actually possible to build and operate at scale.
Computer scientists ask questions like: Which problems are fundamentally solvable? How do you get thousands of machines to agree on the state of the world? What are the theoretical limits of data compression?
There is a line often attributed to Edsger Dijkstra that captures this well: "Computer science is no more about computers than astronomy is about telescopes."
Programming has always been a tool computer scientists use. It was never the discipline itself. And now that tool is evolving. LLMs can generate code. They can autocomplete functions. They can scaffold applications. But they do not reliably judge whether the architecture is sound, whether the algorithm will scale, or whether the system will hold up under real-world conditions.
When calculators entered classrooms, many educators and parents feared they would erode mathematical reasoning. Instead, calculators helped push math education deeper into problem solving, modeling, and abstraction.
When CAD software arrived, no one seriously argued that engineers no longer needed to understand structural mechanics or thermodynamics. The tool made them more productive. The foundational knowledge made the tool useful.
LLMs are following the same pattern. They are a more powerful tool for building software. But they do not replace the need to understand what you are building, why you are building it, or whether it will work at scale.
LLM-generated code can pass small tests and still fail under real conditions. It can omit resilience mechanisms like retries, timeouts, and circuit breakers. And most dangerously, it can appear to work while silently producing wrong results.
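To make "resilience mechanisms" concrete, here is a minimal sketch of one of them: a retry wrapper with exponential backoff, in plain Python. The names and parameters are illustrative, not from any particular codebase, and a production version would layer on jitter, per-call timeouts, and a circuit breaker that stops hammering a dependency that keeps failing.

```python
import time

def call_with_retries(fn, attempts=3, backoff=0.5):
    # Retry a flaky call with exponential backoff between attempts.
    # Illustrative only: real systems also add jitter, a timeout
    # budget per call, and a circuit breaker around the dependency.
    last_err = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            if attempt < attempts - 1:
                # 0.5s, 1s, 2s, ... between successive attempts
                time.sleep(backoff * (2 ** attempt))
    raise last_err
```

The point is not the dozen lines themselves but knowing they need to exist: generated code that omits this wrapper will pass every test that runs against a healthy dependency.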
Someone has to evaluate that output. Someone has to understand distributed consensus well enough to know the design will not hold up. Someone has to recognize that an O(n^2) algorithm will grind to a halt when it moves from 10 test items to 10 million real ones. Someone has to know what questions to ask in the first place.
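The scaling claim is just arithmetic, and it is worth doing once. A back-of-the-envelope sketch (operation counts, not a benchmark):

```python
import math

def quadratic_ops(n):
    # An O(n^2) algorithm performs on the order of n * n basic operations.
    return n * n

def linearithmic_ops(n):
    # An O(n log n) alternative performs roughly n * log2(n).
    return int(n * math.log2(n))

# At 10 test items, the two are indistinguishable: 100 vs ~33 operations.
# At 10 million real items, O(n^2) needs ~100 trillion operations,
# over 400,000 times more than the O(n log n) approach.
ratio = quadratic_ops(10_000_000) // linearithmic_ops(10_000_000)
```

Nothing in a passing unit test on 10 items hints at that cliff; recognizing it requires knowing what to compute.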
That someone is a computer scientist.
The scarce skill now is not writing a sorting algorithm from scratch. It is looking at an AI-generated sorting algorithm and immediately seeing what is wrong with it. That requires more expertise, not less.
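A hypothetical example of the kind of bug this means (constructed for illustration, not taken from any real model output): a quicksort that looks correct, passes a small happy-path test, and silently loses data.

```python
def ai_sort(items):
    # A plausible-looking generated quicksort with a subtle bug:
    # it partitions on "<" and ">" only, so any element equal to
    # the pivot (other than the pivot itself) is silently dropped.
    if len(items) <= 1:
        return items
    pivot = items[0]
    left = [x for x in items if x < pivot]
    right = [x for x in items if x > pivot]
    return ai_sort(left) + [pivot] + ai_sort(right)

# Passes the obvious small test:
assert ai_sort([3, 1, 2]) == [1, 2, 3]

# But give it a duplicate and an element vanishes -- the output is
# sorted, nothing crashes, and the result is simply wrong:
assert ai_sort([3, 1, 3, 2]) == [1, 2, 3]  # input had four elements
```

Nothing here throws an error. The failure mode is exactly the dangerous one from above: code that appears to work while silently producing wrong results, visible only to someone who knows to check that a sort preserves its input's length.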
Take streaming services as an example. When you are serving hundreds of millions of customers worldwide, the core challenges are not about writing code. They are about designing systems that work reliably under real-world conditions.
How do you architect a recommendation system that drives a large share of viewing? That is machine learning, linear algebra, and distributed computing. How do you transcode and deliver video across a massive global device footprint with minimal latency? That is information theory and compression. How do you protect intellectual property worth billions? That is applied cryptography and security architecture. How do you make every app release faster and more stable than the last? That is systems engineering and performance analysis.
None of that gets easier because an LLM can write a function. If anything, the bar goes up. When routine code writes itself, the value shifts to the people who understand the deeper principles.
At the same time, traditional computer and information science enrollments have started to soften, even as adjacent technical fields continue to grow. That makes this moment easy to misread. The need for deeper technical thinking is not shrinking. It is becoming more differentiated.
The students who stay the course, who invest deeply in understanding algorithms, systems, theory, and architecture, are likely to enter a market with less crowding and strong demand. That is a rare combination.
The question was never, "Should I learn to program?" It was always, "Should I learn how to reason about systems, scale, and tradeoffs?"
Programming was one expression of that thinking. LLMs are another. The thinking itself does not become less valuable because we have better tools. The ability to decompose complex problems, reason about tradeoffs, design systems that work at scale, and evaluate whether a solution is correct becomes the whole game.
If you are a student considering computer science, lean in. If you are a parent worried about the investment, understand what you are actually investing in. You are not paying for someone to teach your child to write code. You are investing in a way of thinking that will be even more valuable in the age of AI.
There has never been a better time to be a computer scientist. The tool just got an upgrade. The discipline matters more than ever.
The views expressed here are my own and do not reflect the official position of my employer or any affiliated organization.