Recently, I have read some articles and seen product announcements based on the idea that we need to make programming easier. Making it easier is supposed to get more people programming, and the underlying assumption is that programming can be easy enough that everyone can do it. I have also talked to computer science undergraduate students who asked me, in essence, how many years at university you should do, and what you actually gain in practice from a university degree in computer science. When I think about it, these two discussions really come down to the same questions: Should everyone be a programmer? Can everyone be a programmer? What does it mean to be a programmer? How do you best learn to program?
Personally, I am rather skeptical as to the ecosystem relevance of the “make programming easy enough for anyone” tools. I do not doubt that they can make throwing some bits of code together easier, but is that really something that is useful in a real-world setting?
The Carpentry Analogy
I think carpentry or woodworking offers a nice analogy to coding. It is clear that being able to do a bit of carpentry at home is useful, even if it just means putting the last few screws into a piece of IKEA furniture, or something more advanced like building a deck. Most people know how to use a hammer and a screwdriver. Add some power tools and you can do a whole lot of screwing and drilling for household needs… but that does not make most people into professional builders.
If you want a house put up that should last for decades, with safe electrical installations and proper care for water mains and heating, water-proofing of bathrooms, etc., you should hire professionals with proper skills, knowledge of building codes, certifications, et cetera (and insurance and liability). It is not a task you typically undertake on your own, since the scale of the operation and the cost of failure are much higher than in a small home improvement project.
Computers and coding are the same – there is a big difference between a hobbyist at home and a professional developer in an industrial setting. For a small piece of software to automate something at home, I think most people could learn how to do it on their own. In the end, the implementation is likely not particularly polished, and it might well make a computer scientist laugh or shudder. But it does not really matter. Good enough is good enough.
However, for software that lives on the Internet or is delivered to paying customers, a whole new level of polish, discipline, and skill is required. Source code management, secure coding, static code scans, update releases, documentation, user interface design, user support, … just like a house is a lot more complex than a table, real software is a lot more complex and requires much more skill, discipline, planning, and process than a small hack for the home.
Nobody would think it sane to ask everyone to become builders when there is a shortage of housing, and in the same way, it makes no sense to ask everyone to become a programmer because we have a shortage of IT-skilled people.
So how does one become such a professional programmer? I think that formal education is essential. While in theory it is possible to teach yourself a lot with the resources available on the Internet, I think the discipline and structure of a formal education program is necessary to truly form a good computer scientist and programmer. It is very hard to really learn something new entirely on your own, and it is very easy to miss important points if there is nobody there to point them out and force you to consider them.
Initially, I was a self-taught programmer myself. I picked up programming on a ZX Spectrum home computer as a young kid – with nothing but a thin manual and some collections of example code snippets to guide me (long before the Internet was a thing). While trying to program games for the Spectrum, I “discovered” concepts like loop unrolling, computed jumps, and self-modifying code without knowing what they were called or that they all were well-known techniques. I had no idea about state machines, floating point numbers, or algorithmic complexity. A few years later, I taught myself object-oriented programming on a Macintosh SE using an MPW-based Pascal compiler.
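For readers unfamiliar with loop unrolling, one of the techniques mentioned above, here is a minimal sketch in Python. The function names are illustrative; on a machine like the Spectrum the point was to repeat the loop body in assembly so that fewer iterations (and thus fewer jump and counter instructions) were needed per element.

```python
def sum_simple(xs):
    # Straightforward loop: one element handled per iteration.
    total = 0
    for x in xs:
        total += x
    return total

def sum_unrolled(xs):
    # Unrolled by a factor of four: each iteration handles four
    # elements, paying the loop overhead only once per group.
    total = 0
    n = len(xs)
    i = 0
    while i + 4 <= n:
        total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]
        i += 4
    while i < n:  # handle any leftover tail elements
        total += xs[i]
        i += 1
    return total
```

In Python the interpreter overhead swamps any gain, so this is purely an illustration of the idea, not an optimization to copy.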
After this, I got into Computer Science at the university, and I learnt about the theoretical and empirical underpinnings of computing – automata theory, algorithm complexity, databases, compiler theory, formal methods – as well as getting exposed to a range of languages and language types. I don’t see how I would ever have done that on my own, neither understanding the need for such courses nor even realizing that they existed.
Thus, to me, the core benefit of a university-level education in computer science is the systematic exposure to a wide variety of concepts. There is huge value in being forced to take a look at many different aspects of computers and computer science. In a good CS program, students should do things like build a compiler, code in Haskell, fiddle with a first-line interrupt handler in an OS, do a bit of machine learning, and also prove the correctness of some simple algorithms. This is what makes a well-rounded programmer. The focus should be on concepts, ideas, and fundamental properties. Not on coding, syntax, or specific languages currently popular in industry.
Programming and Coding
The fundamental conceptual problem with many of the “let’s make programming easy” approaches that I see is the confusion between coding and programming.
Programming is about understanding problems and building algorithms and architectures to solve them. Block-based and simplified-syntax systems help users by making code less arcane and easier to write. Proposed approaches like using machine learning to “fill in the blanks” in programs also focus on fleshing out the code. However, they do not address the core skill in programming: creating appropriate, architecturally sound solutions to specific problems. The big picture is the important bit, not how you write the code.
Coding is the least interesting part of programming, and it should rarely be a problem for a professional programmer, even when exposed to a totally new language. It definitely helps to know a language and a set of libraries well in order to quickly write code, but that is something that can be learnt as needed. A virtuoso coder can create works of art in code, just like a great writer delights you with their use of language.
Languages are just tools to make computers do things, and real programmers tend to create the languages they need in order to solve problems or select a language that is the best for a particular problem. Great languages (and run-time systems) can make it much easier to write robust and fast code, and the study and design of languages is a core part of computer science.
Thus, I do not see coding as a very important hurdle in practice. It can be taught and students can learn it, and new languages can be built to make it easier to express certain solutions in an easier way. I am a strong believer in domain-specific languages, and in a way the “easy programming” solutions are yet another example of a domain-specific language targeting a certain class of users.
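To make the domain-specific language idea concrete, here is a toy sketch of a DSL embedded in Python: a tiny interpreter for movement commands on a grid. All names and the command syntax are invented for illustration, not taken from any real tool.

```python
def run(program):
    """Interpret a command string like 'up 3 right 2 down 1'
    and return the final (x, y) position, starting from (0, 0)."""
    x, y = 0, 0
    moves = {"up": (0, 1), "down": (0, -1),
             "left": (-1, 0), "right": (1, 0)}
    tokens = program.split()
    # Pair each command token with the amount that follows it.
    for cmd, amount in zip(tokens[0::2], tokens[1::2]):
        dx, dy = moves[cmd]
        x += dx * int(amount)
        y += dy * int(amount)
    return (x, y)
```

The point is that a user of this little language only writes `up 3 right 2`, never Python: the domain vocabulary is the interface, and the general-purpose language underneath does the work.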
The best programming is done away from the computer and without code, using pen and paper, or a white board, and some smart colleagues. Once you know what you want to accomplish, you can select a language that makes it easier and write the code.
In summary, I think that we need to think of programming just like any other craft, trade, or profession that intersects with everyday life: it is probably good to be able to do a little bit of it at home for household needs. But don’t equate that with the professional development of industrial-strength software. Just like being able to use a screwdriver does not mean you are qualified to build a house, being able to put some blocks or lines of code together does not make you a programmer capable of building commercial-grade software.
I think it is very important to society that more people understand how software and computer systems work… but there is a huge difference between that and believing that everyone should be their own programmer.
In the end, computing is just like all other areas of human endeavor: specialization and division of labor is what drives productivity, innovation, and quality of results. That pretty much always works, regardless of context and domain.
Update: note that there is now a part 2 available here.