
Why people shouldn’t learn to code

Amy J. Ko · Published in Bits and Behavior · Jan 5, 2018 · 5 min read


As a scholar, I like arguing against myself. Thesis, antithesis, synthesis: the Hegelian dialectic can be one of the more productive and entertaining paths to truth.

And so, in this post, I attack the central thesis of my research: that the ability to program a computer, and the computational thinking that can come with it, is a power that must be democratized.

Why do I believe this? I believe that a severe concentration of power nearly always leads to injustice, and justice is one of my core values. That only 20 million people write the software that shapes the digital experiences of the 7.5 billion people on Earth is a concentration of power second only to global income inequality. My research aims to lower the barriers to acquiring the power to code, which I hope will more evenly distribute this power, which in turn will reduce injustice.

Agree with me? Great! But that’s no fun. And it leaves this position open to attack, with no sense of how robust it actually is. My position might even be wrong.

So let’s consider three antitheses to my thesis.

Ability is an arms race

One critique of my thesis is that the ability to code is an arms race. No matter how easy we make it to learn to code, this greater ease will only amplify the abilities of those who can already code. The privileged few who learn to code now will learn younger and faster. All of those talented new engineers who didn’t have jobs before still won’t get jobs at Google, because everyone else will be that much more talented. No matter what we do, power will remain concentrated, because the underlying social structures that protect that power will remain unchanged.

This is an instance of Kentaro Toyama’s argument about technology as an amplifier rather than a catalyst of social change. The argument is that technology of any kind, whether a learning technology, a better pedagogy, a simpler programming language, or a better developer tool, will only intensify whatever social structures exist. It’s up to us to change our behavior, our values, and ultimately, our institutions, if we want to redistribute power. More effective learning will not.

Software is evil

Another critique of my thesis is that software itself is a net loss for humanity. Communication technologies have eroded our relationships, the democratization of publishing has eroded truth, platforms have eroded innovation, and automation has eroded our livelihoods. There may be some good things that come from digitizing information and automating decisions, but on the whole, they take more than they give. We should therefore have less software, not more, and so we should have fewer people who can code, not more. Like nuclear weapons, software should be used sparingly, if at all.

This argument abounds in today’s pop culture. As dystopian sci-fi has for a century, Black Mirror is popularizing this position, portraying how even small changes in how we use software can lead to plausible and horrifying outcomes.

Software is dangerous

One of the critiques I’ve heard most is the idea that software is too powerful to be democratized. As in medicine, engineering, and law, some knowledge should be regulated, accessible only to people with appropriate training. The risk of allowing everyone to have the ability to code is that we increase harm. And perhaps we’re already seeing the result of unregulated access to the ability to code: software fails, people die. In fact, I analyzed 30 years of software failures reported in the news, finding that about once per month, the news reports at least one death, injury, or threatened access to food or shelter due to software problems. Is all of this faulty software really worth the increasingly frequent harm?

Some countries such as Canada do regulate software engineering. These efforts are often poorly implemented and premature, but not necessarily wrong in principle. We don’t want a billion people to know a little bit about heart surgery. Why would we want a billion people to know a little bit about software development?

Now, to synthesis. How can we reconcile these conflicting stances?

All three of these arguments have a kernel of truth. The small number of developers in the world really does concentrate power, and that does lead to injustices like algorithmic bias, poor software accessibility for people with disabilities, and innovations that primarily serve the privileged classes that created them. And yet software does cause harm and can be evil. It’s entirely possible that by helping more people learn to code, we’ll just end up with more people with brittle knowledge of how to create software, more bad software, and the same people in power.

The fatal flaw that puts these positions in conflict is that none of them makes explicit who will learn to code and what they will do with that knowledge. I envision a world in which the vast majority of educated people understand enough about code not to become engineers, but to advocate for justice. Some of those people will become software engineers, but they will be different, more diverse people, who represent society, unlike the engineers we have now. This larger group won’t make any more software than we would have made otherwise (and therefore won’t cause any more harm or evil than we would have had otherwise). Rather, this new majority of computationally literate citizens will be a political force that demands justice.

The need for this literacy could not be more pressing. For the next century, we will be debating net neutrality, privacy, and the regulation of automation. We will be trying to parent in the presence of social media. We will be trying to make objective journalism sustainable and desirable. We need every parent, politician, and person in power to understand what code is and what it isn’t. And we need the 20-plus million developers in the world to reflect everyone, so that the software they create serves everyone.

The other fatal flaw in all of the positions above is that they don’t make clear what “learning to code” means. What does everyone need to understand about software to be in a position to advocate objectively? It’s not necessarily knowing a programming language. It might mean knowing what programming languages are and are not capable of. It might mean understanding the intersection between computing and policy. It might mean understanding how software is engineered and who engineers it, so everyone can comprehend what any particular policy proposal they’re voting on would actually mean in practice. Some of these ideas have made it into our curricular standards and assessments, but most have not. We need to understand what this knowledge is and invent ways of teaching it effectively.

Software is not going away. It will continue to be evil and dangerous. It will continue to bring joy and prosperity. But it will not bring social change, and it will not provide universal access to knowledge about computing. That’s up to us.

Amy J. Ko

Professor, University of Washington iSchool (she/her). Code, learning, design, justice. Trans, queer, parent, and lover of learning.