Let's Get Technical

By Jonah Bailey on 12/11/2014

The Debate is Over

The debate over whether designers in the tech industry should be technical has already happened. As with any good political debate, I don't know whether either side convinced anyone of anything. I'm already on one side of the debate by virtue of the skills I've acquired in my career. Regardless of where you stand, this article is meant as an encouragement toward exploration. I'd like to bolster your confidence as a designer so you can grapple with the technological side of your job and come away less intimidated and more confident.

If you've learned to communicate with a computer through a graphical user interface, you already have all the tools you need to learn to communicate with that computer by other means. Have you ever put a friend or loved one who grew up before the information age in front of a personal computer with a point-and-click interface? Nothing makes sense to them. You are a genius in their eyes! You really are more clever than you think. Learning to interface with a computer through non-graphical means is just like learning another language. There's a new vocabulary, a new grammatical structure, and all sorts of cultural nuances that will be completely new to you. Sound intimidating? That's okay! You're reading this, right? Then you've done this before.

My friend Scott Sullivan compared starting out in code to being punched in the face for the first time. Before it happens, you're scared to death of what might happen. I'll tell you a secret: we have all felt like this at one time or another. As someone who has literally been punched in the face AND learned to code (to a degree), let me assure you that no matter what happens, you're not going to die. The first time will suck, it's true! There will be a mess and you'll need help getting back on your feet, but I promise it will get better. You'll lose the fear. You'll become more adept at not getting yourself into bad spots, especially if you get yourself a little help.

Understanding Computers

Computers aren't magic. I know, sometimes it seems like they hate you and are plotting your downfall, but I can assure you that, at their most basic level, they are just a complex system of switches. Yes, switches. Like the one you flip on in your bedroom at night. Computers understand one unit of basic information: the bit. A bit reflects a physical reality: either it is "on" (there is electrical current) or it is "off" (there is no current). Bits are to the digital world as atoms are to the physical one. As Steven Frank says in his excellent book "How to Count":

Bits are the “atoms” of everything digital. Every piece of information stored or processed by a computer is made up of some number of bits. Sometimes just a few, sometimes thousands or millions. But everything the computer understands is, at its lowest level, made up of bits. Text, graphics, sounds, program instructions — all made of bits in one way or another.
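To make that idea concrete, here is a minimal sketch in Python (the language choice is mine, just for illustration; the values are the standard character codes) showing that a single letter is really nothing more than a pattern of eight on/off switches:

    # Every character you type is stored as a pattern of bits.
    letter = "A"
    number = ord(letter)          # the numeric code behind "A" -> 65
    bits = format(number, "08b")  # that number as eight on/off switches
    print(letter, number, bits)   # A 65 01000001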

At the core of computers is physics: pulsing electrons, 1s (on) and 0s (off). At this level, a computer is not extremely useful to the human race. Like I said, it's just an incomprehensibly large series of switches. But these things should be able to do more than turn on the lights, right? That's where abstractions come in!

Abstractions are what allow us as humans to interface with computers and put them to work on problems that matter to us. There are many layers of abstraction between what you see on your iPhone and the bits flowing around inside it. Computer programming is a layer of abstraction that turns those pulsing electrons into something useful: a problem-solving machine. How can a series of switches solve problems? At its core, problem solving is a decision-making activity, and binary decisions based on data are something computers can do. Computers can solve problems and make difficult decisions that we as human beings would never be able to grapple with, because of the speed or breadth of knowledge required.
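As a loose illustration (the order data and the threshold below are invented for the example), here is what a single binary decision looks like in Python, and how that same tiny yes/no question can be applied across a pile of data in an instant:

    # One binary decision, based on data: is this order big enough to flag?
    def needs_review(order_total):
        return order_total > 500          # a simple yes/no answer, on or off

    # The same tiny decision, applied across a whole dataset at once.
    orders = [120.00, 89.50, 1200.00, 45.25, 980.10]
    flagged = [total for total in orders if needs_review(total)]
    print(flagged)                        # [1200.0, 980.1]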

However, these problem-solving machines also have limitations. Computers are lightning fast, but they are really dumb. Computers can solve very complex problems, but those problems need to be broken down into smaller parts; they need hierarchical, well-defined steps. Computers are great at following mundane or procedural task paths with speed, regularity, and precision, but they need us to break things down into bite-size pieces.
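As a small, made-up example of what those bite-size pieces look like in practice: even a task as simple as "find the longest word in a sentence" has to be spelled out for the computer, one well-defined step at a time.

    # "Find the longest word" sounds trivial to us, but the computer needs
    # the task spelled out as small, well-defined steps.
    sentence = "designers can absolutely learn to understand computers"

    # Step 1: split the sentence into individual words.
    words = sentence.split()

    # Step 2: assume the first word is the longest, for now.
    longest = words[0]

    # Step 3: look at every other word, one at a time.
    for word in words[1:]:
        # Step 4: if this word is longer, remember it instead.
        if len(word) > len(longest):
            longest = word

    print(longest)  # absolutely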

There's a key lesson here: computers don't generate random results. When you're struggling with a computer-aided task and you aren't getting the results you want, don't blame the computer. The problem lies in your own understanding or practice; it's the first, safest, and often only assumption to make. If that webpage isn't laid out the way we expected in IE9, it isn't the computer's fault. It's ours! It isn't random, and there's no malice in it; we just need to find the error in our assumptions.
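For instance (this snippet is invented purely to illustrate the point), the result below isn't the computer being random or spiteful; it's the computer doing exactly what the code says:

    # We meant to add up the prices, yet the total is always 0.
    prices = [19.99, 5.00, 3.50]
    total = 0
    for price in prices:
        total + price   # computes the sum but never stores it anywhere
    print(total)        # 0 -- not random, no malice, just our own mistake

    # The fix lives in our code, not in the machine:
    #     total = total + price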

Understanding the Role of Design

As I said before, computers require hierarchical, well-defined problems. Computers can grapple with a breadth of data at speeds we simply can’t fathom. And although they're getting smarter all the time (ask some of my team members about machine-learning projects they are working on), I’m generally not worried about a dystopian future run by artificially intelligent robots. Here’s why: the world we live in isn’t full of well-defined problems. It’s full of complex, ill-defined dilemmas that require lateral, divergent thinking to solve.

These are design problems related to innovation and creativity. They're the types of problems facing designers of products, software, and services everywhere. They're broad, open-ended questions that need definition. This is where the partnership between humans and computers really shines. We are smart, but we're slow. Computers are fast, but dumb. If we can define these problems and break them into tiny pieces, then we can harness the ability of these incredibly powerful machines and put them to work solving some of society's greatest problems.

If we, as designers, don't understand computers on a fundamental level, then how can we begin to frame these problems properly? I know I said at the outset that the debate over whether designers should code has already happened; so I guess I'm asking a fundamentally different question: Isn't it time we dive in and really begin to understand how computers work? We can solve these really big problems together, but in order to do that, we have to speak each other's language. Designers: let's stop treating computers like they're something scary and start using them to change the world.

Read the next post in this series >