The implication of code

There’s a reason I’m including these harmful acts in a book about humanist ethics in design. We are creative beings, every single one of us — design title or not, whether in designerly work or in getting through everyday life. Wishing and hoping that what we’re designing is predicated on users as a matter of course — no users, no business! — is short-sighted. We wish that were the case, but most businesses are nearly mono-focused on shareholder returns. They’re kicking the can on worker wellbeing, systems dysfunction, and encroaching chaos; "customer happiness = business" is too long a turnaround for profit.

Think of information as water. The spigot of software is huge. We are intending to make a mass impact; the profit comes from taking the understanding of a few and making it available to many, through content or algorithms. We are taking the contents of a teacup and filling all the teacups that want it (or that we can convince to want it). The never-ending teacup has now shared enough to fill a river and drown a valley in metaphorical information, replicated at all its points of use.

Now think about the toxins and tonics involved in environmental flip sides. One person’s right can be very wrong for another person or group of people. With everyone’s teacup filled, the interpretation is compounded, along with the toxin/tonic nature of it.

Information technology is in the business of mass impact, leveraging information. The vast majority of that information lives in the construct of the program, out of sight of the user – hell, out of sight of most of the business. It takes one person adding a falsely determinative key to skew the results of a hundred thousand mortgage applications.

Technically, yes, it’s unlikely that one person is the key holder. Usually there are subject matter experts, business stakeholders, project management, designers, developers, and QA in the process — at least. Programming takes all of these input streams because it’s complicated, and it’s easy to gloss over the details that seem to be working. If race or sex or location, etc., has been part of the decision-making process on and off for decades, the seemingly logical thing to do is make sure there’s a way to include it…just in case.
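The mechanism is small enough to sketch. The field names, weights, and zip codes below are entirely invented — this is not any real scoring system — but it shows how one leftover, "just in case" key can quietly change the answer for applicants whose finances are identical:

```python
# Hypothetical sketch: a toy mortgage scorer with one "just in case" field.
# All names, weights, and zip codes here are invented for illustration.

HIGH_RISK_ZIPS = {"60621"}  # imagined stand-in for a historically redlined area

def approve_mortgage(applicant: dict) -> bool:
    """Income and debt drive the score, but a leftover
    demographic proxy quietly shifts it too."""
    score = 0.0
    score += applicant["income"] / 10_000   # ability to pay
    score -= applicant["debt"] / 10_000     # existing obligations
    # The falsely determinative key, kept "just in case":
    if applicant.get("zip_code") in HIGH_RISK_ZIPS:
        score -= 5                          # penalizes a place, not a person
    return score >= 10

a = {"income": 120_000, "debt": 20_000, "zip_code": "60614"}
b = {"income": 120_000, "debt": 20_000, "zip_code": "60621"}

print(approve_mortgage(a))  # True
print(approve_mortgage(b))  # False — same finances, different answer
```

No one in the review chain sees anything alarming in any single line; the skew only exists in the comparison, which the program never prints.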

The standard way to build a program is to predicate it on history: all the use cases to date, analyzed and coded to cover every edge case they may contain. This process might even surface new patterns that can be leveraged into a more finely tuned algorithm. Which is great, until you consider all that history of all of our -isms, statically defined as a set of ‘good’ data.
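Here is a minimal sketch of "predicating the program on history." The records and group labels are invented; the "model" is nothing more than the per-group approval rate in past decisions. Whatever pattern the history holds — including a discriminatory one — is frozen in as the definition of good:

```python
# Hedged sketch: the historical record, taken as 'good' data, becomes the rule.
# All records and group labels are invented for illustration.

from collections import defaultdict

history = [
    # (group, approved) — imagined past decisions carrying a bias
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def fit_rates(records):
    """'Train' on history: compute each group's past approval rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}

rates = fit_rates(history)

def predict(group, threshold=0.5):
    """Approve whenever the historical rate clears the bar —
    the bias in the record is now an always-on rule."""
    return rates[group] >= threshold

print(predict("A"))  # True:  3 of 4 approved historically
print(predict("B"))  # False: 1 of 4 approved historically
```

A real system would use far more features and a real learning algorithm, but the shape of the problem is the same: the history is the specification.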

As quickly as that, a bias becomes always-on. History can be confabulated into evermore. Fairness can become an entrenched fable, something we dream about but know can never be real. 

Once it’s in the program, it’s often treated as something that can’t be negotiated in the final interactions. The person manning the desk can’t touch it. Their manager has no more access, so while they might be pulled into the conversation, they won’t have a different answer. If the software is used by multiple organizations — and that’s the goal of mass impact — you can’t even take your business elsewhere.

We build software to provide tools that are intended to share information and understanding across a much broader population. Programming profits are predicated on mass impact. That means that our ‘little mistakes’ can accumulate into statistical relevance. Mass impact carries the harm of unquestioned decisions through the entire data set.

I don’t think most of us mean to flood an abstract valley with our -isms compounded over the mass impact of a misguided algorithm, unquestioned and based on historical patterns or unexamined knowns. But all of those underlying issues mean that if we don’t question, we will only know after the damage is done. Automation bias will obfuscate the damage until it’s overwhelmingly huge.

We have a deep-seated instinct within us to blame the victims. It goes back to where I started this chapter: the narrative holds threads of blame. Part of it comes from years of hearing our bad actors shunt their issues onto their victims. There’s also a big dollop of personal fear, of wanting to believe that I will be safe.

When we translate this into an unexposed, unquestioned, continuing misalignment of information, it heightens and escalates toward genocide, with several steps along the way: tribalism that hardens into hyperpartisan politics, doxxing, swatting, disenfranchisement.

Information does not stay outside of us. It is not siloed from our behavior and actions. We have to consider the whole, and understand that we are deliberately making mass impacts with it.


My long-term synthesis and understanding.

code:
cognitive bias, failing information states, flip sides, systems flow, tools