Do we need to rethink how we deal with offensive language?
As a software engineer I spend a lot of time thinking about systems. About their design, whether they're built to last, and whether they're even fit for purpose. Not just computer systems either. Today I got thinking about one commonly used social system and how it is completely unsustainable.
I am, of course, referring to our system for dealing with offensive language in polite society. The system that allows us to say offensive words without offending people. The #-word system.
The most famous use case of this system is of course the 'N-word'. Possibly the most offensive word of all time, it can be referred to in public by anyone, without risking offence.
This system is also commonly applied to the word 'Fuck'. This is typically the case when one is being chastised for saying the word. For example, someone may say "You're not allowed to say the F-word!" A more recent entry to the #-word system is the 'C-word'. A relative newcomer to the world of offensive language, but one that has had a meteoric rise to taboo.
The problem with this system should be obvious. We only have twenty-six letters. This means that at any given time we can only maintain twenty-six socially unacceptable words! Once this limit has been reached we will open ourselves up to unadulterated ambiguity, the likes of which has never before been dealt with in language.
Imagine a world where someone in the throes of offendedness declares that someone used the F-word, but you don't know whether they are referring to 'fuck' or 'faggot'. How would you be able to gauge your response? This is of even greater concern in online interactions, where context is often lacking.
Furthermore, new words are invented every year, and old ones can become offensive at any minute. What we need is to put a system in place where an unlimited amount of words can both be offensive but easily referred to in polite society. This system will also need to take into account scenarios where words are no longer so offensive. 'Cretin' for example, which once referred to a person who is deformed and mentally handicapped due to a thyroid deficiency, has become more of a general term used for light abuse.
So what do we do about this problem? My proposal is as follows. We decide upon a code that designates a word into an offensiveness subcategory, such as S for sexual, V for violent, R for racist, M for misogynist, and so on.
Next we take the word itself and run it through an encoding algorithm, which is a fancy computer term that basically just turns a word into a code. Here's what some offensive words look like once hashed:
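The post doesn't specify which encoding algorithm to use, so here is a minimal sketch of one possible approach: hash the word and map the first few bytes of the digest onto lowercase letters to get a short, stable tag. The four-letter length and the SHA-256 choice are my assumptions, not part of the proposal.

```python
import hashlib
import string

def encode_word(word: str, length: int = 4) -> str:
    """Deterministically map a word to a short lowercase tag.

    A toy stand-in for the unspecified encoding: hash the
    (case-normalised) word, then turn each of the first `length`
    digest bytes into a letter a-z.
    """
    digest = hashlib.sha256(word.lower().encode("utf-8")).digest()
    letters = string.ascii_lowercase
    return "".join(letters[b % 26] for b in digest[:length])

# The same word always yields the same tag, regardless of case.
print(encode_word("paddy"))
```

Because the mapping is deterministic, everyone who runs the same algorithm over the same word arrives at the same code, which is what lets the codes be looked up later.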
Finally we score each offensive word on a scale of 1-10 for offensiveness.
So, to give a complete example, the N-word becomes:
'R' for racism
'bmln' is our encoded offensive word
10 is our scale value, as this is usually seen as the most offensive word
This system allows us at a glance to decide exactly how offended we should be by a word. Moderate racism might be denoted by a lower scale value. For example, referring to an Irish person as a Paddy could be represented as follows:
'R' for racism
'cgFK' for Paddy
1 for our scale value as this probably won't offend most people, but you can get offended if you want to.
This system also has the added benefit of allowing you to get offended by words you've never even heard of before. The subcategory gives you context, and the scaling value tells you to what degree you should adjust your outrage. In fact you never need to even see the offensive word! You can look up the code if you wish to expose yourself fully to offensive content, but there's no need if you simply want to be offended by someone else's words.
So this new system checks all of the boxes. It is sustainable in the long term as it accounts for the addition and removal of offensive words to and from our daily language. It's easy to add new categories, and just about any word can become offensive. It even provides context around the use of the offensive word and a built-in offence indicator.