
What is Technofascism? Part Two

18 May, 2020 — Dr Andy Farnell
Video at: https://youtu.be/MlPlaSD_-j8

Okay. The next topic I would like to get onto in my defence of the phrase "technofascism", indeed in my support of the phrase, is the sociological aspect of Fascism, which is more or less ubiquitous fear. Now there have obviously been so many good books, documentaries and discussions of this very deep and very broad subject, but one part that I'd like to single out is what I think is our very modern fear, the one we sometimes call FOMO - fear of missing out. This is connected to many things: to vanity and insecurity, to our intrinsic need to feel a sense of belonging to a group, and so on.


I'd just like to say that there is an absolutely wonderful book on this subject, though it's not one that I'm going to recommend, because for the average reader it's a difficult and complex psychoanalytical text. It's by a guy called Adam Phillips, and the title of the book is "Missing Out", subtitled "In Praise of the Unlived Life". I have to say that for me it was massively influential. I was fortunate enough to read it, perhaps ten years ago now, at a time when it gave me enormous courage to follow my own path: to trust my decision never to get involved in social media, never to create a Facebook account, never to sign up for any Google accounts, and to stay the hell away from things like Twitter.

Partly that's because I'd already got that out of my system in the mid-1990s, being part of a different wave of technological culture. I spent thousands of hours in Internet Relay Chat, on the NNTP news network, in gaming servers and so on - so when the popular side of social media like Facebook turned up, it already seemed very old hat to me. But from the turn of the century, and certainly over the past 15 years, for those of us who've stayed outside of social media it has always felt like there is an enormous social... well, pressure, yes, but almost a screaming demand, an accusatory stigmatisation and distrust of people like myself who just don't want any part of that kind of thing!

And I found Adam Phillips' analysis of what makes us fear 'missing out' enormously empowering... Anyway, let's quickly mention some of the qualities of this - I hesitate to call it a psychopathology, but it's certainly a human weakness. It goes along with many aspects of what I previously referred to as the Apollonian, an archetype that's very strong in all of us today: a desire for totality and a desire for certainty in things. Certainty is something that's just not possible. And in the same way that Eve Ensler said the quest for security is just not possible, so many of us are engaged in these fools' errands, ideas like Total Information Awareness, as if such a thing could even be possible...

It's these aspects of the Apollonian mindset, I think, speaking as a scientist and as a humanist, which bring out the very dark side of science and the worst in technology: an almost fanatical tendency towards reductionism and a need to control everything. The need to tag and track and measure and account for everything! To account for all people and their thoughts and feelings and whereabouts and ideas and associations. I would say it's the mark of a profoundly insecure mind, and if we were talking about individuals I think there'd be a temptation to label that as some kind of fault, perhaps even a psychopathology or personality disorder, if you were inclined to such labelling.

But this fear, I think, more than anything else today, is what is being leveraged to get us to engage with technologies that we don't really need. The fear that there might be some vital piece of information we don't have, or some unmissable opportunity which, if we only had the right smartphone application, we would know about. And I know these words will be meaningless to those of you who are deeply addicted to smartphones and smart things of all kinds, but I hope you can hear me when I say, as a computer scientist, as someone whose life is enmeshed in code every day, in building and debugging systems, in working with the very latest technologies, with machine learning and neural networks, all aspects of digital signal processing and so on, that there is absolutely nothing missing in my life for not having a smartphone! Quite the opposite: it gives me mental freedom, the necessary separation and headspace.

Well, okay... you know... disclosure... this is my way of controlling my digital world. That irony is not lost on me. But I guess the point is that I do control it. I make a very difficult and conscious choice not to have a smartphone in my life, not to allow those aspects of social media in, so that there's more time for me to think about what I believe are important things. And when I hear other people say that they "couldn't possibly live without" their smartphone, that they feel existential dread at being disconnected from Facebook and Twitter, that they could not imagine a world without Google and Amazon, I know in my heart that I'm hearing the voice of every heroin addict and alcoholic I've ever met in my life.

Guys, I'm no stranger to addiction, or to seeing people's lives fall apart, and the parallels I see between behaviours with technology and behaviours with substances are heartbreaking to me.

So really, the idea that you might be "missing out" in some way - I find it hard to take that seriously! And yet this fear remains very prevalent in our society. I don't see it diminishing as much as I hoped it would, at least not in the last five years, and I'm sad to admit that I think the Covid-19 crisis we're going through now, and in the future, will actually make this unfounded fear much worse.

But to bring it back on track... I think this fear is something that the tech giants are deeply aware of, and exploit cynically and strategically to hook people and keep them on their wares.

I'd like to talk about another aspect of Fascism which may seem incongruous if you take the fascist project to be wholly directed by an Apollonian appetite for reason, and it's this: historically, fascist movements are almost always characterised by elements of mysticism, symbolism and the occult. I'm sure there are many brilliant philosophers and psychologists, whose work I'm completely unaware of, who understand this connection and know why, but I'm not well read on the Jungian side of these things.

But Hitler, for example, and the whole Nazi regime, were absolutely enchanted by astrology. And when you look at the symbolism, the semiotics, the governing themes in these insane projects, the quasi-religious symbolism is very strong there. I said earlier that in some senses corporate logos can stand in for uniforms and flags, and I don't think it's any kind of a stretch to talk about companies and products like Apple as being part of a "cult movement". People don't buy Apple computers because they know anything about computers. It's one of the most incredibly successful triumphs of a symbol: the Apple brand as a lifestyle symbol, and something to have allegiance towards. But we don't have to pick on specific companies or products.

Just think of the meaning of "the cloud", for example. What is the cloud? Where did that come from? Well, you know, there's this vernacular story that network engineers will tell you, of how we used to draw pictures of the Internet in diagrams, and somewhere in the middle there would be this kind of fuzzy bubble that looked a little bit like a cloud - basically shorthand for drawing 'an Internet' - and somehow, because of that, distributed networks and distributed storage systems got this label of "the cloud".

Well yeah, but... honestly there's more to it than that! Words have to be picked up in order to gain common usage and currency. They need to be adopted and pushed by the right people. And the cloud, if you think about it, is loaded with a lot more meaning. It's much more than a nebulous network of distributed computers... it's a thing that floats in the sky. It's a thing that might have choirs of angels with harps on it. It's a thing that the Sun shines through. It might have a silver lining. And most of all, it's fluffy. And it's white. Perhaps what we don't think of when we think of "the cloud" is tempests and thunderbolts. But maybe we should.

And the occult element of modern technofascism... and I'm certainly not the first to make this observation... is located very much in California's Silicon Valley, with movements that identify themselves as transhumanists, who are waiting for the singularity so that they can all be uploaded to the cloud, where they will no doubt enjoy immortality, floating in space forever as part of some kind of everlasting Facebook party! I don't know?! I mean, it's hard to take seriously. But you can trace many influences on this, certainly to some Russian science fiction writers and other mystical thinkers.

But to me - maybe it's because I'm British, I don't know - the whole thing seems deeply unhinged, narcissistic and small-minded, and if they were just a bunch of people living on a hill who were all going to go and drink Kool-Aid it wouldn't be a problem. The problem is that some of these people are leading some of the most important and powerful companies and organisations in the world right now. And their ideas are deeply, deeply suspect. And, from what I can see, largely unchallenged.

We are so enamoured with the technology that Silicon Valley produces, we are such cargo cultists ourselves of these shiny hand-me-downs, that we don't really question the minds behind them, or ask what project they are really a part of.

Let's put aside for a moment that the Internet was originally developed as a military project for robust long-range communications, and think about its impact on economics and politics from the mid-1980s forward. When I was at school in the 1980s it was common to hear about the "information economy" and then later in the early 90s about "the information superhighway", but while politicians had a lot to say about this, and governments embarked on funding large projects of digital literacy, I don't think at the time that anybody really had any idea what the "information economy" was or might be.

We kind of had this naive notion that somehow, by us all becoming producers and consumers of information, an entire economy could exist independently of the production of goods and services. What actually happened was in many ways paradoxical, and goes to the heart of a great contradiction in the idea of the Internet itself. That contradiction is that the free flow of information, rather than generating wealth for established interests, actually subtracted from it. In order for existing economies to continue to work, it was necessary to put the brakes on the free flow of information, to create artificial information scarcity - to control and regulate the flow of information.

This of course came about through a massive expansion in the meaning and scope of so-called "intellectual property", and in more recent years has led to a rise in censorship, and to problematic concepts such as "hate speech". Yet overall, the volume of information, and the opportunity to share it, has for most of us increased unimaginably. However, the cost of that has been an enormous change in our understanding of the meaning of the word "free".

When we talk about "free information flow" - the idea that allowing unfettered global peer-to-peer communication would somehow magically create value and boost economies - that turned out to be a big miscalculation for established power. And the battle to keep what many of us growing up got an initial taste of, a 'free Internet', is an ongoing, and sometimes bitter, struggle.

There is a concerted effort, an enormously well-funded project, to wind back the initial peer relations of the Internet, and to restructure it into a hierarchical, tightly-controlled topology with choke points, ubiquitous surveillance and monitoring, monetisation, and a return to a dissemination or asymmetric broadcast model, much more like television and radio.

So I would suggest that this project to reverse the initial human value of the Internet, as a cultural and scientific tool of free exchange, is part of the definition of technofascism.

We've already touched on some of the causes of technofascism, and in most cases they're nothing more than the existing battles taking place between various parts of society, amplified in some way by technology. And so one way of looking at this is that technofascism doesn't really have a cause. What I mean by that is that these struggles are, if you like, an endemic, intrinsic or immanent part of how society functions. So there is no essential cause of technofascism.

We can suppose that a civilisation - a good civilisation, one that's worth living in - is something that requires constant maintenance, a constant gardening, a constant vigilance against decay towards tyranny. As the famous quote usually attributed to Edmund Burke goes: "The only thing necessary for the triumph of evil is for good people to do nothing". Viewed this way, the rise of technofascism is really just the fall of our concern and care for the upkeep of our liberal society.

It's a decay that comes about through too much convenience, through things being too easy, such that we lose sight of the value of freedom, and we forget the enormous cost of fighting fascism and tyranny. The story that we tell ourselves about British involvement in the Second World War, and Winston Churchill's representation of the will of the British people, is somehow encapsulated in the word: Never! Never to accept fascism. To fight to the last drop of blood.

It's in some ways the slogan that epitomises the British spirit of the 1930s and 40s. But today that courage seems a far and distant memory. Most people, it seems, tremble at the thought of mere economic disadvantage, or losing their job. The terms "speaking out" and "whistleblowing" have come to take the place of simple honesty. Even mild criticism is frowned upon in the corporate workplace. People are afraid that their social media posts might not be "liked", or that they might be "defriended" for voicing the wrong opinion.

In consequence, insecure people seem to cling together in mutually stroking, self-reinforcing cliques, leading to ever more isolated silos and disconnection from the rest of the world. It's this fragmentation into many groups, all in mutual fear of each other's disapproval, that creates an amazing social control mechanism - divided and uncoordinated people have no common voice and no ability to build common values, let alone the courage to fight an oppressor.

Another contribution to the rise of anti-humanist technologies is, I think, a decline in the ethics applied to research. This is something I have first-hand experience of, in connection with academia and universities.

People who work in universities will say: "Come on Andy! You've got this completely ass-backwards. In fact, the problem is that ethics panels in universities are far too bureaucratic, and they stop almost all good research from happening." And I agree - but I think that is just one symptom of an underlying problem.

The problem is that most university ethics procedures are a total and absolute sham. And that's something that's very easy to verify: just ask any group of people who sit on university ethics boards what qualifications and education they have in ethics. The answer is invariably "absolutely none!" Ethics is a deep and difficult branch of philosophy. In the very best case I think it would take at least five years to study even an outline of it. Almost nowhere on a university ethics board will you find someone who can separate a Utilitarian argument from a Kantian postulate on 'The Good'.

Most of us assume that we osmose the principles of ethics just through ordinary lived life, and, more dangerously, that expertise in a particular subject area, say biology or technology, leads one to have an intrinsically good sense of the ethics of that subject. But the point is moot, because in reality those who sit on ethics boards have absolutely no time at all for reflection. Faced with a hundred research proposals and half a morning to get through them, the only way that ethics can possibly be conducted in universities is by a "tick box rubber-stamping" simulation of ethical consideration, based on the occurrence of keywords like "children" or "animals", or any mention of "disability".
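
To see how hollow that is, here's a deliberately naive sketch, in Python, of what keyword-triggered screening amounts to. Everything here is hypothetical and illustrative - no real board runs this code - but the logic is depressingly close to the process I've just described:

    # A deliberately naive sketch (hypothetical, illustrative only) of
    # keyword-triggered "ethics screening": no reasoning, no reflection,
    # just pattern matching on trigger words.
    TRIGGER_WORDS = {"children", "animals", "disability"}

    def ethics_review(proposal_text: str) -> str:
        """Flag a proposal only if it mentions a trigger word."""
        words = set(proposal_text.lower().split())
        if words & TRIGGER_WORDS:
            return "refer for further scrutiny"
        return "approved (rubber-stamped)"

    print(ethics_review("Mass facial recognition trial in a shopping centre"))
    # -> approved (rubber-stamped): no trigger words, so no ethical thought.

Note what happens: a proposal for mass facial recognition sails straight through, because it contains none of the trigger words.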

In reality, what decides whether a research proposal is approved or not is the ties that the university has, in terms of "technology transfer" and research grants, to very large corporations. What this means is that ethics committees are bureaucratic juggernauts that weed out any research proposals which are novel, interesting and, like all good research, involve some element of risk, in favour of those that are simple, fit the profiles for journal impact factors, and please the corporate moneylenders.

So in reality, many dangerous and insidious projects are approved. Anybody who has had to teach a research methods seminar at a university will have taught a module on ethics, and if you have any philosophical inclination, if you're interested in the philosophy of science, then it's an absolutely heartbreaking exercise.

A modern corporate interpretation of ethics is absolutely one-dimensional. It's really about public relations, and not looking bad in the eyes of the press. Another problem comes from the ideas of research students themselves. As tenure has been eroded and the good professors have been driven from academia in favour of zero-hours-contract associate lecturers and visiting professors, there's a troubling shortage of mature, well-considered guidance for young people. Many Masters and PhD students in their early 20s or 30s consider technology to be a playground, one where any device or data set can be connected to any other just for the fun of it. And indeed, that is a legitimate part of the pioneering and innovative nature of computer technology.

But rarely are young people given a most important piece of advice, which is: "just because you can, doesn't mean you should". It's a maxim which, if heeded, would never allow any university computer science department to work on, say, facial recognition, where it seems obvious to anyone with the most basic grounding in political theory and ethics that the good, advantageous cases are marginal, and that the vast majority of applications serve tyranny.

Now, I'm not saying that it's ever clear what the unintended consequences and side-effects of any research project might be... but it is very clear to me that many young people find themselves enrolled on PhD or research programmes without ever having been invited to make such a deep personal inquiry, to figure out whether or not they think doing such a thing is right.

Over the years I've lost count of the number of Silicon Valley adventurers and tech startup entrepreneurs who have described themselves to me as "amoral business people", and I've never known quite how to respond to that. I think I've never had the heart to say straight to their faces that, outside of philosophical textbooks, there's no such thing as 'amorality'. There is only ambivalent disregard for others.

But the problem of moral relativism within an interregnum has been with us at least since Nietzsche's 'transvaluation', and it's a problem that just doesn't seem to get any better - it's growing and growing in the 21st century. I don't personally blame that on secularisation. I've known many secular humanists who are deeply moral, who are extraordinarily compassionate and generous people for whom the absence of a god has no impact on their human generosity and consideration for others. But I do think that the lack of boundaries within some permissive Western cultures, and particularly West Coast American culture, has produced an entitled generation.

It's a generation that sees itself as the centre of the digital universe, one where the individual entrepreneur has some kind of absolute right to treat it as their personal playground. And I can kind of get that, and respect the frontier spirit - it's very noble, it's very American, it's... well, it's fine, so long as you consider all indigenous and other interests to be disposable, something you can sacrifice in your quest to build a new world. And that kind of thing comes out in documents like the "Hacker Manifesto".

That's something I've read many times with great interest and empathy, and the desire it expresses to reject the constraints and oppression of the old world, and of tradition... well, that's a very powerful force indeed! But there is such a thing as the foolhardy adventurer. And the archetype of the reckless engineer, the mad scientist who blows himself up and everybody along with him, seems to be quite absent from that culture. It's a definite blind spot in the American tech entrepreneur's world.

To be an agent of positive change in the world one needs to be a leader, and it's hard to be a leader when one is completely dependent and in a state of prolonged childhood. But unfortunately, that is a state I feel so many of us are forced to assume today. Not just in our actual dependence on electronic technologies to live, to shop, to vote, to find a partner, to book transport, to pay for things, and to find our way around the city - there's a more fundamental kind of infantilisation going on. You can see it in the design of many applications, in fact.

It's one of the reasons I started a project, a course that I teach, which is called "Computing for Grownups". It's not something I'm going to go into now. But I think many of you will be kind of familiar with what I mean, in terms of how many digital technologies have dumbed down our lives.

But what accompanies this kind of regression is a feeling of unreality. A sense in which there are no real consequences to our decisions or actions, that everything has an "undo button". That can be nice. There can be some upsides to that. It's nice to feel that we're able to explore life without too much fear. But there can also be a sense in which we never really make choices, because we don't fully understand what those choices are.

What emerges is an "unbearable lightness of non-being": clicking a career option, and this week we'll be a doctor, and next week a truck driver, and maybe the week after that a professional musician... Whatever! A life like that leaves us always feeling unvalidated, with a kind of impostor syndrome. It's a life of unconnectedness, of low commitment, of shallow and casual interaction, and in many senses I think it's a virtual life.

That's interesting, because virtual reality as a concept is intruding more and more into our ordinary lived lives. It started out in recreation, in entertainment and games. I think that virtual reality does have its place in some niche applications, such as training or therapy, but on the whole, for it to supplant real lived life, with all of its risks and tragedies, is... well, it's a tragedy!

But from a technofascist's point of view... it's an enormous win! You don't have to have watched The Matrix to get the idea of the containment of a docile population, or to have read Aldous Huxley's "Brave New World" to know that soma - a drug-induced or technologically-induced state of pleasure - provides a means of almost absolute control. And during the early stages of the Covid-19 outbreak, I was really saddened to hear from so many people who said that they felt isolation in a single room with a computer was really no different from everyday life for them. That the lockdown really didn't bring any change to their lives.

Having grown up in a world where people took the piss out of nerds like me for being insular and anti-social, it really depressed me to realise that we've all become that. That's everybody today. And having broken free of technological enslavement myself in my late 30s, and really widened my horizons through literature and travel - something that gives a purpose to my hacking now - it does strike me that the docile state I see so many young people in is a terrible waste of life.

Okay, I'd now like to progress to an entirely different section. In this section, I'd like to turn around and offer a critique of "technofascism" as a term. I'd like to question its usefulness and see what objections can be brought to bear.

So I'll start with what I think may actually be quite a good argument, which I'm calling the "ambivalent world objection". This objection rests on the assumption that, to put it very bluntly, nobody gives a fuck. You see, a word like "technofascism" would only have currency and meaning in a world where people objected to fascism. And if, as it sometimes seems, we are living in a world in which people are ever more greedily gobbling up every form of oppressive technological enslavement, then an expression like "technofascism" doesn't really have any value. And to say that nobody really cares about Fascism is just another way of saying that nobody really cares about freedom. That may be true, in the sense that people today have a lot of freedom and really enjoy that freedom, but they don't value it, because they've never had to struggle to defend it.

And of course I'm reminded of the Thomas Jefferson quote that: "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants". Let's hope it's a very long time before any or all of us have to suffer that again.

The next argument I'm going to raise against "technofascism" I'll call the "epistemological objection", by which I mean that the word is useless because nobody can really have any useful knowledge of it. In other words, unless people get it, unless they understand it, it has no real currency. So let's contrast that with things that people do understand: say segregation, apartheid or slavery. These are things that people can relate to, and can fear. They have historical precedents. But it may be that technofascism really is a new thing, and as such the forms of enslavement it threatens are not things that we can easily grasp. Already it's very difficult for people to understand how elaborate correlations of data, which they would have no normal cause to be suspicious of, can produce frightening possibilities for abuse and manipulation.

It could be that, writ large, the dangers brought about by some socio-technical configurations are so obscure and difficult for people to get that they never see them coming until it's far too late. And even then, they don't fully understand that they have been enslaved. Of course this is somewhat congruent with Plato's Allegory of the Cave, and the theme of the Matrix movie, and from that position we can argue: "Well, what does it really matter if people are happy in their ignorant enslavement?" "What could they possibly have lost that they never really had?"
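
To make that point about data correlation a little more concrete, here is a minimal sketch in Python. All the data, names and field names are made up for illustration; but the mechanics - joining two individually innocuous datasets on shared quasi-identifiers - are exactly how real re-identification works:

    # Hypothetical example: re-identification by joining two "harmless"
    # datasets on quasi-identifiers (postcode, birth year, sex).

    # An "anonymised" medical release: no names, just demographics.
    medical = [
        {"postcode": "SW1A 1AA", "birth_year": 1975, "sex": "M",
         "condition": "depression"},
        {"postcode": "M1 4ET", "birth_year": 1982, "sex": "F",
         "condition": "diabetes"},
    ]

    # A public register: names and demographics, nothing sensitive.
    register = [
        {"name": "J. Smith", "postcode": "SW1A 1AA", "birth_year": 1975,
         "sex": "M"},
        {"name": "J. Jones", "postcode": "M1 4ET", "birth_year": 1982,
         "sex": "F"},
    ]

    # Neither dataset alone reveals anyone's condition; joined on the
    # shared quasi-identifiers, together they do.
    for person in register:
        for record in medical:
            if all(person[k] == record[k]
                   for k in ("postcode", "birth_year", "sex")):
                print(person["name"], "->", record["condition"])

Neither dataset looks suspicious on its own - and that is precisely the point.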

Another criticism of technofascism that I would like to offer concerns its focus: that it is just too broad a term. By conflating a whole bunch of disparate and possibly unrelated concepts, it acts as a deflationary term, one that subtracts from intelligent debate. Now, earlier, in advocating for the term, I said that what we really needed was precisely that - a very broad umbrella term within which to bring about a dialogue - but I'm now saying that perhaps that is precisely its weakness.

The last attack that I want to make on technofascism is simply that it's wrong. Or rather, that the things I am trying to assert here on behalf of that word are wrong. And this, of course, is the "fallibilist's concession". I truly believe that one should never take seriously an argument with anybody who is not prepared to admit, at least in principle, that they could be radically wrong about everything they believe.

So I'll offer up here a world in which the ideas of technofascism are just wrong! In that world, mass surveillance combined with artificial intelligence, covert psychometric profiling, anticipatory justice, and a social control mechanism which leverages vanity, shame and fear would actually lead to the "Best of All Possible Worlds", and it's only for want of some Panglossian insight that fools like myself and other paranoid, unhinged conspiracy theorists rage against the inevitable progress of mankind toward a wonderful Utopia.

But at some point, as I fight against sarcasm and cynicism, it gets harder and harder to describe such a world. The purveyors of our liberating technology would have to be of the purest heart. Our governments, law enforcers and intelligence services would have to be free from all corruption or self-interest, and of course the technology itself would have to be infallible, never possibly making any mistakes.

Nonetheless, let's offer up this fallibilist concession as an objection and say: "You know what, maybe we really are just wrong about what's going on in the world, and it's all going to be okay!"

Let's have a section on references and links. None of this enchanting chatter would be much use if you couldn't go away and do some deeper reading and follow up some of these ideas, so I've put together a short collection here.

One of the problems for me is that my installation of org-mode doesn't play very nicely with bibtex at the moment, so I haven't created a full reference listing. But I've split them into a few simple categories.

Let's start with psychology. There's the text by Adam Phillips that I mentioned earlier, "Missing Out", published in 2013. I mentioned attachment theory, and the classic - in fact the foundational - text in that field is John Bowlby's "A Secure Base". I also mentioned Eve Ensler's work. She's the author of "The Vagina Monologues", as I expect you know, but she also wrote a book called "Insecure at Last", and she certainly read excerpts of it in a YouTube video.

Okay. Next up are references on politics. Aristotle needs no introduction. The two works by Thomas Hobbes and Jean-Jacques Rousseau are "Leviathan" and, for Rousseau, the "Discourse on Inequality" - the second discourse - which I think offers the alternative social contract. A really good book for those of you who believe in government and believe in political process is Bernard Crick's great work, "In Defence of Politics".

I've a second section here on academic politics - which doesn't always tally with the real world. There are two great pieces by the great leftist-anarchist Noam Chomsky. "Chomsky on Mis-Education" is brilliant in that, for one thing, Chomsky really attacks intellectuals and academics for their compliance and lack of courage in dealing with these matters. And "Manufacturing Consent", on the "influence industry", is one I expect you already know.

Okay. Benjamin Ginsberg's "The Fall of the Faculty" is a very important text on the demise of the university system, as is Wendy Brown's "Undoing the Demos" - actually a much broader text; she is a professor of political science who has a lot to say about the university system and its lack of courage in tackling the important issues of the day.

Okay, moving into some traditional tech critique, these are three authors you should definitely know. Neil Postman has written many books, but the one I cited earlier was "Technopoly" from 1993. Jacques Ellul is a famous tech critic, and "The Technological Society" is probably a good introduction to his work. And one of my favourites, which was recommended to me by Joe Deken, is Ivan Illich's "Tools for Conviviality" - very important in understanding how to build communities, and the essentially relational nature of human existence.

Going a little bit more into my own territory, with systems theory and modelling and so on: Donella "Dana" Meadows - I love Dana Meadows! - her "Leverage Points: Places to Intervene in a System" and "The Limits to Growth" are just must-reads. And a more recent work that has followed in Meadows' footsteps a bit, I think, is Kate Raworth's "Doughnut Economics", which has been enjoying some popularity lately.

Now, in terms of addressing technofascism directly: I did mention Edwin Black's "IBM and the Holocaust". One that was recommended to me by my friend and colleague Daniel James is "Friendly Fascism" by Bertram Gross, a veteran of the Roosevelt and Truman administrations. A chap whose writings I quite like is Tijmen Schep - I think I'm saying that right - who has written about what he calls "Social Cooling", the chilling and inhibitory effect of surveillance and privacy-invading technology.

And in reference to what I was saying about violence in the name of copyright: that article is by Lee, on "SWAT teams enforcing copyright".

Okay. Coming right home to my own field, computer science and security engineering: the mighty Bruce Schneier has written two that I definitely recommend, "Data and Goliath" and "Click Here to Kill Everybody". But honestly, the books aren't as good as the blog. He has a brilliant online resource which is always filled with the latest analysis and stories that hackers absolutely love.

Bruce and some others kind of attempt a project which I think is very much aligned with anti-technofascism, which is broadly called "technology in the public interest".

To start drawing these arguments to a close: I hope the outline I've given of what I think are the important qualities of Fascism, and the connections I've made to practices and events within the current technology industry, lend some credibility to the idea that we should take seriously the very real danger of fascist projects emerging from consumer technologies on the trajectory we're currently travelling.

I'd just like to mention a few pieces of evidence that point to that. In general, there is an ever closer connection between big tech and sensitive state functions - by which I mean things like census taking, which is often conducted by private data companies, and elections. At what point do we pass the threshold of what the comedian Stewart Lee called "Mr. Fox's guide to henhouse security", where we let Facebook and Google run our elections? In Britain we already allow... I've forgotten the company - is it Palantir or Northrop Grumman, or one of these big data companies? They run our census. The British government does not conduct a census on the British people.

So there's a fusing of state functions with big tech companies, and the latest example we're seeing, of course, is in the health sector. Another is the very uncomfortable relationship between law enforcement, private detective agencies, private security firms, and companies like Amazon with their Ring doorbell system, which deploys a kind of ubiquitous surveillance at the street level and the private property level, but can so easily be adapted to incorporate facial recognition and street-by-street tracking of individuals.

We are in a situation, as Edward Snowden very clearly pointed out, in which our smartphones are simply always-on surveillance and tracking devices. And if at this point you're not prepared to say "Well, hold on... okay, maybe there's something to this technofascism thing", it might be interesting to think about what has been a project towards the criminalisation of civic difference over the last 15 years.

An example of that is copyright and "intellectual property", which was always a civil matter, but which media and technology companies have lobbied aggressively to move into the criminal realm - to the point where, I think around 2004, the first shooting occurred. It was actually of a street vendor selling DVDs. And there was a time, around 2003 to 2005, when the RIAA - the Recording Industry Association of America - had garnered so much power, so much influence, that it had its own kind of paramilitary wing. You know, you had guys in SWAT suits with batons and guns busting down doors and shooting at people over some fucking music!

You know the saying "There's no McDonald's without McDonnell Douglas"? Well, there's no Simon and Garfunkel without Heckler and Koch. It's a situation, as I said about business interests being congruent with the pre-Hobbesian idea of anarchic feudalistic gangs, where an organisation like the RIAA can have an armed contingent. And obviously this is in America, where the militarisation of the police is a bigger problem than in most places on the planet... but there may come a day when you face a real threat of physical violence for not having the right smartphone, or for walking past the wrong camera with the wrong look on your face.

It's not that this technology causes Fascism, it's that it enables it - so long as we do nothing to stop that enabling function.

Look back over the last ten years: just a decade ago, having a mobile phone was still a choice. We lived in a world where we thought that polite people minded their own business. In just ten years we've moved to a situation where we say that people have no right to privacy, that "privacy is dead", and that we should be suspicious of people who don't carry a phone!

This profound fear of non-connection, of nonconformity, of non-belonging, actually seems like something potentially far worse than the Fascism we've seen in the past, or than Communism. And the transformation of technology "from tools to tyranny" - that's what I mean by technofascism.

So, in summary, I think this word is legitimate and important; it has utility. It's more than a cute and smart-arsed phrase. I think it's something that could be useful to us in challenging the trajectory we're currently on - if only as a strawman for serious debaters, and serious arguments as to why what we're seeing is not technofascism.


Tags: technofascism, privacy, corruption
