A Revolution and its Children
I have criticized AI (in the form of LLMs) in other posts of mine, specifically how they fundamentally do not reflect human thinking and creativity and how AI products are designed to influence and control our lives if we uncritically adopt them. In this post I want to critique what I call the “cult of the artifact”, which describes a thinking pattern in people who focus too much on the outcomes achieved by AI.
Top-Down Workforce Revolution
It’s another day, another opportunity to bolster the share value of AI companies. Forbes happily obliged and published a summary of a Microsoft research paper which claims to have analyzed which jobs are most affected by AI. While the original paper makes no direct claims that these jobs will be replaced by generative AI, the Forbes author gets a bit trigger-happy and already makes references to tech layoffs and to Nvidia CEO Jensen Huang, who has been claiming for years now that AI will replace every job under the sun.1
Propaganda. That’s what we call this garbage cloaked as reporting. It is uninformative, it completely misses the point of the original report, and it makes misleading connections that lead the reader to infer a meaning which is not actually there. It’s a harmful contribution to a workforce revolution led by the most powerful economic entities on this planet. Autocratic and right-leaning governments, in conjunction with Big Tech cloudalists, are planning a revolution spanning our social lives as well as our workplaces.2 This revolution is not in the interest of the worker, or of humanity for that matter. It serves only the ruling class.
The goal is to devalue our work to the point where we have to accept working 40 hours a week for a pittance. The goal is to make us feel worthless and weak in comparison to an almighty AI that never tires, never falls ill and always performs at a level we could never reach. The goal is to make us scared serfs.
It’s just Vibes
In my profession of software engineering, people seem more and more to take the position that we need to embrace generative AI before we get replaced by it. AI is seen as an inevitable force. Its ethical position is seen as neutral: it is neither good nor evil, it just is.
You can smell the cold sweat. You can hear their racing hearts. These calls for assimilation are fed mainly by fear. Fear that the computer will take away our livelihood and that the powerful companies at the helm of its development will show us no mercy. The Zeitgeist is one of uncertainty, be it political, social or, taken to its natural conclusion, existential. It’s a depressing vibe that my fellow engineers are adopting.
To alleviate the pain, they are adopting tools like GitHub Copilot and Cursor. They see it as necessary to debase themselves and to trust the computer if that means that, in the end, they are allowed to keep their jobs. To lighten the mood they will occasionally joke about the trivial errors their AIs make, but instead of earnestly criticizing the technology, they hold on to the belief that in a few years the AIs will become perfect. They hope the revolution will not eat them if they preach the word of their cult.
One of these optimistic visions is the idea of “vibe coding”: the notion that a person without any practical or formal training in software can write sophisticated software by simply asking an LLM to generate all the code for them. Writing software is done on “vibes”, and real knowledge of what the code does is obviously not required. If it feels right to the “vibe coder”, then it must be good.
Such low-code / no-code solutions are nothing new. The idea that a layperson can create functioning software with predefined building blocks has been validated many times. Most of these solutions, however, operate in a constrained domain. UI builders, visual scripting frontends and workflow automation editors can be used to achieve a specific task in a domain that leaves less room for error. The low-code / no-code solution knows which errors can come up in production and has appropriate safeguards built in. It is therefore not a huge problem if the user of such a solution doesn’t understand the code that was generated or how it works.
However, “vibe coding” is different in that its domain is not constrained.3 The LLM will try to come up with code no matter what the user wants it to generate. A web frontend for a shop, an app to track sleep habits or even a complicated database replication scheme in a distributed network. If you can describe it in plain English, the LLM will bring it to life.
This is a faulty simplification of what software engineering is about. Constructing software is reduced to the mere production of code from human-language descriptions. All the usual considerations of performance and security, the question of maintainability and the idea that code communicates ideas to other people are thrown out the window, at least for the user operating the LLM prompts. The idea is that “producing code” is the valuable action in the software engineering process, as this is the only thing an LLM is capable of. It’s not about understanding the code or its inner workings. It’s not about engineering a solution. It’s only about the existence of an artifact as a proof of value.
Syntactic Catamorphisms
A similar use of LLMs that I see more and more frequently is text summarization. This gets used to summarize e-mails, news articles, online videos and maybe even this blog post. Does smashing together semantically similar words constitute a useful summary?
What exactly makes for a good summary of a text? This question is obviously hard to answer, as it is contingent on many factors. Who is the audience of the summary? What kind of text is being summarized? In which context will the summary be used? The usefulness of a summary is highly contextual and open to interpretation. Can this interpretation be done by a machine? Wouldn’t those decisions fundamentally necessitate human experience?
LLM-produced summaries often fail to grasp the importance and the implied consequences of a text’s central points and themes. They will summarize paragraphs by stringing together semantically similar words with no regard for nuance. A “sometimes” will be turned into an “often” without regard for how the message of the text changes. Some aspects of the text will get more attention while others are only mentioned in passing. A chain of arguments will be reduced to its conclusion alone.
All of this is fine if you simply don’t care about any of it. If you don’t care about these nuances, AI summaries will get you to a superficial understanding of the text in a short amount of time. If you don’t value the time other people take to write, AI summaries are a great time-saver. Just like with “vibe coding”, it’s only about generating an artifact without any deep understanding of it. The summary is enough; actually understanding the text is just annoying excess.
The Value of the Artifact
It is here that we can identify a philosophical rift between the proponents and opponents of the widespread usage of this technology. The former value the artifact (the output produced by the LLM); the latter value the inherent meaning that is usually associated with the artifact when it is produced by a human. The former strive for gains in productivity by letting the computer create solutions without the hassle of engaging too deeply with the problem at hand. The latter believe that there is inherent value in the pursuit and understanding of the solution, which is destroyed when an LLM provides it for you.
I do not claim that one side is wrong while the other is right.
The Connectionist Triumph
This rift is something I have been struggling with in my own views on the subject. It first appeared to me during the triumph of neural networks in machine learning. With the advent of massively parallel computation on consumer hardware, we suddenly had these magical classifiers that outperformed all other techniques in many tasks. We had the results, in numbers, in black and white. Especially in computer vision and signal processing tasks, neural networks reigned supreme over decision trees, SVMs, Bayesian networks, etc. However, a problem that was apparent pretty early on was the poor interpretability of a trained neural net. What do the trained weights mean? What do they represent? How are features interpreted? Why does adding hidden layers sometimes improve and sometimes diminish the classifier’s performance?
These questions, while open, were irrelevant. The artifact (the classification performance) was the thing we cared about. I do believe this started a change in the way we saw machine learning techniques. More and more, we cared about model performance and completely eschewed the understandability of our resulting models. Model architectures became ever more complicated and convoluted, but as long as they performed well in benchmarks, all of it was warranted. We learned to love the black box, and we marched on to build LLMs with it.
Back then I did not like this change, and to this day I see myself on the side that is more critical of the usage of these black boxes. However, it was hard for me then, and it is hard for me now, to simply ignore the value of the artifact. While I still believe in the inherent value of finding solutions yourself, I cannot ignore how helpful a RAG chatbot for an unfamiliar software library can be. I cannot ignore that sometimes I, too, rely on technology that performs tasks in a domain I do not understand. My grammar checker checks my text precisely because I don’t understand the grammatical rules on a formal level.
A classification model that performs well, performs well. A statistical model that we can use to predict the future does its job. When people tell me that an LLM chatbot helped them explore a new topic or improve their grammar, how can I sit here and claim that this has no value?
The artifact has value, but at what opportunity cost?
The Value of Human Work
What do we lose when we focus on the artifact? I can ask ChatGPT for a summary of “Moby Dick” and it will provide me with something one could interpret as “good enough” for understanding what the book might be about. The answer will be delivered in less than a second, while actually reading the book takes many evenings. What is the difference?
This reminds me of a conversation I had with a classmate back when I was in school. They argued that it is a waste of time to learn text interpretation and do mandatory reading in school; their main argument was that authors of novels should just write down their central ideas in a short and digestible non-fiction text instead of writing a large and hard-to-decipher novel that incorporates those ideas. I think the main point they failed to see is that some ideas lose all potency when treated devoid of human experience.
You can tell someone: “Obsessively seeking revenge is a net negative for yourself and everyone around you!” While this statement is true, it has no weight. It is merely a conclusion without any prior or further thought. It might be true, but it tells us nothing. The story of Captain Ahab, on the other hand, who wants to kill the whale that took his leg, illustrates the anguish brought on by such a pursuit much more strikingly. We come closer to feeling what is going on inside this fictitious person.
Stories are written by us, and in writing them we infuse them with our human experience. Every text written by a human is the result of their accumulated influences, everything that moved them, made them think and shaped the person they now are. A summary created by a search for semantically similar tokens has no regard for this fact. The LLM does not know or care unless these facts were already present in its training data. When I ask ChatGPT for a summary of “Moby Dick”, it doesn’t distill the book’s themes itself; it takes them from a plethora of existing literary analysis and critique. But feed it a new text, something it has never seen before, and it will guess. Extracting meaning from the text becomes a game of chance and statistics, not of understanding.
Through the Meat Grinder
People hoping that AI will write articles, reports and meta-analyses do not value the expertise of writers. The ones prophesying that AI will create new software do not value the ingenuity and experience of software engineers. The people thinking that AI will do math, research and science do not value the years of dedication that scientists have put into their profession.
All these people care about is the artifact. The ready-made solution repackaged for the millionth time, requiring no time, study, emotion or passion. The absolute reduction of all human pursuits to an economic product that may be monetized. The human spirit forced through the meat grinder, so that we can usher in an age of hyper-stagnation, where no one learns, no one invents, no one reaches; where the status quo may be calcified in society and protected forever.
I am not sure that this development is new. Computers have shown us that there are many tasks they are fundamentally better at than we are. So what if this is just the natural progression of the technology, and in a few years I am the one who will regret not throwing all my programming expertise away in favor of “vibe coding”? What if I should just end my resistance and become one with the computer?
The Whispers of Li
So what happens once the revolution is complete and its children have been devoured into the belly of assimilation? When our thinking has merged with the neural networks of AI? When our daily communication, decision-making and problem-solving are constantly influenced by a cabal of ultra-powerful technocrats?
Singularity Serfdom
Political puppets such as J.D. Vance and their ultra-rich sponsors like Peter Thiel are disciples of the dark enlightenment, a far-right, fascism-aligned ideology that aims to restructure society into one totalitarian, techno-feudal system. A crucial step for this ideology to succeed is to bring about a feudal system that spans the whole of society. In theory, achieving this is easy: the ordinary aspects of the common man’s life must be subject to rent, and the common man must accept this.
Using AI, it is easy to extract rent from every aspect of modern life. AI products seep into our communications, making us pay with our data. Knowledge workers are convinced that they need to pay for AI to extend or replace their abilities. People are convinced that they are in control, even as their AI usage makes them less curious and leads them to take shortcuts in research and reasoning. Once they have lost the ability to engage with complicated topics without asking a statistical model for summaries, they will realize that the small tribute they pay every month was rent for the ability to think and to understand the world around them.
The cult of the artifact tries to convince us that it is necessary for us to embrace this development. Thinking and our own agency will become a fiefdom. I cannot think of a better way to usher in the age of dark enlightenment.
A Cruel Symbiosis
Whenever I hear people explain to me the virtues of AI, I do wonder what their own sense of self must look like. They are advocates for an idea that has never benefited them and never will. Like vassals proclaiming their loyalty to and trust in feudal lords who are trying to exploit them. In the case of many software engineers, their lord is called “Claude” or “Copilot”.
They embrace the symbiosis of the machine and their brain, a symbiosis they believe they are in control of. It reminds me of the “Li” paintings by H.R. Giger. A person, seemingly content, transformed into a flattened figure of steel and flesh. An amalgamation of living and dead matter, a metal structure that mimics what usually occurs naturally, replacing it and melting into a human skull. Only a glimmer remains in their eyes, but their humanity is gone. A snake covers their forehead, possibly the serpent of the Book of Genesis, which has achieved its goal of corruption.

Li is whispering. It is not clear whether they are warning us or inviting us. In their world, it is not clear whether we can resist. The transformation is already complete, and behind them is nothing but emptiness. A true singularity.
Energy Constraints
Posthumanism and right-wing accelerationism do not come for free. This revolution is not made to last. The energy demands of this technology fundamentally limit its growth potential. To quote the Bloomberg report “AI Is Already Wreaking Havoc on Global Power Systems”:
By 2034, global energy consumption by data centers is expected to top 1,580 TWh, about as much as is used by all of India.
It is infeasible for the current growth to remain steady given the amount of energy needed to support it. Not only can we not support this trend in a society that needs energy to sustain its industries, public life and healthcare; more importantly, Mother Earth cannot support it either. A look at energy statistics makes it clear that most of our energy is still produced from coal and gas.
Not only is the damage done to our planet via greenhouse gases an undeniable tragedy, but the current AI trend will accelerate conflicts that arise from the fight over clean freshwater. According to the Forbes article “AI Is Accelerating the Loss of Our Scarcest Natural Resource: Water”:
AI server cooling consumes significant water, with data centers using cooling towers and air mechanisms to dissipate heat, causing up to 9 liters of water to evaporate per kWh of energy used.
Combining this claim with the assumption of the Bloomberg report, we must conclude that we will use up to 14,220,000,000,000 liters of water to cool data centers in the year 2034.4 If this seems like a comically large number, it’s because it is. It reads as “fourteen trillion two hundred twenty billion”.
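For transparency, here is the back-of-envelope arithmetic behind that figure as a minimal sketch. It simply takes the Bloomberg projection (1,580 TWh) and the Forbes upper bound (9 liters per kWh) at face value and multiplies them; as footnote 4 admits, both inputs are debatable.

```python
# Back-of-envelope check, taking both cited figures at face value.
projected_energy_twh = 1_580   # Bloomberg: projected data-center consumption in 2034 (TWh)
water_per_kwh_liters = 9       # Forbes: up to 9 liters evaporated per kWh of energy used

kwh_per_twh = 1_000_000_000    # 1 TWh = 10^9 kWh
total_water_liters = projected_energy_twh * kwh_per_twh * water_per_kwh_liters

print(f"{total_water_liters:,} liters")  # 14,220,000,000,000 liters, i.e. ~14.22 trillion
```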
Whether you agree with this calculation or not is not important. What is important is that we are currently destroying our planet in order to support this technology. Once again, this is nothing new. Countries all over the world would rather burn down the world than slow down their production facilities. This is especially true for China and the USA. Now, with AI and the energy-hungry data centers it comes with, a new sink for fossil energy has been found, and with the current governments in these countries it is clear that global warming will accelerate in the next decade. Modern, capitalist society is not made to plan long-term.
In 20 years, we will ask an AI how to fix our climate, and, for once, its fixation on the status quo will prove helpful. It will talk about renewable energy sources and policy changes, and might finally suggest that we leave our wasteful days behind us.
However, then, it will be too late.
Ad Infinitum
If there is one common thread in the conversation about AI, it is our fixation on the artifact. We want things to happen, things to be created; we want outcomes and results. And if there is one thing I am certain about after talking to many people about this subject, it is this: Nobody gives a shit about AI. Nobody cares about LLMs, neural nets, text embeddings and the like. People excited about AI are excited about the artifact. They don’t care where it comes from.
We, as a society, haven’t cared for decades. Our smartphones are made in sweatshops. Our clothes are woven by bloody hands. Our prosperity is upheld by suffering. We like it this way because at least, at the end of the day, we are in a position of power.
Many people are hopeful that AI puts them in a position of power, which I don’t believe is true. They are paying rent to feudal lords, and it will take another revolution to overthrow this system. But that, once again, is just history repeating itself.
1. It will never not be funny that a staff writer of Forbes reports on his own replacement. What a soul-crushing job that must be. ↩︎
2. The term “cloudalist” refers to a new class of people and entities that subvert capitalism by extracting rent from cloud-based assets such as online spaces, computing power, influence over communication systems, etc. I have taken it from the book “Technofeudalism” by Yanis Varoufakis; ISBN-13: 9781529926095. ↩︎
3. This is my main counter-argument when people call AI a “tool”. It cannot be considered a tool when it claims it can do “everything”. ↩︎
4. I do realize that this calculation is pretty much useless, as the reasoning behind its data is debatable. However, it still makes my point clear: whatever amount of water and energy we end up using in 2034, it will be ludicrous. ↩︎