In the Face of Tech Kaiju

Triclinium, Excavated in the House of Actaeon, Pompeii by Charles Frédéric Chassériau, ca. 1824, from The Metropolitan Museum of Art.

by Arne Brasseur

It's been nine years since the publication of the paper that introduced the Transformer architecture, the foundation of today's LLMs, and four years since ChatGPT kicked off the current genAI hype cycle. In tech years that is a long time, but opinions are still heavily divided on where this is going.

Many claim it's the undisputed future of work, of business, and of programming. At the same time AI is overwhelmingly unpopular in society, and the big AI companies are still losing money hand over fist.

Every single AI startup, without exception, does the same thing: turn hundreds of millions of dollars into tens of millions of dollars, or a few billion dollars into a few hundred million dollars. None of them are improving their margins. None of them have a solution.

So far you can still say this is the classic VC playbook. You first capture the market and try to become the dominant player at all costs, on the assumption that market position, vendor lock-in, and improving unit economics will eventually turn that around into a massive profit. VCs typically work on a 10-year cycle, and generally want to start seeing at least per-unit profit margins five to seven years in. That clock is ticking, and given the current numbers, it's hard to imagine any of these companies hitting that mark.

Investors and policymakers alike have been banking heavily on AI as the next driver of economic growth, but they too are starting to get nervous. After missing the boat on mobile, Microsoft decided to go all in on AI, and is now facing its worst quarter since the 2008 financial crisis.

And there are early signs of the enshittification that is yet to come. ChatGPT now comes with ads, and last week Microsoft injected Copilot ads into 1.5 million GitHub pull requests before eventually backing down.

It's worth noting that all the big US AI companies (Google DeepMind, OpenAI, Anthropic, and xAI) have their roots in the TESCREAL ideology, with the aim of building AGI (Artificial General Intelligence) as a stepping stone to ASI (Artificial Superintelligence): a god-like entity that will solve all of humanity's problems. In that context, how much do you really care about something as petty as profitability? Remember that OpenAI was originally founded as a non-profit. But you need to keep the wheels turning long enough to get to that destination. And while some of the things LLMs can do are certainly impressive, at their core they are still dumb statistical devices. The claim that a future iteration of the same technology will bring forth AGI is simply not credible.

Meanwhile, of course, "AI" is everywhere. It is changing society, and it is changing how software development happens. The amount of capital being spent to push people onto the AI train is overwhelming, leading to a staggering amount of FOMO. Companies, developers, investors — at every level the message is the same, "we can't miss out on this". Some embrace it wholeheartedly, some resent it but begrudgingly accept it, afraid that speaking up publicly will hurt their employability. And some resist.

There is no better way to highlight this contrast than by putting my Mastodon feed next to my LinkedIn feed. On the Fediverse the general tone is to resist and reject genAI at all costs. Every day I am pointed at the dangers and downsides, often backed by solid empirical research. LinkedIn is all about the amazing advancements, the AI-powered product launches, the incredible case studies.

And the problem is, both feeds, both demographics, are right. Especially when it comes to software development, the (perhaps unfortunate) truth is that AI-assisted coding works. There are a lot of caveats to place on that assertion. It's better at some tasks than at others. It makes it easy to produce great volumes of software at speed, but hard to maintain a quality baseline. It works exceptionally well for some use cases and some people, while for others it's a frustrating, unsatisfying experience that creates tech debt at scale and erodes the theory-building that lets you keep reasoning about the software you are creating.

The programming world is divided on AI-assisted coding, and a few different people have tried to put names to the two sides. While any such characterisation is necessarily reductive, I found some of these takes particularly shallow. Bjarnason comes closest to a deeper truth, pointing at the fact that many of us were already disillusioned with tech, especially financialised, VC-backed tech, and how it has taken up all the oxygen since 2008, and all through the ZIRP (zero-interest-rate policy) era.

Fundamentally, looking again at my Mastodon feed, I see an idealistic demographic, involved with Open Source, with community building, with standards work and the open web. These are people who still believe that technology can do good, as an enabler for personal growth, and a catalyst for human connection. There are few communities that better exemplify a value-driven perspective.

LinkedIn is the perfect window into what a tech-first perspective looks like. Tech solutionism in its purest form, a caricature of a greedy businessman turned social media feed.

The twist here is that these two are joined at the hip. It's easy to be idealistic, to dedicate free time to open source and community, when you have a cushy tech salary and few other worries. We wouldn't have the internet without the military funding of ARPA. For years Mozilla sustained the development of Firefox by selling its front page to Google. We are but tiny creatures in the face of tech kaiju.

I don't, however, believe in defeatism. It's a complex world, with many injustices. Right now most of us are trying to get by under late-stage capitalism. You don't always have the luxury of choosing the moral high ground. But our choices do matter, individually and collectively. Be smart, honest, and open about the choices you make.

In this piece, Language and Communication researcher Mark Dingemanse talks about the use of AI in research, and about shifting from a tech-first "permission" perspective to a value-driven one.

If you use these values as a compass to steer by, it’s easier to navigate the landscape of technology use. On the other hand, if you find yourself seeking permission, one useful thing to do is to step back and inspect the underlying value conflict.

...

The approach is not to recommend or forbid particular tech products, or particular uses; it is to provide a matrix for mindful choice. For any attested or conceivable use, you can ask: how would this help or hinder my upholding of high standards of research integrity?

I recommend reading the whole article. The values Dingemanse proposes, honesty, scrupulousness, transparency, independence, and responsibility, are also an excellent baseline for any engineer with a pulse. In the context of software development we can add some of our own, perhaps reliability (does what I ship meet a high bar?) or ownership (do I feel confident putting my name on this?).

How you create software is a complex choice. It always has been. It includes choosing technologies to build on, ecosystems to be part of, making architectural choices, organising teams, guiding individuals, communicating with stakeholders, setting up feedback loops. When, if, and how you add AI into that mix is one more choice. Gaiwan was founded on the premise of being more mindful about these kinds of choices.

#tea-break

At Gaiwan we share interesting reads and resources in our #tea-break channel.

The research is out! Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task: "Large Language Models (LLMs) like ChatGPT and Grok don’t just help students write—they train the brain to disengage." Even worse, "Their neural activity remained below baseline, even after AI use was stopped."

Matheus Lima on "Nobody Gets Promoted for Simplicity": "The actual path to seniority isn’t learning more tools and patterns, but learning when not to use them. Anyone can add complexity. It takes experience and confidence to leave it out."

Scott Barker on "How to prepare for the next decade":
"Pick one project that will take a minimum of a decade to actualize (i.e. building a body of meaningful work, mastering a domain of interest, raising children with intention, spiritual development) and commit to it.

Write your obituary. Then ask yourself, through that lens: What am I doing right now that that version of me will find trivial? and What am I neglecting that that version would care deeply about?"

What one project are you working on that will take at least 10 years to bear fruit?