
AI and convenience

One of the central arguments Marshall McLuhan makes in Understanding Media is that new media technologies act as extensions of the human nervous system. It follows that these technologies have to operate at the speed of our nervous system, both in how quickly they are adopted and in how quickly they can be used. This push towards ease-of-use is generally seen as a good thing, but it has a downside: guns are easier to use than knives, yet the act of killing is consequential enough that keeping some barrier in front of it is arguably good. In this sense, ease-of-use becomes a problem when it makes bad things easier to do.

The rise of the newest Big Media Technology, AI, highlights this tension. A lot of discussion around AI frames its drawbacks, such as resource costs or unethical practices like art theft, in opposition to its ability to automate tedious labor. This framing often reflects a fear of appearing anti-technological, or of criticizing automation itself, which has historically driven progress in industries like manufacturing and computing. Pointing out those other drawbacks has merit, but I think it's also worth reflecting on the more fundamental issues tied to automation, which have a long history of theorization. In this post I'd like to explore how automation, by reducing the labor required to create or act, can erode the connection between individuals and the ethical implications of their actions.

Some theory

An important concept in the Marxist tradition is the idea of alienation. This is a broad and complex topic even within Marx's original works, but one of the main instances of alienation he's concerned with is the alienation of labor. In essence, Marx argues that humans are characterized by their capacity for labor, and specifically for realizing that labor in real objects. Traditionally, this is fulfilling: you can pour your subjectivity into an object and take pleasure in other people benefitting from it. Under capitalism, however, these real objects (capital, the accumulated results of labor) begin to rule over the people who produce them, alienating laborers from the fruits of their labor.

In general, when people draw a line between things that are okay to automate and things that aren't, it's this sort of distinction they have in mind. We need to do labor to make money to live, but we also need labor to self-actualize, and sometimes the labor we do for money is also self-actualizing. When automation takes away the opportunity to do self-actualizing work, that's bad.

However, alienation between workers and the commodities they produce also means most people do not associate commodities with the subjectivity and social relations present in their production. This is, for instance, where a lot of the hype for AI art comes from: evangelists don't see art as the product of a labor process, but as a commodity whose value is granted by its innate properties (beauty, sexiness, or whatever). In this framework, a machine that can create commodities imbued with those properties is just as good, since its products slot into the commodity market seamlessly. AI models themselves suffer from this too: the tens of thousands of hours of real labor done by data classifiers in the third world, or by the artists whose work became training data, are sublimated into a final product that's just "a machine that draws" or "a machine that writes", isolating consumers from the social relations of labor involved in its creation.

More specifically, though, I think it's interesting to consider that this also means most people do not associate commodities with the ethical decisions their makers took during production. If a real person is deciding, for instance, whether or not to give someone a loan, then in the process of that labor they have to make subjective ethical judgments, and those judgments necessarily become reflected in the final product.

In the AI space, this ethical issue is commonly discussed in terms of responsibility (who would be legally responsible if an AI did something unethical), but I think that kind of discussion inevitably gets silly and sci-fi once people seriously entertain the notion of holding the model itself responsible. A point I find more interesting is how this alienation allows people and corporations to more easily do unethical things, even when they know they could be punished for it later. I'll elaborate on this below.

Examples

In "Will A.I. Become the New McKinsey?", Ted Chiang discusses how AI can end up serving a similar purpose in the corporate space to consulting companies like McKinsey.

That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

That article is worth reading in full if you're interested in AI ethics that isn't just sci-fi roleplaying, and its reflection on how this alienation from ethics plays out in the corporate space is excellent. But as AI technologies become more available to everyday consumers, I think it's worth considering how AI can also allow individual people to alienate themselves from the ethics of the things they're doing.

Consider, for instance, doxxing. If you're manually collating publicly available information about someone, it takes a while, and at some point the shame and awareness of what you're doing kick in; you can only keep going if you're really mad. If you're using modern AI OSINT tools (Open-Source Intelligence, essentially gathering intelligence about someone or something from publicly available sources), though, you can dox someone on a whim by offloading the hard work onto a robot. In this situation, AI is just lowering the labor required to do something, but that means lowering the labor required to do a bad thing.

Using AI to generate unethical art is another example. If you want to draw porn of a real person without their consent, you could always do it yourself, and you could probably even commission someone to do it. But AI allows you to create it without anyone ever needing to make an ethical decision along the way.

Vtuber side note (AITA? this mentions NSFW stuff): I remember seeing that there was a meme in the vtuber community where people would make AI-voice NSFW audios of Hololive vtubers doing very bad sex stuff (I am a pervert, so my standards for what counts are high) (and, contributing to the point, nobody would make those if it took a lot of effort; you can only make them as funny memes if it's low-effort), and I thought it was really weird and bad. Am I a prude for this? I know drawing vtubers doing sex stuff is fine if they're okay with it, because it's basically just their OC, but is it not kind of evil to make voice sex stuff out of their voices with AI????? It's their real flesh-person voice. If they wanted to make erovoice they would.

I think people realize this to some degree when these innovations are introduced. For instance, the new Apple Intelligence features that summarize texts and emails have been met with almost universal revilement by non-sycophants. I believe cultural critic Ryan Letourneau explained the discomfort with the idea best on his radio show.

This repulsion to the idea is encouraging. And sometimes people just never warm up to the technology: I don't think anyone uses the Gmail or WhatsApp reply suggestions to this day, for instance. Sometimes they tell me to call my mom by pet names because they think she's my wife. However, using ChatGPT for fanfiction writing or anime character roleplay was also laughable back when it first launched, and now it's so popular that people are making real money self-hosting LLMs for Discord fandom roleplay bots. I think we can get accustomed to that kind of thing quickly, which is scary.

There are other examples that are easy to think of (e.g. scams), but what I think is important about these situations is that they are, essentially, just problems of automation and labor reduction, the main selling points of any automation technology. Chiang points out in his article that being anti-technology in this sense is not necessarily a bad thing: in the context of corporations, it's good to prioritize economic justice over shareholder profits. I think, on a more personal scale, it's good to prioritize kindness over ease of use. It's good that things are hard to do! Labor is what allows you to put your subjectivity into things, and part of that subjectivity is your personal ethics. Doing things conveniently, with very little labor required on your part, can make it easier to do things you'd regret later.

I don't think there's any call to action I can put here. The past few years of technology innovation have shown that companies are willing to eat a loss on incredibly unpopular AI projects until the bubble bursts. I guess, on an individual level, it's good to think about your own relationship to the conveniences borne of the technology you take for granted in your daily life.