51 Ways to Spell the Image Giraffe: The Hidden Politics of Token Languages in Generative AI

Day 2 · Dec. 28, 2025, 23:00-23:40 · Ground · English · Art & Beauty
Generative AI models don't operate on human languages – they speak in **tokens**. Tokens are computational fragments that deconstruct language into subword units, stored in large dictionaries. These tokens encode not only language but also political ideologies, corporate interests, and cultural biases even before model training begins. Social media handles like *realdonaldtrump*, brand names like *louisvuitton*, or even *!!!!!!!!!!!!!!!!* exist as single tokens, while other words remain fragmented. Through various artistic and adversarial experiments, we demonstrate that tokenization is a political act that determines what can be represented and how images become computable through language.

Tokens are the fragments of words that generative models use to process language; tokenization is the step that breaks text into subword units before any neural network is involved. There are 51 ways to combine tokens from an existing vocabulary to spell the word giraffe: from the single token giraffe to splits into multiple tokens such as gi|ra|ffe, gira|f|fe, or even g|i|r|af|fe.
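
How many spellings exist depends entirely on which vocabulary is loaded. As a minimal sketch (assuming the GPT-2 vocabulary via Hugging Face transformers; the pieces, and therefore the count, differ per model and per handling of word-boundary markers such as "Ġ" or "</w>"), the possible spellings can be enumerated with a short recursion:

```python
from transformers import AutoTokenizer

# Load one BPE vocabulary. GPT-2 is an illustrative choice; the exact set of
# pieces, and therefore the final count, differs between models and depends on
# how word-boundary markers ("Ġ", "</w>") are handled.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
vocab = set(tokenizer.get_vocab().keys())

def spellings(word):
    """Enumerate every way to split `word` into pieces found in the vocabulary."""
    if word == "":
        return [[]]
    results = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            for rest in spellings(word[i:]):
                results.append([piece] + rest)
    return results

ways = spellings("giraffe")
for way in ways:
    print("|".join(way))
print(len(ways), "spellings of 'giraffe' in this vocabulary")
```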

In one experiment, we hijacked the prompting process and fed token combinations directly to text-to-image models. Variations like g|iraffe or gir|affe still generate recognizable results, showing that the pieces at the beginning and end of a word's token sequence carry particular semantic weight in forming giraffe-like images; even iraff produces the same visual concepts. Because the tokenizer maps any typed prompt to a single canonical split, most of these token combinations can never be reached from text at all: the tokenization process sanitizes them away, and certain images therefore cannot be generated through prompting alone. English, or any human language, is merely a subset of the token languages.
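
One way such an experiment can be wired up, as a hedged sketch rather than the talk's exact setup: hand-picked token pieces are converted to IDs and passed straight to the text encoder of a Stable Diffusion-style pipeline, skipping the string-to-token step entirely. The model name and the specific pieces below are illustrative assumptions.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Illustrative model: the CLIP text encoder used by many text-to-image pipelines.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").eval()

# Hand-picked token pieces instead of a typed prompt. Whether a given piece
# exists must be checked against tokenizer.get_vocab(); unknown pieces map to
# the tokenizer's unknown-token id. "</w>" is CLIP's end-of-word marker.
pieces = ["g", "iraffe</w>"]
ids = tokenizer.convert_tokens_to_ids(pieces)

# Wrap with the start/end-of-text tokens the encoder expects and encode.
input_ids = torch.tensor([[tokenizer.bos_token_id, *ids, tokenizer.eos_token_id]])
with torch.no_grad():
    embeddings = text_encoder(input_ids).last_hidden_state

# These embeddings can then be handed to a diffusion pipeline (for example via
# the prompt_embeds argument of diffusers' StableDiffusionPipeline), so the
# usual tokenizer never sees, and never sanitizes, the chosen spelling.
print(embeddings.shape)
```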

The talk features experiments that use genetic algorithms to reverse-engineer prompts from images, respell words in token language to change their generative outcomes, and critically examine token dictionaries for edge cases where the vocabulary breaks down entirely, producing speculative languages of strange words formed at the edge of chaos, where English meets token (non-)sense.
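
A genetic search of this kind can be sketched roughly as follows (illustrative only: the model, the hyperparameters, and "target.png" are assumptions, not the talk's actual setup). Sequences of raw token IDs are recombined and mutated, and each candidate is scored by how closely its CLIP text embedding matches the embedding of the target image:

```python
import random
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Score sequences of raw token IDs against a target image.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
bos, eos = processor.tokenizer.bos_token_id, processor.tokenizer.eos_token_id
vocab_size = model.config.text_config.vocab_size

image = Image.open("target.png").convert("RGB")  # image whose "prompt" we seek
with torch.no_grad():
    img = model.get_image_features(**processor(images=image, return_tensors="pt"))
img = img / img.norm(dim=-1, keepdim=True)

def fitness(ids):
    """Cosine similarity between a raw token sequence and the target image."""
    with torch.no_grad():
        txt = model.get_text_features(input_ids=torch.tensor([[bos, *ids, eos]]))
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float(txt @ img.T)

LENGTH, POP, GENERATIONS = 8, 32, 100
population = [[random.randrange(vocab_size) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP // 4]                         # selection
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]                        # single-point crossover
        if random.random() < 0.5:                        # point mutation
            child[random.randrange(LENGTH)] = random.randrange(vocab_size)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(processor.tokenizer.convert_ids_to_tokens(best))
```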

These experiments show that even before generation occurs, token dictionaries already encode a stochastic worldview, shaped by the statistical frequencies of their training data – dominated by popular culture, brands, platform-speak, and non-words. Tokenization is, therefore, a political act: it defines what can be represented and how the world becomes computationally representable. We will look at specific tokens and ask: Which models use which vocabularies? What non-word tokens are shared among models? And how do language models make sense of a world using a language we do not understand?
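
Such questions can be probed directly against the published vocabularies. The sketch below compares how two tokenizers split the same strings and which normalized token strings they share; the model choices are illustrative assumptions, and the probe strings are taken from the abstract above.

```python
from transformers import AutoTokenizer, CLIPTokenizer

# Two illustrative vocabularies: GPT-2's byte-level BPE and the CLIP tokenizer
# used by many text-to-image models. Other models may split the probes very differently.
gpt2 = AutoTokenizer.from_pretrained("gpt2")
clip = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

for text in ["realdonaldtrump", "louisvuitton", "!!!!!!!!!!!!!!!!", "giraffe"]:
    print(f"{text!r:>20}  GPT-2: {gpt2.tokenize(text)}  CLIP: {clip.tokenize(text)}")

# Which surface forms exist in both dictionaries, once the word-boundary
# markers ("Ġ" for GPT-2, "</w>" for CLIP) are stripped?
def normalize(token):
    return token.replace("Ġ", "").replace("</w>", "").lower()

shared = {normalize(t) for t in gpt2.get_vocab()} & {normalize(t) for t in clip.get_vocab()}
print(len(shared), "normalized token strings appear in both vocabularies")
```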
