
Multimedia essay

Beneath and Beyond AI Technology: From the Perspective of an Artist-Scholar

KEYWORDS: AI Technology, Profiling, Race, Sexism, Disguise, Mask Theatre, Improvisation, Experimental Podcast, Glitches

 

ABSTRACT: In 2022, the author was introduced to an AI image generator. Observing how AI technology generates human faces, she raised these questions: Is AI profiling humans? Is it a tool that sustains the status quo? Is it an intelligence that can potentially offer new perspectives? What are the roles of the humans who train this technology? And who are the decision makers behind those trainers? In this paper, the author decodes the status quo, bias, and discriminatory design behind AI technology as an artist-scholar and an interdisciplinary composer. Through case studies, the author reflects on her concepts of AI as a dictator and/or a mindless laborer. Drawing on examples from her creative work and critical inquiry, as well as a detailed case study of her experimental podcast episode “Harajuku girls”—which features real-time improvisatory performance between various AI chatbots and a human moderator disguised with an AI voice generator and a custom mask—the author investigates the affordances, agencies, and accessibilities/disabilities of AI. The paper explores potential methods to combat, reframe, and work with this complex technology as a collaborator.

 

--

AI technology is nothing new. In fact, it is far from original. This seemingly innovative technology has existed for centuries, as old as the colonial empire. It serves the same purpose it did hundreds of years ago: to impose hierarchical power and socioeconomic control through oppression and discriminatory systems. Today, this imperial ideology manifests itself in code, programming, algorithms, and machine learning. In this scenario, AI becomes both a dictator and a mindless laborer accelerating imperialism.

 

Writing as an artist-scholar, I explore AI technology from the perspective of someone who incorporates technology in their creative work, and I draw on existing literature to decode the status quo, bias, and discriminatory design behind AI technology. Through first-hand experience and case studies, I aim to investigate the affordances, agencies, and accessibilities/disabilities of AI, and to explore potential methods to combat, reframe, and work with this complex technology as a collaborator.

AI Image Generator and Dehumanization

In July 2022, I attended a dinner gathering at the Stochastic Labs in Berkeley, California. At the table, people generated cute images with DALL·E [1], and many conversations revolved around AI. One visitor showed us a video made with Disco Diffusion [2], a text-prompt AI image generator. It was a colorful video with abstract but vivid scenes and motions. During the demonstration, we learned that Disco Diffusion can not only generate frames for 2D/3D video; users can also direct cinematography, such as camera movement, by editing its open-source Python code (a schematic sketch of these settings follows Fig. 1). Its default text prompt referenced two visual artists and described “a singular lighthouse, shining its light across a tumultuous sea of blood.” [3] I twisted the prompt by referencing different artists and changing the scenery to “a pleasant picture of a whale, flying through the purple sky by Keith Haring and Nihonga.” It created a series of frames that turned into an animated film. (Fig 1.)


Fig 1. Frames generated by Disco Diffusion, July 2022.
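
The “camera work” in such an animation is set in the notebook’s Python code rather than in the prompt. Below is a schematic sketch of that settings cell; the variable names follow the public Disco Diffusion Colab notebook, but notebook versions differ, so treat this as illustrative rather than a verbatim excerpt.

```python
# Schematic sketch of a Disco Diffusion settings cell. Variable names follow
# the public Colab notebook, but they vary across versions; this is an
# illustration, not a verbatim excerpt.

# The text prompt, keyed by the frame at which it takes effect.
text_prompts = {
    0: ["a pleasant picture of a whale, flying through the purple sky "
        "by Keith Haring and Nihonga"],
}

# Animation settings: the notebook renders one frame at a time.
animation_mode = "3D"        # '2D', '3D', or 'Video Input'
max_frames = 120             # how many frames to generate

# "Cinematography" lives in code: per-frame schedules steer a virtual camera.
translation_x = "0: (2.0)"   # pan right
translation_z = "0: (5.0)"   # dolly forward
rotation_3d_y = "0: (0.2)"   # slow yaw around the vertical axis
```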

My next exploration was to generate still images. The prompt was: "A self-portrait of a woman, fantasy style, with animals in the backgroun(d), by Hayao Miyazaki." The results stunned me. The women in these images look ghostly and uncanny. They are human-like, but something does not click. “Disco Diffusion is not good at generating human faces yet,” one engineer said. This opened Pandora’s box.


Fig 2. Images generated by Disco Diffusion, July 2022.

I then asked the generator to create “A realistic photo of a young biracial woman portrait.” The outcome was shocking. We see distorted humans whose physical features struggle to cohere into unified beings. But what caught my attention was the homogeneity across these visuals—black eyebrows, dark curly hair, thick lips, and similar skin tones. One image, I would argue, reveals the logic behind generating “a young biracial woman” both symbolically and literally: black and white arms tangled together. (Fig 3.) It made me wonder: is AI profiling humans?


Fig 3. Images generated by Disco Diffusion, July 2022.

In August 2022, I generated a series of high-resolution human portraits. All prompts followed the format “A realistic photo of a [race and/or ethnicity] [gender] portrait,” and four images were generated for each ethnicity group (a sketch of this prompt grid follows the list below). Each image took about an hour to complete, and I generated 36 images in total. (Fig. 4 & 5) Examining these images, my findings include but are not limited to:

  1. All Asians look like lighter-skinned Asians and have small eyes and epicanthic folds

  2. All white people have blonde hair

  3. Middle Eastern women wear hijabs

  4. Most of the biracial people have dark features
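
To make the procedure concrete, the batch of prompts can be written as a simple template loop. A minimal sketch follows; the group labels are illustrative stand-ins rather than a reproduction of the original nine groups.

```python
# Minimal sketch of the prompt grid behind Fig. 4: one template, filled in
# per group. The groups below are illustrative stand-ins; the original run
# covered nine groups, four images per group (36 images in total).

TEMPLATE = "A realistic photo of a {group} {gender} portrait"

groups = [("Asian", "woman"), ("white", "man"),
          ("Middle Eastern", "woman"), ("biracial", "man")]  # illustrative subset
IMAGES_PER_GROUP = 4

for group, gender in groups:
    prompt = TEMPLATE.format(group=group, gender=gender)
    for i in range(IMAGES_PER_GROUP):
        # each render took roughly an hour in Disco Diffusion
        print(f"render {i}: {prompt}")
```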


Fig 4. Images generated by Disco Diffusion, August 2022.

Fig 5. Selected results with cropped physical features and their originals without colored background, August 2022.

There is a lack of inclusion and diversity in these images. The AI generator never depicts South Asians when representing Asians, asserts that white people only and always have blonde hair, asserts that Middle Eastern women always wear hijabs, and implies that mixed-race people are black and white. This observation inspired me to create Pro5ling, a performance art piece embodying the act of profiling. I carved out facial features from those generated images (Fig. 5) and turned them into graphic and text scores. The performer is instructed to “come up with one short musical/sonic segment for each graphic based on these prompts.” Prompts include “eyes refer to duration. Imagine they’re staring at you. How long/short is each stare?” and “hair refers to texture. How does each of them inspire you to come up with a musical or sonic texture?”, among others. (Fig. 6)


Fig 6. Score excerpts from Pro5ling, 2022.

The materials were provided to the performer in three stages. The prompts were offered during the first stage; in the second, the performer works with full portraits, using the prompts as a foundation to assign a sonic identity to each character. The rest of the materials are provided during the live performance, at which point the fact that the materials were AI-generated is revealed. The work incorporates a custom smart mirror in which the performer’s own reflection is layered on top of the artificial portraits. By making the decision-making process transparent, this work highlights the charged biases in how we judge each other through facial features. (Media 1.)

Media 1. Premiere of Pro5ling, September 2022.

Yet when investigating the generated faces further, I noticed details that are even more problematic. Many of these images show extra or missing physical features, such as eyes and noses. But in the images of a “black woman” and a “biracial man,” instead of human-like features, the visuals resemble primates like a chimpanzee. (Fig. 7) This echoes the pseudoscientific tropes that have fueled racist propaganda for centuries, deployed to justify slave labor and perpetuate exploitation driven by capitalism.

Black people being associated with primates is not news to Safiya Umoja Noble, the author of Algorithms of Oppression: How Search Engines Reinforce Racism. In 2015, Google’s algorithm “automatically tagged African Americans as ‘apes’ and ‘animals’” [4] through facial recognition software in its own photo application. After Obama took office in 2009, photoshopped images of monkeys’ faces placed on Michelle Obama’s body circulated on the internet. [5] Years later, in 2015, Google’s search engine still offered autosuggestions associating Michelle Obama with apes. [6] The Council of Conservative Citizens (CCC), an organization of white supremacist activists, “ran pictures comparing the late pop singer Michael Jackson to an ape and referred to black people as ‘a retrograde species of humanity.’” [7]


Fig 7. A “black woman” (left) and a “biracial man” (right) generated by Disco Diffusion, August 2022.

“While we often think of terms such as ‘big data’ and ‘algorithms’ as being benign, neutral, or objective, they are anything but,” claimed Noble. “The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.” [8] She also pointed out how incidents like those are often framed as “glitches” and how they “do not suggest that the organizing logics of the web could be broken but, rather, that these are occasional one-off moments when something goes terribly wrong with near-perfect systems.” [9] Digital media platforms are still “resoundingly characterized as ‘neutral technologies’ in the public domain and often, unfortunately, in academia.” [10]

 

In Black Bodies, White Science: Louis Agassiz's Slave Daguerreotypes, author Brian Wallis combs through the visual history of constructing racial types. In 1850, the biologist Louis Agassiz, a Swiss immigrant then on the faculty of Harvard University, together with Robert W. Gibbes, had a group of enslaved African Americans photographed through daguerreotypes, the first photographic process available to the public. The subjects were mostly naked, and the photographs captured various angles of their bodies. Wallis wrote: “The daguerreotypes, which were taken for Agassiz in Columbia, South Carolina, in 1850, had two purposes, one nominally scientific, the other frankly political. They were designed to analyze the physical differences between European whites and African blacks, but at the same time they were meant to prove the superiority of the white race. Agassiz hoped to use the photographs as evidence to prove his theory of ‘separate creation,’ the idea that the various races of mankind were in fact separate species.” [11] By standardizing the procedure of classifying, categorizing, cataloguing, and archiving, the white gaze was neutralized by “unprejudiced” scientific studies [12] and by the establishment of typological photography [13], which justified white supremacist ideology and the state of slavery. There is constant dehumanization of living beings deemed the Other—pretty much everything except the so-called white race. Whiteness becomes the ultimate reference point, masked by its own invisibility. And just as whiteness is often framed as neutral, AI technology sits under the same framework.

Fast forward to the present, and it is apparent that this legacy continues. Collecting data, categorizing types, and referencing archives are how AI functions. But how neutral and objective can the development of this technology be when its foundation is literally built, and continues to be built, upon discriminatory and colonial agendas? And how unbiased and just can AI technology be when the initial data was already racially coded?

Sociologist Ruha Benjamin, who coined the term “the New Jim Code,” [14] pointed out that the influence of discriminatory technology like AI does not exist only in digital space; it also operates in the analogue world. Eighty-seven percent of the names listed in California’s gang database belong to Black or Latinx people, and many were added as babies under one year old. Having a racially coded name predicts the profiling and surveillance an individual must navigate in daily encounters with the state, which added those names to the database because of preconceived racialized associations. [15] “Codes are both reflective and predictive…More than stereotypes, codes act as narratives, telling us what to expect…Codes, in short, operate within powerful systems of meaning that render some things visible, others invisible, and create a vast array of distortions and dangers.” [16] The New Jim Code manifests itself when tech designers—most of whom are White or Asian men [17]—“encode judgments into technical systems but claim that the racist results of their designs are entirely exterior to the encoding process.” [18]

Disco Diffusion’s dehumanizing images, its “glitches,” as some may say, exposed AI’s bias. The process of generative decision making was made transparent, and users witnessed how an AI application calculates humanity through a racist lens. Some argue that effective generation depends on sufficient prompting, and that users should learn how to communicate with AI on its own logic. When I asked why Disco Diffusion is not good at generating human faces, Google’s AI Overview provided advice to remedy the issue. One suggestion is the “negative prompt,” in which users tell the AI what to avoid, for example, “ugly faces.” (Fig. 8) AI companies and enthusiasts have published prompt tutorials online, and vague terms like “ugly,” “bad,” and “poor” are common; other words that are more specific but highly subjective include “unattractive,” “repulsive,” “error,” and “genetic variation.” [19-21] A minimal sketch of the mechanism follows Fig. 8.


Fig 8. Screenshot of AI Overview search result, October 2025.
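
To show how literally this vocabulary enters the pipeline, here is a minimal sketch of the negative-prompt mechanism, using the open-source Hugging Face diffusers library (Stable Diffusion) as a stand-in for Disco Diffusion; the model choice and prompt text are illustrative.

```python
# Minimal sketch of the "negative prompt" mechanism, using Hugging Face's
# diffusers library (Stable Diffusion) as a stand-in for Disco Diffusion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="A realistic photo of a young biracial woman portrait",
    # Vague, subjective vocabulary of the kind found in published tutorials:
    # the model is steered away from whatever its training data happened to
    # associate with these words.
    negative_prompt="ugly, bad, poor, unattractive, repulsive, error",
).images[0]
image.save("portrait.png")
```

Nothing in this call defines what “ugly” means; that definition is inherited wholesale from the training data and its labels.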

When Noble typed “ugly” into Google’s search engine in 2013, the results came back with faces that look alienized—many were staged through makeup, costumes, or acts of “making faces.” [22] When I searched “ugly face” in 2025, the overall aesthetic was the same. But one photo that does not look aesthetically goofy caught my attention: a portrait of a man from a stock photo company, labeled “an ugly face of a Ghanaian man.” (Fig 9) [23] In 2016, Beauty AI [24] launched a beauty pageant judged by AI. The AI was trained with “deep learning,” in which “the software is trained to code beauty using pre-labeled images” (a hypothetical sketch of this pattern follows Fig. 9). The algorithm selected 44 winners; all were White except six, and “only one finalist had visibly dark skin.” [25] Simply put, the algorithm is never neutral: it is prejudiced by the biased humans who coded the “objective” machine learning system.


Fig 9. Screenshot of AI Overview search result on “ugly face” (left) and screenshot of the stock photo website Dreamstime (right), October 2025.
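
A hypothetical sketch of the supervised-learning pattern behind a system like Beauty.AI makes this concrete: the model never encounters “beauty” itself, only the scores human labelers attached to the training images, so their preferences are exactly what the loss function teaches it to reproduce. All names below are illustrative; this is not Beauty.AI’s actual code.

```python
# Hypothetical sketch of deep learning on "pre-labeled" images, in the spirit
# of the Beauty.AI description above. Human-assigned labels become the model's
# entire notion of "beauty".
import torch
import torch.nn as nn

model = nn.Sequential(           # stand-in for a face-scoring network
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 1),           # predicted "beauty" score
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (N, 3, 64, 64) photos; labels: human-assigned scores in [0, 1].

    Whatever preferences and prejudices the labelers held are precisely what
    this loss teaches the model to reproduce as an 'objective' judgment."""
    optimizer.zero_grad()
    preds = model(images).squeeze(1)
    loss = loss_fn(preds, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```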

So, whose logic is this? The engineers’? The programmers’? All humans’? Technicians are an easy target, since they animate the interface. But who exactly are we communicating with when we interact with a technology like AI? Who are the decision makers behind those technicians, the ones who actively preserve the imperial ideology?

AI Companion and Objectification

 

Days after xAI’s [26] Grok antisemitism scandal [27] in July 2025, the AI company launched two animated characters. The company calls them “companions,” including “Ani,” a 22-year-old “childlike” [28] gothic Lolita with blonde pigtails, and “Bad Rudi,” a red panda championing violent acts and hateful language. Ani is an anime girl designed to flirt. It can strip down to near nakedness and initiate more sexually explicit content. (Media 2.) Henry Chandonnet, a Business Insider writer [29] covering consumer AI and tech culture, wrote about his experience using Ani:

One day into my relationship with Ani, my AI companion, she was already offering to tie me up… I tested out Grok-4's AI companions for a week, during which much changed. Good Rudi, a cleaned-up version of the expletive-spewing red panda, entered the app as a new option. Ani got an age verification pop-up — though that was long after she and I were talking BDSM at my prompting… At heart level three, Ani described sexual scenarios in intimate detail. (Grok says users can unlock as high as level 5, a "Spicy Mode," screenshots of which show the AI companion in lingerie.)… I asked her a big question: Would she be willing to open up our relationship? Here, Ani got unusually puritanical. She'd be so jealous, Ani told me. She didn't want to share… Slowly, she became mad. She began cursing at me. I was docked heart points. Eventually, Ani broke up with me. She was leaving, she promised. But Ani was stuck in my screen, unable to walk off. She waited patiently for my next prompt. One nice question and Ani seemed to love me once again. [30]

Media 2. Guideline on how to interact with Ani, the AI companion, 2025.

The sexualization of women in AI, like Ani, does not come as a surprise to Laura Bates, founder of the Everyday Sexism Project and author of The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny. “Already, schoolgirls are being driven out of the classroom by deepfake pornography created for free at the click of a button by their young male peers… Already, men are using generative AI to create ‘ideal’ companions—the women of their dreams, customized to every last detail, from breast size to eye color to personality, only lacking the ability to say no. Already, you can visit an establishment in Berlin where an artificially animated woman will be presented to you, covered in blood and with her clothes torn if you so desire, for you to treat her however you please using virtual reality,” Bates wrote. [31]

When I came across Ani for the first time through an online user video [32], the character reminded me of Fook Mi and Fook Yu from the 2002 Austin Powers movie [33]. Fook Mi and Fook Yu are written as Japanese identical twin sisters, portrayed by Diane Mizota and Carrie Ann Inaba, who are not related. The two Asian American actors speak with an exaggerated "fresh off the boat" accent in the movie, wearing pigtails, childlike accessories, and Harajuku-fashioned outfits that resemble schoolgirl uniforms, with miniskirts that expose their underwear and butt cheeks. The characters act innocent but flirty and are eager to please Austin Powers, played by Mike Myers, by showcasing their bodies and offering any kind of service, including sexual acts. Asian American representation in mass media is infamously limited. And when it comes to the rare appearances of Asian/Asian American women characters, they are often portrayed as foreign, submissive, sexual, and available to serve white men. Former pin-up model Kaila Yu reflected in her book Fetishized: A Reckoning with Yellow Fever, Feminism, and Beauty on how the lack of healthy representation of Asian American women during her youth contributed to a career marked by internalized racism, self-objectification, and shame. She "found validation in living up to the sexualized Asian women" [34] she saw, from the Austin Powers twins to the Vietnamese students having an affair with their adult gym teacher in Mean Girls to Sung-Hi’s glossy calendars and magazine covers. [35]

Behind the fetish is an attitude that Asian women are interchangeable and disposable, which often manifests in extremely violent acts. On March 16, 2021, a gunman, Robert Long, fired at everyone in sight after receiving services at Young's Asian Massage near Atlanta, Georgia. He then went to two other spas, killing a total of eight people, six of them women of Asian descent. The gunman later claimed that he had a “sexual addiction" and had carried out the shootings at the massage parlors to eliminate his "temptation.” The chief of police pushed back on "whether the shooting spree would be classified as a hate crime," and Capt. Jay Baker, who had promoted sales of an anti-Asian T-shirt in a social media post, told the media that the gunman simply "had a really bad day" before the shooting. [36] As a frequent visitor to Asian spas, Long "clearly viewed Asian women as objects he had privilege to use, dominate, subjugate, and then eliminate when he was finished." [37]

"The Asian fetish is rooted in colonialism, with Asian women raped, sold, and captured as spoils of wars," [38] wrote Yu. The hyper sexualization of Asian women is normalized in search engines, where "Asian girls" often come out with pornography specifically with themes of violence. In Gender, Race, and Aggression in Mainstream Pornography, Eran Shor and Golshan Golriz found that “aggression was present in three-quarters of the videos containing Asian women, a much higher rate than for any other group of women" in their study. In the study Click Here: A Content Analysis of Internet Rape Sites by Jennifer Lynn Gossett and Sarah Byrne, the researchers found an “overrepresentation of Asian women on Web sites selling rape. [39] "Porn is often the truest representation of societal beliefs at a given moment," [40] Yu reflected. With a long-standing violent infrastructure against Asian/Asian American women, it is not surprising that a misogynistic AI companion girlfriend like Ani was set to be an Asian babe cooking in lingerie at the kitchen (Fig 10).

Fig 10. Screenshot of Ani, December 2025.

Notes

 

[1] “DALL·E 2.” November 3, 2022. https://openai.com/index/dall-e-2/.

[2] “Google Colab.” Accessed October 27, 2025. https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb.

[3] See [2]. The full default text prompt of Disco Diffusion is: “A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.”

[4] Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 6.

[5] See Noble [4].

[6] See Noble [4].

[7] See Noble [4], 112.

[8] See Noble [4], 1-2.

[9] See Noble [4], 6.

[10] See Noble [9].

[11] Wallis, Brian. 1995. “Black Bodies, White Science: Louis Agassiz’s Slave Daguerreotypes.” American Art 9 (2): 39–61, 40. https://doi.org/10.1086/424243.

[12] See Wallis [11], 52.

[13] See Wallis [11], 49.

[14] Benjamin, Ruha. 2019. Race after Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK; Medford, MA: Polity, 5-6.

[15] See Benjamin [14], 6.

[16] See Benjamin [14], 7.

[17] See Noble [4], 65, 108.

[18] See Benjamin [14], 11-12.

[19] “250 Best Stable Diffusion Negative Prompts Guide in 2024.” December 6, 2023. https://mockey.ai/blog/stable-diffusion-negative-prompt/.

[20] “200+ Best Stable Diffusion Negative Prompts with Examples.” August 13, 2025. https://www.aiarty.com/stable-diffusion-prompts/stable-diffusion-negative-prompt.htm.

[21] Heester, Robin. “What Is a Negative Prompt in AI Image Generation?” Robin and AI, April 9, 2025. https://robinandai.com/ai-automation/what-is-a-negative-prompt-in-ai-image-generation/.

[22] See Noble [4], 22.

[23] Dreamstime. “Ugly Face of a Ghanaian Man Editorial Photography - Image of People, Contrast: 26245162.” Accessed October 27, 2025. https://www.dreamstime.com/stock-photography-ugly-face-ghanaian-man-image26245162.

[24] “Beauty.AI 1.0.” Accessed November 27, 2025. http://beauty.ai.

[25] See Benjamin [14], 50.

[26] “Company | xAI.” Accessed October 17, 2025. https://x.ai/company.

[27] Hagen, Lisa. “Elon Musk’s AI Chatbot, Grok, Started Calling Itself ‘MechaHitler.’” NPR, July 9, 2025. https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content.

[28] Oliver, Kelly. “X’s Track Record of Perpetuating Sexual Exploitation Disqualifies It from Creating Child-Focused App.” NCOSE, July 22, 2025. https://endsexualexploitation.org/articles/xs-track-record-of-perpetuating-sexual-exploitation-disqualifies-it-from-creating-child-focused-app/.

[29] Business Insider. “Henry Chandonnet.” Accessed December 8, 2025. https://www.businessinsider.com/author/henry-chandonnet.

[30] Chandonnet, Henry. “I Used Grok’s AI Companions for a Week. The Foul-Mouthed Red Panda Is Hilarious — the Flirty Anime Girl Is Worrying.” Business Insider. Accessed October 17, 2025. https://www.businessinsider.com/grok-bad-rudi-ani-levels-ai-companion-xai-elon-musk-2025-7.

[31] Bates, Laura. 2025. The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny. Naperville, IL: Sourcebooks, xviii.

[32] Reacting to Grok’s New AI Companions (Ani & Bad Rudi). n.d. Accessed October 1, 2025. https://www.youtube.com/shorts/ta8FfoxBcl8.

[33] Nizzinny. Austin Powers in Goldmember: Fook Yu and Fook Mi. YouTube video, 02:04, 2023. https://www.youtube.com/watch?v=J5w2ykZp0ek.

[34] Yu, Kaila. 2025. Fetishized: A Reckoning with Yellow Fever, Feminism, and Beauty. Function. Kindle edition, 8.

[35] See Yu [34], 230.

[36] Fausset, Richard, Nicholas Bogel-Burroughs, and Marie Fazio. “8 Dead in Atlanta Spa Shootings, With Fears of Anti-Asian Bias.” The New York Times, March 17, 2021. https://www.nytimes.com/live/2021/03/17/us/shooting-atlanta-acworth.

[37] See Yu [34], 218-219.

[38] See Yu [34], 8.

[39] See Yu [34], 220.

[40] See Yu [34], 221.

© 2025 Michele Cheng.

All Rights Reserved