Race and gender are part of it, but there’s more to those unconvincing pictures of the presidential candidate. Recent AI-generated images of Kamala Harris have sparked conversations about the technology’s limits in depicting the vice president accurately. Many of the images have drawn comparisons to other women entirely, including Eva Longoria and Michelle Obama.
Experts point to the scarcity of well-labeled pictures of Kamala Harris as a major obstacle to generating accurate images of her. Unlike Trump, who has been photographed exhaustively for decades, Harris has far fewer pictures available for AI models to use as reference. According to Joaquin Cuenca Abela, CEO of Freepik, it takes time for AI image makers to “catch up” with newly prominent figures like Harris.
This was evident when we used Grok to create a photo of Harris and Trump putting their differences aside to read a copy of WIRED. The results repeatedly depicted the ex-president accurately while getting Harris wrong: the vice president appeared with varying facial features, hairstyles, and skin tones, and on a few occasions she looked more like Michelle Obama.
While AI image generation has made significant strides in recent years, it still struggles with certain individuals, and Harris is a case in point.
The root cause lies in how these generators work. They rely on diffusion models, which learn to turn text prompts into images, and the quality of their output depends heavily on the volume of well-labeled pictures fed into them during training.
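To make that concrete, here is a minimal sketch of text-to-image generation with a diffusion model, using the open source diffusers library from Hugging Face. The library, checkpoint, and prompt are illustrative choices on our part; commercial tools like Grok run their own proprietary pipelines, but the basic mechanics are the same.

```python
# A minimal sketch of text-to-image generation with a diffusion model.
# Assumes the open source Hugging Face "diffusers" library and a GPU;
# the checkpoint and prompt are illustrative, not what Grok uses.
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained text-to-image pipeline. The model's ability to
# render a given face depends on how many well-labeled photos of that
# person appeared in its training data.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Denoise random latents into an image guided by the text prompt.
result = pipe("two politicians reading a copy of WIRED magazine")
result.images[0].save("output.png")
```

Note that under-represented subjects fail quietly here: the pipeline returns an image either way, drawing on whatever few examples it has, which is consistent with why a generator might render Trump reliably while Harris’s features drift from one image to the next.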
Despite being a prominent figure, Harris hasn’t been photographed as widely as Donald Trump, and that scarcity shows up directly in the generators’ output: when these tools lack diverse, well-labeled images of a subject, they struggle to depict that person accurately. As more labeled pictures of Harris circulate and models are retrained, the technology should close the gap; until then, her AI likeness will remain hit or miss.