This AI tool fixes ‘Asian bias’ in AI-generated images
By Ryan General
A new tool aimed at addressing a bias toward Asian features in AI-generated images is gaining traction among members of the AI art community.
The “Style Asian Less” model, designed to remove the Asian influence from AI-generated images, has been downloaded nearly 3,000 times on the AI art community site Civitai.
Its creator, known as Zovya, said they developed the AI model after noticing that most AI-generated images were heavily influenced by Asian features and culture.
To generate an image that looks less Asian in a tool such as Stable Diffusion, a user simply applies the model alongside their usual prompt.
Zovya clarified that the model counteracts the influence of the Asian imagery used to train the AI, so that generated images can portray other races and cultures more effectively.
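For readers curious what “applying” such a model looks like in practice, below is a minimal sketch using the open-source diffusers library. The file name and trigger token are hypothetical placeholders rather than the actual values from Zovya’s release, and usage details vary between embeddings.

```python
# Minimal sketch: loading a Civitai textual-inversion embedding in diffusers.
# The file name and trigger token below are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the downloaded embedding file and bind it to a trigger token.
pipe.load_textual_inversion("style-asian-less.pt", token="<style-asian-less>")

# Including the trigger token in the prompt applies the embedding.
image = pipe(
    prompt="portrait photo of a person, <style-asian-less>",
    negative_prompt="lowres, blurry",
).images[0]
image.save("output.png")
```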
Zovya said the idea came about while they were building a separate AI model, “South of the Border,” intended to generate images of South American people and culture.
Even with that model, Zovya found the generated images still included Asian features and references. According to the programmer, this tendency toward Asian people or an Asian style shows up in AI-generated images in general.
Zovya notes in the model’s description that such bias may be attributed to most popular AI models on Civitai being trained on Asian imagery.
“Most of the recent, good, training has been using anime and models trained on Asian people and their culture. Nothing wrong with that, it’s great that the community and fine-tuning continue to grow. But those models are mixed in with almost everything now, sometimes it might be difficult to get results that don’t have Asian or anime influence. This embedding aims to assist with that,” the description reads.
Recent studies have shown that AI-generated images often reflect the biases present in the datasets that the models were trained on.
For instance, a study conducted by researchers at Boston University and IBM found that commercially available facial analysis algorithms from major tech companies showed higher error rates for women and darker-skinned individuals than for men and lighter-skinned individuals.
Another study, by researchers at the University of Cambridge, found that an AI model trained on a dataset of online news articles was more likely to associate the word “man” with “computer programmer” than with “nurse.”
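As an illustration of what “associating” words means here: such associations are typically measured as cosine similarity between learned word vectors. The sketch below uses publicly available GloVe vectors via gensim, not the dataset from the study, and approximates the phrase “computer programmer” with the single token “programmer.”

```python
# Hedged illustration: word-association bias measured as cosine similarity
# between word vectors. Uses public GloVe vectors, not the study's model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads on first use

# "computer programmer" is approximated by the single token "programmer".
print(vectors.similarity("man", "programmer"))
print(vectors.similarity("man", "nurse"))
```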
Biased AI models can have serious real-world implications as they perpetuate harmful stereotypes and reinforce existing power imbalances. Inaccurate facial recognition technology, for example, could lead to false arrests and wrongful convictions.
Mitigating bias in AI models remains a challenge, and researchers are developing methods to address it. One family of techniques “debiases” datasets by removing biased examples or rebalancing them, as in the sketch below.
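A minimal sketch of that rebalancing idea, assuming each example is a dict carrying a group label (the field name “group” is a hypothetical placeholder):

```python
# Illustrative sketch of dataset rebalancing: downsample every group
# to the size of the smallest one so no group dominates training.
import random
from collections import defaultdict

def rebalance(examples, key="group", seed=0):
    """Downsample each group to the size of the smallest group."""
    groups = defaultdict(list)
    for ex in examples:
        groups[ex[key]].append(ex)
    target = min(len(g) for g in groups.values())
    rng = random.Random(seed)
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, target))
    rng.shuffle(balanced)
    return balanced
```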
Others are developing methods for “adversarial training,” in which the main model learns alongside an adversary that tries to detect bias in the model’s internal representations; the model is trained to defeat the adversary and thereby shed the bias.
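A toy PyTorch sketch of that setup using the common gradient-reversal trick; the architecture and data are illustrative placeholders, not any production system:

```python
# Toy sketch of adversarial debiasing: an adversary predicts a protected
# attribute from the encoder's representation, and gradient reversal
# trains the encoder to hide that attribute while still solving its task.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
task_head = nn.Linear(8, 2)   # the model's main prediction task
adversary = nn.Linear(8, 2)   # tries to predict the protected attribute

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters())
    + list(adversary.parameters()),
    lr=1e-3,
)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: random features, task labels, protected-attribute labels.
x = torch.randn(32, 16)
y_task = torch.randint(0, 2, (32,))
y_prot = torch.randint(0, 2, (32,))

for _ in range(100):
    z = encoder(x)
    # The task loss pushes z to be predictive of the task label...
    loss = loss_fn(task_head(z), y_task)
    # ...while the reversed gradient pushes z to hide the protected
    # attribute, even as the adversary itself keeps improving.
    loss = loss + loss_fn(adversary(GradReverse.apply(z)), y_prot)
    opt.zero_grad()
    loss.backward()
    opt.step()
```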
Users on Civitai have commented positively on Zovya’s model, with some noting the importance of addressing the issue of Asian bias in AI images.
“The most important embedding since noise offsets!” wrote a user. “Makes it possible to use anime/illustration models for photorealistic embeddings. Thanks!”
“Incredible as always! you make the best stuff, I have noticed that most models [are] very Asian leaning. Going to test it right away,” another commenter wrote.
“It is great that someone is seeing and realizing this over biasing into the wrong direction,” said another.