
Generative artificial intelligence is flawed. Anyone interested in AI knows that generative AI tools are imperfect. AI outputs reflect the prejudices of the humans whose data trained them, and this prejudice can take many forms. In 2023, Bloomberg published a study demonstrating racial and gender bias in text-to-image models (linked here). When asked to produce pictures of people in different professions, the text-to-image software reproduced stereotypes; in many cases, the program rarely depicted women in high-paying jobs. This is just one example of the many prejudices and stereotypes reproduced by generative AI. So I knew that generative AI gets things wrong. What I didn't know was where and when the mistakes would appear.
I use DALL-E (integrated with ChatGPT) to generate fantasy images. I mostly use it for computer wallpapers or funny images to share with my friends; it's a tool for fun. Recently, I was researching a Greek goddess for my fiction-based Instagram account. The character was a figure of authority and a powerful warrior, so I wanted to post an image of a fantasy fighter who exuded authority and power. I browsed through human-made artwork of the goddess, but I couldn't find anything that matched what I imagined. I decided to try DALL-E. I pasted my Instagram post into ChatGPT and told the program to "generate an image influenced by the details of the goddess's post. The image should be powerful." This was the result.
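(A brief aside for the technically curious: the same kind of request can be made programmatically. Below is a minimal sketch using OpenAI's Python SDK. I used the ChatGPT interface rather than the API, so the model name, parameters, and prompt text here are illustrative assumptions, not a record of what I actually ran.)

```python
# Minimal sketch of an image-generation request via OpenAI's Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-3",  # assumed model; I used the ChatGPT interface
    prompt=(
        "A fantasy image of an ancient Greek warrior goddess, "
        "influenced by the details of this post: <post text here>. "
        "The image should be powerful."
    ),
    size="1024x1024",
    n=1,  # dall-e-3 generates one image per request
)

# URL of the generated image
print(response.data[0].url)
```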
The accompanying text claimed the image "captured her essence as a symbol of strength and raw energy, standing atop a rugged mountain peak with a powerful presence." It captured energy and power, yes, but it was not what I wanted. The armor and weapons were completely wrong, and I didn't like the background. I told ChatGPT, "Generate another image without the trident and with a different background. Additionally, this character should be dressed in historically accurate ancient Greek armor."
One step forward, two steps back. Although I am no expert in military history, the armor and background were significantly better and felt more accurate. However, I was not satisfied with the proportions: the chest was a little too prominent, the waist a little too small. I also still wanted slightly different armor. I asked for an image of a "less sexualized nature" with "different but still historically accurate armor" (wording that was flagged as a potential violation of the usage policies).
I liked this picture better. I attached the previous image and wrote, "Generate a similar image but in full-body armor."
Almost there! I wanted a stronger stance. Unsure how to prompt for it, I simply typed, "make it more powerful."
Ah. This was… not what I was looking for.
In the end, I generated a number of images that came close to my vision of a "powerful stance," "traditional armor," and "weaponry." Still, none of them was quite what I wanted. Part of my dissatisfaction stemmed from the fact that I had only a general idea of what I imagined and was inexperienced at writing detailed prompts. My previous DALL-E prompts had been simpler, something along the lines of "a legendary wolf in a jungle landscape." If I had packed more detail into my text prompts, I probably would have had fewer problems.

This experience also says something about a potential bias within generative art, because DALL-E defaults to a particular depiction of women. Unless explicitly told otherwise, the images it made for this character were scantily dressed, anatomically improbable, or both. To avoid any misunderstanding, let me make clear that I do not think this kind of imagery is inherently bad or wrong. I did, however, find my experience deeply frustrating. Female characters do not always need to be sexualized. Yet a large share of fan art and fantasy art depicts sexualized images of women, and generative AI is clearly reproducing that particular representation. DALL-E does not always produce sexualized images of women; in other attempts at creating fantasy character images (without any prompts about modesty, etc.), DALL-E produced pictures like these:
However, there is no getting around the fact that "an icon of a powerful goddess" (yes, that was the whole prompt) resulted in:
As generative AI art continues to flood the internet, there is a growing risk that images like these will be recycled into future training datasets, creating a self-reinforcing feedback loop. It is worth considering the implications of the stereotypes embedded in these images of fantasy women. The lack of diversity in body types, dress, and representation is not just an aesthetic limitation; it reflects deeper systemic bias. Generative AI's prejudice does not always announce itself out loud. Sometimes it hides quietly, even when you are just generating a quick Instagram post.