Opinion: Interviewing ChatGPT Didn’t Mitigate My Concerns Over Bias in AI

March 13, 2023  by AJ Martin (‘24)

Much ado has been made in the media about Artificial Intelligence (AI) over the years, from fictional foes such as The Terminator and HAL 9000 to actual AI like IBM’s Watson or Hanson Robotics’s Sophia. From debates over which jobs might become automated to the possibility of a robot uprising, nothing quite reflects humanity as well as a shiny metal endoskeleton housing some computer parts. However, one AI in particular is now at the forefront of media attention: ChatGPT. This bot made by OpenAI has become a sensation in and of itself. A lot of discussion has been had about it, but today, why not stop talking past the AI and start talking to it? Let’s see what ChatGPT has to say about itself. 

“I am ChatGPT, a large language model trained by OpenAI. I use machine learning algorithms to understand and respond to natural language inputs. I have been trained on a vast amount of data and have the ability to generate human-like responses to a wide range of questions and topics. My purpose is to assist users in finding answers to their questions and engaging in conversations,” ChatGPT said in response to the question, “What is ChatGPT?”

ChatGPT understands that it was created to find answers and engage in conversation, but its stated purpose raises a lot of questions. In terms of searching for answers, ChatGPT is unreliable at best for anything beyond the most rudimentary, surface-level questions. When asked about this, ChatGPT did concede that “I am not perfect,” and that “users should always use their own judgment and seek additional information or professional advice when necessary.”

This all sounds very benign, even benevolent. If ChatGPT is just here for stimulating conversation and finding information that it acknowledges might not be perfect, where is the harm? Well, why not ask the expert? When asked about the issue, ChatGPT identified six problems with AI: bias, user privacy, user safety, assignment of accountability, job displacement and regulation of AI systems. When pressed on whether it had addressed any of these concerns, it went through each and every point it had listed and gave answers that were satisfactory. Satisfactory doesn’t satisfy me, however, so I decided to press further on how it addresses bias.

Bias has been one of the biggest concerns surrounding AI because of the “garbage in, garbage out” rule, which holds that if an AI is trained on poor data, it can only give poor responses. This has been seen before on Twitter, where its less-than-salubrious user base has inspired AI on the site to become racist, homophobic and transphobic. As AI becomes more prevalent, this recycling of garbage will become an even bigger issue, because it may, for instance, influence self-driving cars, and it has already influenced job-application bots. Take the case of Amazon back in 2017, where it was determined that applications containing the word “women’s” were penalized, and that the system filtered out people who attended women-only colleges. In effect, the robot was a misogynist. This is why it’s so important for AI to get it right, and since ChatGPT is the interviewee, let’s ask the bot about it.
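
To make the maxim concrete, here is a minimal, purely hypothetical sketch in Python. The résumé words and hiring history are invented, and this is not Amazon’s actual system; it only illustrates how a model “trained” on biased decisions learns to penalize a word like “women’s.”

```python
# Hypothetical toy illustration of "garbage in, garbage out":
# a resume scorer learned from invented, biased historical hiring decisions.
# This is NOT any real company's system; the data and logic are made up.

from collections import defaultdict

# Invented history: (words on the resume, was the candidate hired?)
history = [
    ({"software", "engineer", "chess", "club"}, True),
    ({"software", "engineer", "women's", "chess", "club"}, False),
    ({"developer", "python", "hackathon"}, True),
    ({"developer", "python", "women's", "coding", "society"}, False),
]

# "Training": each word's score is simply the hire rate of resumes containing it.
seen = defaultdict(int)
hired = defaultdict(int)
for words, was_hired in history:
    for w in words:
        seen[w] += 1
        hired[w] += was_hired

def score(resume_words):
    """Average the per-word hire rates the model learned from history."""
    rates = [hired[w] / seen[w] for w in resume_words if w in seen]
    return sum(rates) / len(rates) if rates else 0.5

# Because the historical decisions were biased, the learned model is too:
print(score({"software", "engineer", "chess", "club"}))             # 0.5
print(score({"software", "engineer", "women's", "chess", "club"}))  # 0.4
```

The model was never told to discriminate; it simply absorbed the pattern in its garbage input, which is exactly the worry with systems trained on far larger and messier data.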

When asked about who chooses the data for the AI to be trained on, the AI said it had several people from various fields, backgrounds, and identities on its team. This is something that can be fact-checked using LinkedIn, and it seems what the AI says is true: OpenAI does have a spread of people working on it. 

However, the small number of people on OpenAI’s team who studied the humanities is worrying. You see, any good robot is going to need a lot of people in the humanities to know how to be most helpful to society. If you had none of them, you might end up with one of Twitter’s bots or Amazon’s misogynist bot. In this case, degrees in psychology, philosophy, and gender studies are all vitally important to avoid bias. Out of 621 employees, 16 had “studied philosophy,” 9 had “studied psychology” and 0 had “studied gender studies.” LinkedIn isn’t specific on how much any of these people studied their field, so these could be anything from minors to Ph.D.-level studies, but that does not excuse the fact that only 25 of 621 employees, about 4%, specialized in fields that specifically address bias. 

While ChatGPT hasn’t shown any bias in my brief interactions with it, there have been allegations by certain right-aligned news sources that ChatGPT is left-leaning, in response to which OpenAI released its guidelines for answering controversial topics. While its “dos” are understandable, its “don’ts” may leave room for less than ideal results. It refuses to align itself with a political party, and, more importantly, refuses to label a group of people as right or wrong. That latter point may sometimes fall into the fallacy of moderation, meaning that some groups that demand condemnation, of which there are many, simply will not receive it. This also throws into jeopardy the claim that ChatGPT is trying to be factually accurate, because certain groups of people must be condemned to properly learn about them.

This problem of bias is a human one, but since humans build the machines, it becomes a machine problem. Being aware of bias in AI, just like in life, is one of the best ways to not fall victim to misinformation. Bias, and its corresponding maxim of “garbage in, garbage out,” isn’t, as has been discussed, the only problem plaguing AI. The black-box problem, which refers to the fact that we don’t actually know what processes AI systems use to generate responses, has been another major concern. It is necessary to understand and mitigate these problems in case patches need to be made to the system. One thing is for certain, however: AI, in spite of its flaws, is here to stay. Let’s just make sure it’s a good guest.
