Nearly two months after its launch, AI chatbots like ChatGPT have made one thing very clear: they are a force to be reckoned with. However, people seldom realise the human cost behind an innovation as disruptive as ChatGPT. A recent report has revealed that OpenAI trained its AI model using outsourced, exploited and underpaid Kenyan workers.
Evidently, the chatbot was built with the help of a Kenyan data labelling team who were paid less than $2 an hour, an investigation by TIME has revealed. What is problematic, however, is that the employees were subjected to the worst that the internet – including the dark web – had to offer.
This meant that the workers had to read some of the darkest and most disturbing material on the internet, including texts describing seriously graphic content such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. At times, they had to watch videos related to these subjects as well, the investigation found.
The workers reportedly went through hundreds of such entries every day, for wages that ranged from $1 to $2 an hour, or a maximum of $170 a month.
The Kenyan team was managed by Sama, a San Francisco-based firm, which said its workers could take advantage of both individual and group therapy sessions with “professionally-trained and licensed mental health therapists”.
One of the workers responsible for reading such texts and cleaning up ChatGPT’s resource pool told TIME that he suffered from recurring visions after reading a graphic description of a man having sex with a dog. “That was torture,” he said.
Sama reportedly ended its contractual work with OpenAI much earlier than it had planned, mainly because employees complained about the kind of content they had to read and went on to develop serious mental health issues.
The kicker in all of this is that, according to a whistleblower, the people funding OpenAI knew about it. This means certain top-management figures at Microsoft and other backers of OpenAI, including Elon Musk, were aware of the situation.
“There will be scary moments as we move towards AGI-level systems, and significant disruptions, but the upsides can be so amazing that it’s well worth overcoming the great challenges to get there,” OpenAI chief Sam Altman wrote in a Twitter thread.
“There are going to be significant problems with the use of OpenAI tech over time; we will do our best but will not successfully anticipate every issue,” he said.
Companies like Google, on the other hand, have worked closely with AI models and related technology. However, they have warned that releasing such AI technology for widespread use may pose risks due to inbuilt biases and misinformation. They have also raised ethical concerns around the use of AI.