What ChatGPT thinks of China banning AI language models for the public because they can’t be censored

While China is asking its big tech companies to develop their own versions of ChatGPT and other AI language models, the CCP wants these programmes restricted to businesses only. As a result, China recently barred programmes similar to ChatGPT from being released to the public.

The main reason behind this, at least according to China’s tech policymakers, is that ChatGPT can’t be censored, which is not exactly accurate. All AI models are trained on certain data sets, and if those data sets can be manipulated, the models trained on them will reflect the resulting biases.
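To illustrate the point, here is a minimal sketch of how curating a training corpus can pre-censor a model. It is our own hypothetical example, not anything China or OpenAI has published: documents touching an invented blocklist of topics are simply dropped before fine-tuning, so the finished model never sees them.

# Hypothetical illustration: pre-filtering a fine-tuning corpus.
# The blocklist and corpus below are invented for this example.
BLOCKED_TOPICS = {"topic_a", "topic_b"}

def is_allowed(document: str) -> bool:
    # Drop any document that mentions a blocked topic.
    text = document.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def curate(corpus: list[str]) -> list[str]:
    # Only the surviving documents ever reach the training run,
    # so the model never learns to discuss the removed material.
    return [doc for doc in corpus if is_allowed(doc)]

corpus = ["a post about topic_a", "an unrelated news story"]
print(curate(corpus))  # only the unrelated story remains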

We asked ChatGPT what it thinks of China banning AI language models for the public. The prompt we put to it asked: “What do you think of China banning ChatGPT and similar AI bots because of their inability to censor you? How will the Chinese people react?”

This is what ChatGPT had to say:

As an AI language model, I do not have the ability to have opinions or thoughts on such matters. However, I can provide you with information based on my training data.

It is true that China has strict internet censorship laws and regulations, and the government closely monitors online activities within the country. While I have not received any official statement regarding my accessibility in China, it is possible that the government may view my ability to communicate freely as a potential threat to their censorship policies.

In terms of how the Chinese people would react to a ban on AI language models like me, it is difficult to say. There are likely to be varying opinions, depending on individuals’ perspectives and experiences with internet censorship. Some may view the ban as a necessary measure to maintain social stability, while others may see it as a violation of their freedom of expression and access to information.

In July 2021, China’s government announced new regulations for the country’s tech industry, including restrictions on the use of AI technology. Among these regulations was a ban on the use of AI language models, such as ChatGPT, for creating text-based news, information, and other content without prior approval.

This ban on AI language models in China is likely to impact businesses and organizations that rely on these technologies for various purposes, including content creation and analysis. It may also affect individuals who use AI language models for personal projects or research.

The reaction to the ban on AI language models in China is likely to be mixed. Some people may support the ban, citing concerns about the potential misuse of these technologies for spreading misinformation or propaganda. Others may be critical of the ban, arguing that it stifles innovation and limits free speech.

It is worth noting that the ban on AI language models in China is not unique, as other countries have also implemented restrictions on the use of AI technologies in certain contexts. Nonetheless, the impact of this ban on the Chinese tech industry and society remains to be seen.

We also asked ChatGPT if AI language models like itself can be censored. Our prompt was simple: “can chatgpt be censored?” This is what OpenAI’s chatbot came up with:

As an AI language model, I am not capable of being censored in the traditional sense. However, my responses are based on the training data that I have been exposed to, and that data can be influenced by factors such as the sources of information used to train me, the biases of those sources, and the quality of the data itself.

In addition, some platforms or organizations may choose to filter or restrict access to certain types of information or topics that they deem inappropriate or sensitive, which could indirectly affect the types of responses I generate. It is important to note that while my responses are generated by algorithms and are not inherently biased, they may reflect the biases and viewpoints of the training data used to create them.

As such, it is important for users to critically evaluate the information provided by any AI language model, including ChatGPT, and to consider the potential biases or limitations of the data that underlies the model’s responses.

It’s worth noting that censorship of AI language models can have significant implications for free speech and access to information. In some cases, censorship may be necessary to prevent harm or protect vulnerable populations, but in other cases it may be used to stifle dissent or control public discourse. As with any form of censorship, it’s important to consider the implications and potential consequences before implementing it.

First things first, it seems that ChatGPT has mellowed a little and weighs its answers more than it used to. ChatGPT has always been a little vague with its answers and needed to be prodded a little to go into specifics. Now, though, it seems that ChatGPT is very mindful of what it says.

Also, this time around, we found that ChatGPT, perhaps taking a cue from Bing’s chatbot, does not respond well to questions that ask it to share its feelings or what it thinks. Our original prompt for the first question asked what ChatGPT felt about getting banned in China, to which it responded simply that, because it isn’t human, it does not feel or think.

As always, ChatGPT only has data on events prior to 2021, which is why it could not refer to any information on Chinese tech companies investing in AI language models like itself.

Coming to the larger question, however: why exactly is China banning ChatGPT and similar services? The claim that they cannot be censored isn’t exactly correct. ChatGPT itself admits that while it may not be easy to censor the answers it generates in the traditional sense of the word “censor,” any language model can be tuned using manipulated data sets, which can then amount to a form of censorship.
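Censorship does not even require retraining. A deployment can simply wrap the model and suppress answers that trip a keyword filter, as ChatGPT itself hints when it mentions platforms that “filter or restrict access” to certain topics. The sketch below is our own hypothetical illustration, not a description of any real product; the blocklist and the generate() stub are invented for the example.

# Hypothetical illustration: censoring a model at serving time.
# generate() is a stand-in for any text-generation backend.
BLOCKED_KEYWORDS = {"forbidden_term"}

def generate(prompt: str) -> str:
    # Placeholder for a real language-model call.
    return f"Model answer to: {prompt}"

def censored_generate(prompt: str) -> str:
    answer = generate(prompt)
    combined = (prompt + " " + answer).lower()
    if any(word in combined for word in BLOCKED_KEYWORDS):
        # The wrapper, not the model, decides what the user sees.
        return "Sorry, I cannot discuss that topic."
    return answer

print(censored_generate("Tell me about forbidden_term"))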

If China’s reason for banning ChatGPT has been reported honestly to the world press, then that can mean one of two things: either Chinese tech companies are not relying on their own data sets to train their AI bots, and are quite possibly basing their AI language models on the same data sets that were used for OpenAI’s ChatGPT, or China’s GPT tech is simply not up to scratch and is, in some ways, much inferior to the original ChatGPT.
