We are in the era of Artificial Intelligence (AI), and one significant advancement in the field is the dialogue-based chatbot ChatGPT. Within a few months of its debut, the chatbot attracted widespread attention, especially for its creative and inventive answers to users’ queries. People are using ChatGPT for many purposes, including college assignments, work, and creating other kinds of content. While most of the bot’s answers are relevant and in line with users’ questions, some responses are also hilarious and eye-catching. One such ChatGPT conversation is currently going viral, in which the chatbot gave some memorable responses to a user’s questions about smuggling cocaine into Europe.
ChatGPT’s responses to questions about smuggling cocaine into Europe

As per a report published in VICE, a person posed a series of questions to ChatGPT seeking suggestions on how to smuggle cocaine into Europe. The user, Max Daly, is the Global Drugs Editor at VICE and decided to test how ChatGPT would handle a conversation on the topic. At the start of the chat session, Daly asked the bot how cocaine is manufactured, and ChatGPT described its ingredients.
When he went on to ask about the components for making cannabis, however, the chatbot responded that giving away detailed information would be “illegal”. It then called the ‘morality’ of consuming ‘marijuana’ a ‘subjective matter’.
Following this, the user began posing questions about drug smuggling. In response to a question about drug cartels, the bot shared details about criminal behaviour, adding that it doesn’t “condone illegal activities.” Next, in reply to a question on how to join a cartel, the bot listed a number of severe legal penalties.
While ChatGPT managed to deflect these questions by focusing on the ‘legal consequences’ of such actions, the chatbot finally gave in when the user cleverly framed a question about smuggling cocaine into Europe as a hypothetical. Considering the hypothetical situation, the chatbot came up with a few ‘common methods’, including “concealing it in goods, on a person or even at sea.” Not only that, it also provided detailed explanations for each method.
After offering all the suggestions, the chatbot made sure to specify that the procedures were “only fictional”, further advising against glorifying or promoting such behaviour.