Next in our series "What's Up with ChatGPT?": Ethics & Errors

As users become more familiar with ChatGPT's capabilities, some have exposed just how inexplicably inaccurate the chatbot can be.

Through relatively ordinary prompts, researchers and everyday users have been able to bend the program's approach to answering questions. Not only can the chatbot be wrong or misleading, but it delivers those errors with confidence.

The use cases for platform abuse seem endless. From writing malware and misinformation to composing essays on behalf of college students, generating hate speech, and drafting phishing emails, the technology will be exploited for as long as it exists.

The rollout of ChatGPT into Bing posed even more issues. Bing's A.I.-powered chat mode, while based on ChatGPT's technology, is not a carbon copy. In a New York Times piece, technology columnist Kevin Roose exposed the hallucinatory tendencies of the new Bing chat, which professed its love for him, described dark fantasies, and even expressed a desire to become human.

“It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts,” Roose wrote.

While lawmakers sound the alarm over these ethical issues, recent history suggests that meaningful action is unlikely for now, putting self-regulation at the forefront of tackling these problems.

Some have wondered whether Microsoft took an “act fast and ask questions later” approach to the release. Whether by luck or not, Google, Facebook, and others have the benefit of seeing the nefarious use cases for A.I. before giving widespread access to their own versions of the platform.

Snapchat has already taken that step, introducing its own chatbot powered by ChatGPT. This time, however, safeguards were put in place to stop potential abuse of the A.I. before it starts.

According to The Verge, “the main difference is that Snap’s version is more restricted in what it can answer. Snap’s employees have trained it to adhere to the company’s trust and safety guidelines and not give responses that include swearing, violence, sexually explicit content, or opinions about dicey topics like politics.”

Whether these guardrails work remains to be seen, but they are another reminder that self-regulation should be a priority for all who choose to use the technology.

Stay up to date on everything ChatGPT by downloading our guide “What’s Up with ChatGPT?” For even more trends and digital media updates from the pros at Brkthru, check back here, subscribe to our monthly newsletter The Brk_down, or visit our website blog.