OpenAI’s ChatGPT generating wrong information? Investigation launched

A response from ChatGPT, an AI chatbot developed by OpenAI, on its website is seen in this February 9, 2023 illustration photo. Reuters/File

The US Federal Trade Commission (FTC) is currently investigating OpenAI, the maker of the popular ChatGPT app, due to concerns that the app is generating false information.

The investigation raises questions about the potential harm to consumers and the mishandling of user data by OpenAI’s technology.

In a letter to OpenAI, the FTC asked for information about incidents in which users were unfairly disparaged, as well as details about the company’s efforts to prevent such incidents from recurring. The inquiry comes as regulators increasingly examine the risks associated with artificial intelligence (AI) technology.

FTC Chair Lina Khan expressed her agency’s concerns about ChatGPT’s output during a Congressional committee hearing, saying, “We’ve heard reports where people’s sensitive information appears in response to inquiries from someone else. We’ve heard about defamation, defamatory statements, completely untrue things that are coming out. This is the type of fraud and deception that we’re concerned about.”

OpenAI CEO Sam Altman, who testified before Congress earlier this year, acknowledged that the technology can have flaws. He underlined the need for regulation and for a new agency to oversee AI safety.

The FTC’s investigation focuses not only on potential harm to users but also on OpenAI’s data privacy practices and the methods it uses to train its AI technology. The company’s large language model, GPT-4, forms the foundation of ChatGPT and is licensed to several other companies for their own applications.

While OpenAI has made efforts to increase the security and reliability of ChatGPT, concerns remain about objectionable or inaccurate content generated by AI models.

In April, Italy temporarily banned the use of ChatGPT over privacy concerns, reinstating it only after OpenAI implemented an age verification tool and provided additional information about its privacy policies.

OpenAI and the FTC have yet to comment on the ongoing investigation.

As the use of AI technology, particularly large language models, becomes more prevalent, it has become increasingly important for regulators to address potential risks to consumers. The outcome of the FTC’s investigation will have an impact not only on OpenAI but also on the broader AI industry, as companies race to develop and deploy similar technologies while grappling with issues of accuracy, privacy and user safety.
