Written by Haim Ravia and Dotan Hammer
The U.S. Federal Trade Commission (FTC) sent OpenAI a letter of inquiry demanding that the company explain how it deals with risks arising from artificial intelligence. The FTC is requiring OpenAI to describe all complaints it has received about its products (such as ChatGPT) involving ‘false, misleading, derogatory, or harmful’ statements about people. The Commission is also demanding documentation related to the March data breach incident, in which a system glitch allowed users to access other users’ payment details and chat history.
The FTC has requested that OpenAI provide detailed information on a variety of issues, including the company’s products and marketing methods; its policies and procedures before publicly releasing a new product; a list of cases in which OpenAI suspended a product due to safety risks; and the data used to train OpenAI products that imitate human speech. The FTC is also inquiring about OpenAI’s practices for dealing with the propensity of artificial intelligence systems to “hallucinate” answers, the number of people affected by the March data incident, and the steps the company took to address it. The FTC further asked OpenAI to produce any research, testing, or audits showing how well consumers understand the “accuracy or reliability” of the company’s AI tools.
The Commission requested this information following numerous reports that ChatGPT produced incorrect information that could harm a person’s reputation. For example, Mark Walters, a radio talk show host in Georgia, sued OpenAI for defamation, alleging that ChatGPT falsely portrayed him as a person accused of fraud and embezzlement. In another instance, ChatGPT was accused of relying on a non-existent article and a non-existent class trip to claim that an attorney had made sexually suggestive comments and attempted to sexually harass a student on a trip to Alaska.
Click here to read the FTC’s letter to OpenAI.