Thursday, March 30, 2023

FTC Complaint Targets OpenAI's ChatGPT, Urges Suspension Of New 'Bias Reinforcing' Chatbot Deployment

 Tech ethics organization the Center for Artificial Intelligence and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission on Thursday, asserting that Microsoft-backed startup OpenAI's recently introduced GPT-4 product violates federal consumer protection law. The group urged a halt to OpenAI's commercial deployment of all new generations of artificial intelligence chatbots. 

In the complaint, CAIDP asked the FTC to investigate and suspend further deployment of OpenAI's commercial products until the firm complies with the FTC's guidance for AI products. 

CAIDP stated that GPT-4 is "biased, deceptive, and a risk to privacy and public safety." The complaint effort is led by privacy advocate Marc Rotenberg, who said:

"The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4.

"We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued."

OpenAI launched GPT-4 in mid-March. CAIDP pointed out that the technical report accompanying the AI model describes a dozen major risks, including "Disinformation and influence operations." OpenAI itself warned that "AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths, and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement."

CAIDP said in the complaint that GPT-4 fails to meet the FTC's standard that AI products be "transparent, explainable, fair and empirically sound while fostering accountability," warning that the chatbot poses a potential risk to society. 

The complaint comes days after Elon Musk, Steve Wozniak, AI pioneer Yoshua Bengio and others signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. 

"We've reached the point where these systems are smart enough that they can be used in ways that are dangerous for society," said Bengio, director of the University of Montreal's Montreal Institute for Learning Algorithms, adding, "And we don't yet understand."

Their concerns were laid out in a letter titled "Pause Giant AI Experiments: An Open Letter," which was spearheaded by the Future of Life Institute, a nonprofit advised by Musk.

Musk, an early co-founder and financial backer of OpenAI, and Wozniak have both been outspoken about the dangers of AI for some time. We've outlined some of those dangers, such as political bias. 

And the bias isn't just with ChatGPT:

Tech investor David Sacks recently revealed: "There is mounting evidence OpenAI's safety layer is very biased... If you thought trust and safety were bad under Vijaya or Yoel, wait until the AI does it."
