AI Group Urges FTC to Stop GPT-4 Release
The Center for Artificial Intelligence and Digital Policy (CAIDP), a non-profit AI research group, has filed a complaint with the Federal Trade Commission (FTC) against OpenAI, calling for an investigation into the release of GPT-4 and a halt to its deployment. In its complaint, the group claims that GPT-4 is biased, deceptive, and poses a risk to privacy and public safety.
CAIDP is urging the FTC to conduct an independent evaluation of OpenAI's products before deployment and throughout the GPT AI lifecycle, and to ensure compliance with FTC AI guidance before any future releases.
Despite OpenAI's claim that external experts assessed the potential risks posed by GPT-4, concerns about the rapid progress of AI have been raised by various groups. The Future of Life Institute published an open letter, signed by many professors and notable tech-industry figures such as Elon Musk and Steve Wozniak, urging AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months.
CAIDP’s complaint cites concerns about the lack of transparency around GPT-4’s “architecture, model size, hardware, computing resources, training techniques, dataset construction, or training methods,” leading to worries about potential bias, discrimination, and commercial surveillance. OpenAI has previously acknowledged GPT-4’s potential to reinforce and reproduce particular biases, including harmful stereotypical and demeaning associations for some marginalized groups.
CAIDP argues that OpenAI’s product does not meet the FTC’s requirements of being “transparent, explainable, fair, and empirically sound while fostering accountability.”
The complaint also raises concerns about the safety of children using GPT-4, noting that OpenAI provided little detail about its safety checks during testing and made no mention of any measures put in place to protect minors.
Additionally, there are security and privacy worries, with CAIDP citing a Europol warning that GPT-4 could be used to draft highly realistic text for phishing purposes or to produce text for propaganda and disinformation.
CAIDP also notes that GPT-4’s ability to generate text responses from photo inputs has significant implications for personal privacy, as it could allow users to link a picture of a person to their personal information. The group argues that this capability could also be used to make recommendations and assessments about the person.
More generally, CAIDP is pushing for regulations that establish minimum standards for products in the AI market in an effort to mitigate the potential risks of ever-improving AI systems.