
Alexandria Rep. Don Beyer (D-8th) is doubling down on a push for new transparency standards after a controversy surrounding OpenAI and actress Scarlett Johansson.
OpenAI claimed its new ChatGPT voice assistant, which sounded eerily similar to Johansson, wasn’t based on the actress, despite Johansson saying the company had previously tried to hire her to voice the chatbot. Adding to evidence that it was, in fact, modeled on Johansson’s voice, OpenAI CEO Sam Altman posted a one-word reference to the 2013 movie Her, which notably features Johansson voicing an AI.
Beyer said legislation he’s been advocating for would create transparency standards showing how the AI models are trained.
Beyer is the sponsor of the AI Foundation Model Transparency Act, which a release from Beyer’s office said “would prompt the establishment of transparency standards for information that high-impact foundation models must provide to the FTC and to the public, including how those AI models are trained and information about the source of data used.”
“Anyone who believes their voice is used without their permission would ask the same questions Scarlett Johansson is asking now,” Beyer said in a release. “The AI Foundation Model Transparency Act would ensure that those questions are answered.”
Beyer said that while Johansson’s case is high profile, it reflects a broader trend in technology. Last month, a high school athletic director in Baltimore was arrested for allegedly using AI software to manufacture a racist and antisemitic audio clip impersonating the school’s principal.
“Scarlett Johansson’s is not the first case of this kind and will not be the last, but it is a high-profile example of the growing need for transparency in AI models,” Beyer said. “Congress can help solve this problem by requiring creators of AI foundation models to share key information with regulators and the public, which is exactly what my bill would do.”
According to the release, the act would:
- Direct the FTC, in consultation with NIST, the Copyright Office, and OSTP, to set transparency standards for foundation model deployers by requiring them to make certain information publicly available to consumers;
- Direct companies to provide consumers and the FTC with information on the model’s training data, model training mechanisms, and whether user data is collected in inference; and
- Protect small deployers and researchers, while seeking responsible transparency practices from our highest-impact foundation models.