Three tech giants have signed agreements with the U.S. Department of Commerce for their new artificial intelligence models to be reviewed by the Center for AI Standards and Innovation (CAISI) before public release. This move comes as the Trump administration considers tightening oversight of the sector, according to recent reports.
Pre-release evaluation as a new filter in AI development 🤖
CAISI will act as a technical filter, analyzing aspects such as bias, safety, and performance of each model. This process aims to detect flaws before the software reaches users. Companies will collaborate by sharing training data and test results. The measure seeks to establish a minimum standard, although details on the approval criteria have not yet been made public.
The government wants to see AI before it sees them 🏛️
Now it turns out that the same people who can't get their tax website to work properly are going to review whether an AI is safe. One imagines a bureaucrat asking ChatGPT: "Are you a risk to national security?" And the AI replying: "Depends, do you have coffee?" Good thing they signed an agreement, because otherwise the AI might have launched on its own and stolen their jobs.