The EU announced that more than a hundred companies have become the first voluntary signatories of its new AI Pact, which calls on AI companies to commit to at least three core actions:
- Adopting an AI governance strategy to foster the uptake of AI in the organization and work towards future compliance with the AI Act.
- Mapping AI systems likely to be categorized as high-risk under the AI Act.
- Promoting AI literacy and awareness among staff, ensuring ethical and responsible AI development.
In addition to these core commitments, more than half of the signatories took on additional pledges, including ensuring human oversight, mitigating risks, and transparently labelling certain types of AI-generated content, such as deepfakes.
Okay, so what?
The thing that always gets me about these corporate and government pacts and pledges is that they mean absolutely nothing. They're glorified pinky promises designed to tickle bureaucracy and feign progress.
Does anyone really think these companies go back to their AI laboratories, pin these pledges to their bulletin boards, and change anything about their current trajectory, no matter how unethical or irresponsible it is?
Companies that signed the pledge include Adobe, Google, IBM, HP, Qualcomm, Microsoft, and OpenAI.
Apple, Meta, Mistral, TikTok, and Anthropic were criticized for not joining the pact, though Meta said it would not "rule out our joining the AI Pact at a later stage."
But really, who cares? Signing or not signing the scout's-honor pact has the same outcome for both sets of companies. Just look at OpenAI, which signed the pact and then went home to discuss its world-domination plans that involve restructuring into a for-profit!
Remember, this isn't the only AI pact getting spit handshakes.
Last year, the Biden-Harris Administration secured voluntary commitments from leading AI companies — including Meta and Anthropic — to move toward safe, secure, and transparent development of AI technology.
In December 2023, Meta and IBM also spearheaded the launch of the AI Alliance in collaboration with over 50 founding members, tasked with bringing together leading developers to address safety concerns and to identify and mitigate AI risks before products reach the world. Of course, that came just a month after Meta disbanded its Responsible AI team.
In the past decade, there have been dozens of AI pacts and ethical initiatives, yet here we are in 2024. I'd trade all those "cross my hearts" for decent regulation, but maybe that's wishing for too much.