Breaking AI News - 7 Top Companies Agree to AI Guidelines
What is it, and what does it mean for education?
On Friday, news broke that seven leading A.I. companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) made several voluntary commitments, including those listed in this N.Y. Times article:
As part of the agreement, the companies committed to:
Conducting security testing of their A.I. products, in part by independent experts, and sharing information about their products with governments and others who are attempting to manage the risks of the technology.
Ensuring that consumers can spot A.I.-generated material by implementing watermarks or other means of identifying generated content.
Publicly reporting the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.
Deploying advanced artificial intelligence tools to tackle society’s biggest challenges, like curing cancer and combating climate change.
Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of A.I. tools.
What This Means For School Libraries and Education:
Librarians are in the business of educating students on how to identify fake news and bias, and that job is becoming much harder in the age of generative A.I. The commitment to watermarking images and audio, and to developing means of identifying A.I.-generated information, could be a game-changer for determining what is A.I.-generated and what is not.
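How would a watermark on generated *text* even work? One widely discussed research approach (a "green list" statistical watermark; this is an illustration, not necessarily what these companies will deploy) has the generator prefer words from a pseudo-random subset keyed to the previous word, so a detector can later count how often that preference shows up. The function names below (`green_set`, `generate_watermarked`, `green_fraction`) are hypothetical, for illustration only:

```python
import hashlib

def green_set(prev_word, vocab, fraction=0.5):
    # Deterministically partition the vocabulary into a "green" half,
    # keyed by a hash of the previous word. Both the generator and the
    # detector can recompute this set without any secret model access.
    scored = sorted(vocab,
                    key=lambda w: hashlib.sha256((prev_word + w).encode()).hexdigest())
    return set(scored[: int(len(scored) * fraction)])

def generate_watermarked(seed_word, vocab, length):
    # Toy "generator": a real language model would softly bias its
    # sampling toward green words; here we just pick one deterministically.
    words = [seed_word]
    for _ in range(length):
        words.append(sorted(green_set(words[-1], vocab))[0])
    return words

def green_fraction(words, vocab):
    # Detector: watermarked text scores near 1.0, ordinary text near 0.5,
    # because unmarked writing has no reason to favor the green half.
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in green_set(prev, vocab))
    return hits / max(len(words) - 1, 1)
```

The key design point for educators: detection here is purely statistical, so it works on long passages but gets unreliable on short ones, which is one reason the companies' pledge also covers image and audio watermarks.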
Great Video:
A few months ago, CBS aired a segment on deepfake technology and how tech companies are looking for ways to help people identify it with watermarks:
What are your thoughts on this new pledge? Let's keep the conversation going in the AI School Librarian Facebook Group.