In this photo, black smoke rises near the Pentagon, the U.S. Department of Defense building.
News of the Pentagon being hit quickly spread on social media, sending financial markets into a tailspin.
It turned out to be a fake image created by generative AI.
As generative AI grows more advanced, it is becoming virtually impossible to distinguish real images from deepfakes.
The White House announced that seven companies leading AI development, including Google and Microsoft, have agreed to create safeguards for AI.
For starters, they have agreed to require digital attribution, or watermarks, indicating that sophisticated fake images, known as deepfakes, were created by AI.
[Senior White House official: “Americans will now be able to tell which content was created by AI. The technology is there, and the key here is that both voice and video will be watermarked.”]
The commitments also require AI companies to disclose their systems and security features before releasing AI products to the public, and to publicly report flaws and risks in their AI technology.
These are the first concrete measures from the White House since President Biden held a meeting with representatives of AI companies in May.
[Joe Biden/President of the United States/May 5: “I hope you will teach us what you think is most necessary to advance society and protect it at the same time. It’s really, really important.”]
However, the announcement is a voluntary commitment by the companies, made without congressional legislation, and is not legally binding.
The White House said it will soon issue an executive order, adding that it cannot wait years for Congress to draft and pass laws regulating AI technology.
It added that it has already had discussions with several allies, including South Korea.