Deepfakes Could Pose a Threat to National Security, But Experts Are Divided on How to Address Them: Broadband Breakfast

June 3, 2021 — The emerging and growing phenomenon of video manipulation known as deepfakes could pose a threat to the country’s national security, policymakers and technology experts said at an online conference Wednesday, but the best way to deal with it divided the panel.

A deepfake is a form of synthetic media in which a person’s likeness is digitally inserted into a photo or video, creating the illusion that they were actually there. A well-done deepfake can make a person appear to be doing things they’ve never done and saying things they’ve never said.

“The way technology has evolved, it’s literally impossible for a human to actually detect that something is a deepfake,” said Ashish Jaiman, director of technology operations at Microsoft, at an online event hosted by the Information Technology and Innovation Foundation.

Experts are wary of the implications as this technology becomes increasingly accessible to the general population, but they are divided on how best to resolve the brewing dilemma. Some believe better technology to detect deepfakes is the answer, while others say a change in social perspective is needed. Still others argue that such a societal change would be dangerous and that the solution actually lies in the hands of journalists.

Deepfakes threaten democracy

Such technology was no problem when only Hollywood could afford such impressive special effects, said Rep. Anthony González, R-Ohio, but it has advanced to the point that almost anyone can get their hands on it. He said that, given the spread of disinformation and the challenge of maintaining a well-informed public, deepfakes could be exploited to spread lies and affect elections.

As of yet, however, no evidence exists that deepfakes have been used for this purpose, according to Daniel Kimmage, the acting coordinator of the Department of State’s Global Engagement Center. But he, along with the other panelists, agreed that the technology could be used to influence elections and deepen the public’s already growing mistrust of the news media. They believe it is better to act preventively and address the problem before it becomes a crisis.

“Once people realize that they cannot trust the pictures and videos they see, not only will they not believe the lies, but they will not believe the truth,” said Dana Rao, executive vice president of the software company Adobe.

New technology as a solution

Jaiman said Microsoft has been developing sophisticated technologies to detect deepfakes for more than two years. Deborah Johnson, professor emeritus of technology at the University of Virginia’s School of Engineering, called this approach an “arms race,” in which detection technology must advance faster than deepfake technology itself.

But Jaiman was the first to admit that, despite Microsoft’s hard work, detecting deepfakes remains a grueling challenge. It is much more difficult to detect a deepfake than to create one, he said. He believes a societal response is needed and that technology alone will be insufficient to solve the problem.
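To illustrate the general pattern much of this detection research follows, the sketch below fine-tunes a pretrained image classifier to label face crops as real or synthetic. It is a minimal, hypothetical example rather than Microsoft’s actual system; the folder layout, backbone choice, and training settings are all assumptions.

```python
# Minimal illustrative sketch of a frame-level deepfake classifier.
# NOT Microsoft's detection system; it only shows the common pattern of
# fine-tuning a pretrained CNN on real vs. synthetic face crops.
# Assumes a hypothetical folder layout: data/train/real and data/train/fake.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset: face crops labeled "real" or "fake" by folder name.
train_data = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Pretrained ResNet-18 with a new two-class head (real vs. fake).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # short demo run
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

As the panelists noted, classifiers like this tend to lag behind new generation techniques, which is why detection alone is widely seen as insufficient.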

Societal change as a solution

Jaiman argued that people must become skeptical consumers of information. Until technology catches up, deepfakes can be detected more easily, and misinformation can be more readily countered, he believes people should approach news online with the awareness that they could easily be fooled.

But critics believe this approach of encouraging skepticism could be problematic. Gabriela Ivens, head of open source research at Human Rights Watch, said that “it becomes very problematic if people’s initial reactions are to believe nothing.” Ivens’ job is to research and expose human rights violations, and she said growing mistrust of the media would make it more difficult to gain the public support her work requires.

She said we must resist becoming a “zero trust society.”

Vint Cerf, Google’s vice president and chief internet evangelist, said it is up to journalists to prevent the spread of mistrust from growing further. He accused journalists not of deliberately lying, but of often misleading the public. He believes the real risk of deepfakes lies in their ability to erode Americans’ trust in the truth, and that it is up to journalists to restore that already eroding trust by being completely transparent and honest in their reports.

