Gartner: Deepfakes and generative AI are ushering in a world of zero trust

Gartner has released a series of “Predicts 2021” research reports, one of which outlines the significant impact artificial intelligence (AI) will have in the next few years, including a wide range of ethical and social issues.

In the report "Predicts 2021: Artificial Intelligence and Its Impact on People and Society", five Gartner analysts predict the changes that will unfold by 2025. The report focuses on the so-called second-order consequences of artificial intelligence, the unintended results that arise as a new technology is adopted.

For example, generative AI can now create very realistic photos of people and objects that don’t actually exist.


Gartner predicts that by 2023, 20% of successful account takeover attacks will make use of these AI-generated deepfakes. The report states: "AI's ability to create and generate hyper-realistic content will have a transformative impact on what people believe they are seeing with their own eyes."

The report makes five predictions about the AI market and offers recommendations on how companies should respond and adapt to these coming challenges:

By 2025, pre-trained AI models will be concentrated among 1% of vendors, making the responsible use of AI a societal concern

By 2023, 20% of successful account takeover attacks will use deepfakes, which will become part of mainstream social engineering attacks

By 2024, 60% of AI providers will include harm and misuse mitigation as part of their software

By 2025, 10% of governments will use synthetic data to train AI, avoiding privacy and security issues

By 2025, 75% of conversations in the workplace will be recorded and analyzed to increase organizational value and assess risks

Any one of these analyses is enough to catch the attention of AI observers. Taken together, the predictions outline a grim outlook involving ethical issues, potential abuse of AI, and the erosion of privacy in the workplace.

How to respond

If Gartner’s analysts are right, concerns about AI’s impact on truth and trust will be a dominant topic over the next few years, and successful companies will need to be ready to adapt to those concerns quickly.

One important theme in the report is that companies relying on AI, whether for services or products, should establish ethics boards. Gartner notes that this is especially important for companies that plan to record and analyze workplace conversations: a review board that includes employee representation should be set up to ensure the fair use of conversation data.

Gartner also recommends that companies establish standards for the responsible use of AI, and give priority to vendors that "can demonstrate that they develop AI responsibly and address the related societal problems."

As for the security issues surrounding deepfakes and generative AI, Gartner recommends organizing training on recognizing deepfakes. The report says: "We are now entering a zero-trust world. Nothing can be trusted unless it is certified by a cryptographic digital signature."
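To illustrate what "certified by a cryptographic digital signature" could look like in practice, here is a minimal sketch in Python using the pyca/cryptography library: a publisher signs the raw bytes of a piece of media, and a consumer refuses to trust the content unless the signature verifies. The key handling, content, and helper function are illustrative assumptions, not something specified in Gartner's report.

```python
# Minimal sketch: trust content only if its cryptographic signature verifies.
# Assumes the pyca/cryptography package; keys and content are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair and sign the content bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"raw bytes of a photo or video clip"
signature = private_key.sign(content)

# Consumer side: verify the signature before accepting the content.
def is_authentic(pub: Ed25519PublicKey, data: bytes, sig: bytes) -> bool:
    try:
        pub.verify(sig, data)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False

print(is_authentic(public_key, content, signature))                 # True
print(is_authentic(public_key, content + b" tampered", signature))  # False
```

In a real deployment the publisher's public key would itself need to be distributed through a trusted channel (for example, a certificate chain), which is where most of the practical difficulty of a zero-trust content pipeline lies.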

There is much more in the report worth paying attention to. It predicts, for example, that even the best deepfake detection software will top out at a 50% identification rate in the long run, and that by 2023 a major U.S. company will use conversation analysis to determine employee compensation. These analyses leave plenty to worry about, along with potential ways to respond.
