AI expands the range of threats and challenges global cyber ethics

How to forecast, prevent, and mitigate the effects of malicious uses of AI is the topic of a new report produced by 14 institutions worldwide. The new threats will affect digital infrastructure and the design and distribution of AI systems
Seán Ó hÉigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk

In 2017, investment in Artificial Intelligence tripled: $5 billion was invested in machine learning ventures last year alone, and companies are expected to spend about $2.5 billion worldwide on AI in 2018. Last February, Microsoft co-founder Paul Allen announced a $125 million investment to teach good sense and judgment to computers over the next three years through the Seattle-based Allen Institute for Artificial Intelligence, known as AI2. Enterprises are focusing on a new need: to predict, prevent, and mitigate the malicious use of AI.

“AI may pose new threats, or change the nature of existing threats, across cyber, physical, and political security”, Seán Ó hÉigeartaigh

That is the question posed by a 100-page report released last February. Written by 26 authors from 14 institutions, including the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, and the Center for a New American Security, the report provides a sweeping survey of the security implications of artificial intelligence.

The authors argue that AI is not only changing the nature and scope of existing threats, but also expanding the range of threats. “We focused on ways in which people could do deliberate harm with AI,” said Seán Ó hÉigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk. “AI may pose new threats, or change the nature of existing threats, across cyber, physical, and political security”.


The technologies considered are those already available or likely to be within the next five years, so the message is one of urgency. “The malicious use of AI will impact how we construct and manage our digital infrastructure, as well as how we design and distribute AI systems, and will likely require policy and other institutional responses”.

AI systems tend to be more efficient and more scalable than traditional tools, he says. Additionally, the use of AI can increase the anonymity and psychological distance a person feels from the actions carried out, potentially lowering the barrier to committing crimes and acts of violence. Moreover, AI systems have their own unique vulnerabilities, including risks from data poisoning, adversarial examples, and the exploitation of flaws in their design. “AI-enabled attacks will outpace traditional cyberattacks, because they will generally be more effective, more finely targeted, and more difficult to attribute”.
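The adversarial-example vulnerability mentioned above can be illustrated with a toy sketch. The linear "maliciousness" classifier, its weights, and the perturbation size below are all hypothetical, invented for illustration only; they are not from the report:

```python
import math

# Hypothetical toy model: a linear classifier scoring how "malicious"
# an input looks, and a fast-gradient-sign-style perturbation that
# flips its decision. All weights and inputs here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, x):
    """Return the toy model's probability that the input is malicious."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)))

def fgsm_perturb(weights, x, epsilon):
    """Nudge each feature by epsilon in the sign direction that most
    lowers the 'malicious' score (the gradient sign for a linear model)."""
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [2.0, -1.0, 3.0]
x = [0.4, 0.2, 0.3]                     # flagged as malicious by the model
adv = fgsm_perturb(weights, x, epsilon=0.3)

print(predict(weights, x) > 0.5)        # True  - original input is flagged
print(predict(weights, adv) > 0.5)      # False - small perturbation evades it
```

A defender who only tested the model on natural inputs would never see the evasion; this is why the report's authors point to red teaming and adversarial testing as necessary practices.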

The kinds of attacks companies need to prepare for are not limited to sophisticated computer hacks. The authors of the report identify three primary security domains: digital security, which largely concerns cyberattacks; physical security, which refers to attacks carried out with drones and other physical systems; and political security, which includes examples such as surveillance, persuasion via targeted propaganda, and deception via manipulated videos.

To prepare for malicious uses of AI across these domains, the report suggests robot registration requirements, points of control, and countermeasures. To improve digital security, companies can promote consumer awareness and incentivize white hat hackers to find vulnerabilities in code. “We may also be able to learn from the cybersecurity community and employ measures such as red teaming for AI development, formal verification in AI systems, and responsible disclosure of AI vulnerabilities”.

To improve physical security, policymakers may actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges. Meanwhile, media platforms may be able to minimize threats to political security by offering image and video authenticity certification, fake news detection, and encryption.

In practice

Currently, AI systems can scan and read text, interpret some pictures, and play board games, but they cannot react to unexpected situations or tell you which way water would flow on a hill. “This project won’t be something that is perfected and sold within a couple of years”, said AI2 CEO Oren Etzioni about the expected results of Allen’s investment. “It takes time and patience to teach machines”.

Despite these limitations, 15% of the largest and leading organizations today are using machine learning and AI for customer satisfaction, growth, and other aspects of business management, according to channel veteran John Shaw, CEO of Austin, a Texas-based company, speaking at The Channel Company’s most recent XChange event.


The opinions in the articles/columns published here reflect solely the position of their authors and do not constitute endorsement, recommendation, or favoring by Infor Channel or anyone else involved in the publication. All rights reserved. Any form of reuse, distribution, reproduction, or partial or total publication of this content without prior authorization from Infor Channel is prohibited.