Applications of AI and other technological growth areas in the security industry will likely require developments in critical areas of risk management and control for all organizations using them. Use of AI in security applications could strengthen companies’ cyber preparedness: it enables more advanced mitigation techniques against threat actors, faster analysis of vulnerabilities, the ability to simulate threat-actor scenarios, and improvements in data integrity, security, and utilization, among other applications. However, this amplifies, rather than replaces, the need for robust risk management. Without it, companies may be limited in their ability to proactively identify, assess, and mitigate risks, leaving them ill-prepared for the dynamic cybersecurity landscape even when AI (and other technologies) are applied to security.
Security for AI
The other major aspect of the security-AI intersection is the mitigation of security exposures related to the implementation and application of AI. These include security vulnerabilities that may be incorporated in the body of both open-source and proprietary software on which AI is built, the exposure of AI/ML functionality to misuse or abuse, and the potential for adversaries to leverage AI to define and refine new types of exploits.
This area has already begun to affect the cybersecurity products and services markets, from startups to major vendors and systems integrators, including a significant presence at the 2023 RSA Conference's Innovation Sandbox and the Black Hat Startup Spotlight. Practitioners are growing the body of research on security and privacy threats that target AI, and they are identifying ways to detect and defend against malicious activity across a number of concerns. Among the most prominent recent examples, the Generative Red Team Challenge hosted by the AI Village at DEF CON 2023 was, according to organizers, the largest "red teaming" exercise held so far for any group of AI models. Supported by the White House Office of Science and Technology Policy, the National Science Foundation's Computer and Information Science and Engineering Directorate, and the Congressional AI Caucus, the exercise subjected models provided by Anthropic, Cohere, Google LLC, Hugging Face Inc., Meta Platforms Inc., NVIDIA Corp., OpenAI, and Stability AI, with participation from Microsoft Corp., to testing on an evaluation platform provided by Scale AI. Other partners in the effort included Humane Intelligence, SeedAI, and the AI Vulnerability Database (AVID).
Existing approaches that have demonstrated value are getting an uplift in this new arena. MITRE Corp., for example, spearheaded an approach to threat characterization with its Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) knowledgebase, which describes threat attributes in ways consumable by detection and response technologies to improve performance and foster automation. Recently, MITRE introduced a similar initiative, Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS), which seeks to bring the same systematic approach demonstrated with ATT&CK to threat characterization for AI. While ATT&CK focuses on threats, the AI Vulnerability Database, noted above as a participant in the Generative Red Team Challenge, is a separate effort to catalog exposures, described as "an open-source knowledgebase of failure modes for AI models, datasets, and systems."
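To illustrate how such knowledgebases become "consumable" by detection and response tooling, the minimal sketch below tags alerts with framework and technique identifiers so that downstream automation can key off them. The alert fields and mapping table are hypothetical, and the specific technique IDs are shown for illustration only; they should be checked against the current ATT&CK and ATLAS matrices.

```python
# Minimal sketch: enriching detection alerts with MITRE technique identifiers
# so response playbooks can consume them. Fields and mappings are hypothetical;
# technique IDs are illustrative and should be verified against the live matrices.
from dataclasses import dataclass, field

TECHNIQUE_MAP = {
    # Conventional threat behavior -> ATT&CK technique ID
    "credential_phishing": ("ATT&CK", "T1566"),        # Phishing
    # AI-specific threat behavior -> ATLAS technique ID
    "prompt_injection": ("ATLAS", "AML.T0051"),         # LLM prompt injection
    "training_data_poisoning": ("ATLAS", "AML.T0020"),  # Poison training data
}

@dataclass
class Alert:
    source: str
    behavior: str
    tags: list = field(default_factory=list)

def enrich(alert: Alert) -> Alert:
    """Attach a framework/technique tag so automated response can key off it."""
    mapping = TECHNIQUE_MAP.get(alert.behavior)
    if mapping:
        framework, technique_id = mapping
        alert.tags.append(f"{framework}:{technique_id}")
    return alert

if __name__ == "__main__":
    print(enrich(Alert(source="llm-gateway", behavior="prompt_injection")).tags)
```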
Techniques that have been used more broadly to secure the software supply chain are also being applied to AI by those specializing in this domain. Another perspective being brought to bear on the challenge is that of safety, whereby those with experience in both AI and safety engineering are applying the practices of safety assurance to AI, with security included among the objectives.
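As one narrow illustration of a supply-chain control carried over to AI, the sketch below verifies a model artifact against a pinned SHA-256 digest before it is loaded, a common integrity check for software dependencies. The file path and expected digest are placeholders.

```python
# Minimal sketch: integrity check for an AI model artifact before loading.
# The expected digest would be pinned when the artifact is approved for use.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0" * 64  # placeholder; replace with the pinned digest

def verify_artifact(path: Path, expected: str = EXPECTED_SHA256) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected

# Example usage (commented out; requires a local artifact file):
# if not verify_artifact(Path("model.safetensors")):
#     raise RuntimeError("Model artifact failed integrity check; refusing to load")
```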
The aim of these initiatives is not only to help increase assurance for those adopting AI. They also seek to make AI safer by taking a more active stance in defending innovative technology and by providing foundations for sound digital governance, auditability, and controls covering security, privacy, safety, and other risks. Many of these efforts are in their infancy, and a growing number of viable use cases will inevitably yield standards, norms, and regulation that help balance safety and security with innovation and progress.
Such standards, norms, and regulation will then need to be translated into updated governance and risk management strategies across organizations if those organizations are to succeed in an increasingly digital future.
Companies will need to maintain effective governance for AI and other technological developments, a hallmark of adaptive, successful organizations today. In our view, effective governance includes the establishment of policies and procedures for AI usage, oversight from boards of directors, and a proactive approach to assessing and mitigating risks. Furthermore, governance should include regular audits, transparency in AI decision-making, and mechanisms for adapting to changing threat landscapes, ensuring responsible and secure AI integration across the organization.
Frequently asked questions:
What is the relationship between AI and cybersecurity?
There are two main aspects: AI for security, and security for AI. Engaging AI in threat recognition and streamlining processes of data collection and response to better mitigate threats are two examples of AI for security: engaging AI in ways that improve cybersecurity efforts for organizations. A focus on the actual or potential vulnerabilities and exposures of AI speaks to efforts to improve security for AI, which in turn helps assure confidence in AI and the increasingly significant role it is playing in technology evolution.
What are the security and privacy risks associated with AI, and how can these be mitigated?
The range of risks is broad, and investment in addressing these issues is increasing, especially given the focus on generative AI. Any summary of these risks will therefore be shaped by the ongoing evolution of AI — an evolution that is happening with breathtaking speed — and lists composed even in the near future may differ from any presented today. Part of the challenge with generative AI in particular is that its interactions can be very broad and are dynamic because large language models learn from ongoing “conversational” interactions. An understanding of the nature of their risks and exposures is therefore developing along with them. Among the risks already seen, however, are the potential for manipulating large language models (LLMs) to disclose sensitive or protected information; the risks of exposing sensitive content to LLMs as training data beyond the acceptable control of organizations; the possibility of malicious actors “poisoning” training data to skew outputs; malicious implementations of AI that can be used for nefarious purposes; and potential compromise of the software supply chain used in developing AI implementations. These tactics may be employed in efforts ranging from misinformation or disinformation to privacy exploits to a variety of security threats.
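As a narrow illustration of one mitigation for the data-exposure risk described above, the sketch below screens text for obviously sensitive content before it is passed to an LLM as a prompt or retained as training data. The patterns are illustrative only; production deployments would rely on fuller data loss prevention tooling and policy.

```python
# Minimal sketch: redact obviously sensitive patterns before text reaches an LLM.
# Patterns are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_hint": re.compile(r"\b(sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which categories were hit."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, hits

clean, findings = redact("Contact jane.doe@example.com, SSN 123-45-6789")
print(clean, findings)
```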
How is AI manifesting itself in the landscape of cybersecurity technology?
With respect to “AI for security” (as defined above), AI already plays a significant role in cybersecurity, and the potential for generative AI applications has become as apparent in this field as it has in other technology domains. Many of generative AI’s major players also have a significant presence in fields such as cyber threat detection and response. These efforts leverage the ability of AI to catalog and recognize adversary tactics and pull together relevant contextual data and threat intelligence quickly, which can help accelerate response to security threats and mitigation of their impact. Current efforts target the ability of AI to digest overwhelming volumes of security telemetry and help augment the ability of skilled security experts to respond to demand. The ability of generative AI to create programming code, meanwhile, has potential for accelerating security automation.
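To make that telemetry-triage pattern concrete, the sketch below shows one way a batch of alerts might be condensed into a prompt for an LLM-assisted summary. The call_llm() function is a hypothetical placeholder standing in for whichever approved model interface an organization uses.

```python
# Minimal sketch: assembling security alerts into a single triage prompt.
# call_llm() is a hypothetical placeholder, not a real API.
import json

def build_triage_prompt(alerts: list[dict]) -> str:
    header = (
        "You are assisting a security analyst. Summarize the alerts below, "
        "group likely-related events, and rank them by urgency."
    )
    return header + "\n\n" + json.dumps(alerts, indent=2)

def call_llm(prompt: str) -> str:
    # Placeholder for a call to an approved model endpoint.
    return "(model-generated triage summary would appear here)"

alerts = [
    {"id": 1, "source": "edr", "event": "suspicious powershell", "host": "hr-laptop-12"},
    {"id": 2, "source": "email-gw", "event": "credential phishing reported", "user": "jdoe"},
]
print(call_llm(build_triage_prompt(alerts)))
```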
In terms of security for AI, major players in generative AI as well as several startups and innovators have come forward with new approaches to address concerns regarding the security of AI. Many of these have been featured at major cybersecurity conferences such as the RSA Conference and Black Hat/DEF CON. These innovators are seeking to mitigate many of the known or potential threat vectors already seen targeting AI — and aim to position themselves to tackle those that emerge as the rapid pace of AI innovation continues.
Related research
- 451 Research’s Voice of the Enterprise: AI & Machine Learning, Infrastructure 2023, November 28, 2023, S&P Global.
- 451 Research’s Voice of the Enterprise: AI & Machine Learning, Infrastructure 2023, July 19, 2023, S&P Global Market Intelligence.
- RSA Conference 2023: AI everywhere all at once, May 16, 2023, S&P Global Market Intelligence.
- Changes abound for information security teams to address skill shortages and labor retention – Highlights from Voice of the Enterprise: Information Security, June 30, 2023, S&P Global Market Intelligence.
- Microsoft, OpenAI partnership provides cybersecurity's generative AI moment, March 28, 2023, S&P Global Market Intelligence.
- Generative AI likely to disrupt security, too, Feb. 24, 2023, S&P Global Market Intelligence.
Contributors