While enterprises use GenAI to improve their cybersecurity posture, adversaries are using the same technology to create and spread highly effective disinformation. To mitigate this threat, Gartner recommends that CISOs clearly define responsibility for anti-disinformation programs and invest in appropriate tools and techniques.
Because large language models (LLMs) use vast amounts of data and continually create new data, GenAI applications can exacerbate security and privacy risks. Organizations will need to take a multifaceted approach to building a responsible AI system. Additionally, stakeholders will need to work together to comprehensively assess the impact of GenAI on enterprise cybersecurity and find solutions to address the issues.
Mitigating Shadow AI Risks by Implementing the Right Policies
20.03.2024
With the rise of artificial intelligence, security is becoming increasingly important in every organization, which creates a new problem: "shadow AI" applications that must be checked for compliance with security policies, writes Kamal Srinivasan, senior vice president of product and program management at Parallels (part of Alludo), on the Network Computing portal.
The shadow IT problem of recent years has evolved into the shadow AI problem. The growing popularity of large language models (LLMs) and multimodal models has led product teams within organizations to use these models to build use cases that improve productivity. Numerous tools and cloud services have emerged that make it easier for marketing, sales, engineering, legal, HR, and other teams to create and deploy generative AI (GenAI) applications. However, despite the rapid adoption of GenAI, security teams have yet to work out its implications and the corresponding policies. Meanwhile, product teams building applications are not waiting for security teams to catch up, creating potential security issues.
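Before policies exist, security teams often cannot even see where shadow AI is in use. One common first step is to inventory outbound traffic to known GenAI endpoints. The sketch below illustrates the idea, assuming a simplified "user host" proxy-log format and a small, illustrative list of API domains; a real deployment would draw both from the organization's own proxy and an actively maintained endpoint inventory.

```python
# Minimal sketch: flag potential "shadow AI" usage by matching outbound
# hosts in proxy logs against known GenAI API endpoints.
# The domain list and the "user host" log format are illustrative
# assumptions, not an authoritative inventory or a real log schema.

GENAI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, host) pairs where host matches a GenAI endpoint."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        user, host = parts
        if host in GENAI_DOMAINS:
            hits.append((user, host))
    return hits

logs = [
    "alice api.openai.com",
    "bob internal.example.com",
    "carol api.anthropic.com",
]
print(flag_shadow_ai(logs))
# → [('alice', 'api.openai.com'), ('carol', 'api.anthropic.com')]
```

Such an inventory does not block anything by itself; it gives the security team the visibility needed to write and enforce the compliance policies the article calls for.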