The shadow IT problem of recent years has evolved into the shadow AI problem
Posted: Thu Feb 06, 2025 4:01 am
Mitigating Shadow AI Risks by Implementing the Right Policies
With the rise of artificial intelligence, security is becoming increasingly important in every organization, and it brings a new problem: "shadow AI" applications that must be checked for compliance with security policies, writes Kamal Srinivasan, senior vice president of product and program management at Parallels (part of Alludo), on the Network Computing portal.
The shadow IT problem of recent years has evolved into the shadow AI problem. The growing popularity of large language models (LLMs) and multimodal models has led product teams within organizations to use these models to build use cases that improve productivity. Numerous tools and cloud services have emerged that make it easier for marketing, sales, engineering, legal, HR, and other teams to create and deploy generative AI (GenAI) applications. However, despite the rapid adoption of GenAI, security teams have yet to work out the implications and set policies. Meanwhile, the product teams building these applications are not waiting for security teams to catch up, creating potential security issues.
IT and security teams are grappling with shadow AI applications that can lead to network intrusions, data leaks, and disruptions. At the same time, organizations must avoid an overly rigid approach that could stifle innovation and prevent breakthrough product development. Enforcing policies that bar users from experimenting with GenAI applications will hurt productivity and lead to further silos.
The Shadow AI Visibility Problem
Shadow IT has created a community of workers who use unauthorized devices to support their workloads. It has also given rise to “citizen developers” who can use no-code or low-code tools to build applications without going through official channels to obtain new software. Today, we have citizen developers using AI to build AI applications or other types of software.
These AI-powered apps can drive productivity, speed up project completion, or demonstrate how far LLMs can go in solving complex DevOps problems. While shadow AI apps are typically not malicious, they can consume cloud resources, increase storage costs, open network threats, and lead to data leaks.
How can IT departments gain visibility into shadow AI? It makes sense to strengthen the practices already used to mitigate shadow IT risks, with the caveat that LLMs can make anyone a citizen developer. At the same time, the volume of applications and data being generated is growing significantly. This makes data protection a more complex task for IT teams, who must observe, monitor, learn, and then act, as the sketch below illustrates.
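As a concrete illustration of the "observe and monitor" step, here is a minimal Python sketch that scans an egress proxy log for requests to well-known GenAI API endpoints and tallies them per user. It is not a vendor tool or the author's method; the log path, the line format (timestamp, user, destination host), and the host watchlist are all assumptions made for illustration, and a real deployment would pull this data from a proxy, DNS logs, or a SIEM.

# Minimal sketch: surface potential shadow AI usage from an egress proxy log.
# Assumptions (hypothetical): each log line looks like "timestamp user dest_host",
# and the watchlist below is illustrative, not exhaustive.
from collections import Counter
from pathlib import Path

# Illustrative watchlist of hosts associated with popular GenAI services.
GENAI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def scan_proxy_log(log_path: str) -> Counter:
    """Count requests per (user, host) for hosts on the GenAI watchlist."""
    hits: Counter = Counter()
    for line in Path(log_path).read_text().splitlines():
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _timestamp, user, host = parts[0], parts[1], parts[2]
        if host in GENAI_HOSTS:
            hits[(user, host)] += 1
    return hits

if __name__ == "__main__":
    # Hypothetical log location; in practice this comes from your proxy or SIEM.
    for (user, host), count in scan_proxy_log("egress_proxy.log").most_common():
        print(f"{user} -> {host}: {count} requests")

A report like this is a starting point for the "learn, then act" steps: rather than blocking traffic outright, IT can use it to find the teams experimenting with GenAI and bring them under sanctioned policies.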