Shadow AI and How To Avoid It

IT professionals have long been familiar with shadow IT, in which a business’s employees create IT solutions without official authorization. More recently, the rising use of generative artificial intelligence (AI) has resulted in the emergence of shadow AI, which is expected to grow rapidly as businesses incorporate AI into more operations.

This form of shadow IT is of particular concern to employers because generative AI tools are readily accessible from any browser. Shadow AI poses several significant risks to businesses, but multiple strategies are also available to mitigate them.

What is shadow AI?

Employees engage in shadow AI when they use generative AI tools like ChatGPT without the knowledge or permission of their employer. It’s usually well-intentioned, as these tools can automate repetitive tasks like writing batch emails and responding to customer queries. However, shadow AI poses security risks to systems and data.

How does shadow AI occur?

Shadow AI occurs most often when companies fail to engage their employees in conversations about AI, especially when teams don’t receive the tools they need to succeed through authorized means.

In these cases, employees may use large language models (LLMs) without informing their IT department.

A 2023 Salesforce study showed that nearly half of respondents had used generative AI, and over one-third used it daily.

Their findings indicated that employees routinely use generative AI tools to perform tasks like writing copy, creating images, and writing code. This practice creates significant challenges for IT governance, as the IT department must then decide which uses of AI to allow and which to prohibit while maintaining normal business operations.

The same Salesforce study indicates that AI use is expanding; most respondents reported an increase in generative AI use compared to its first appearance at their organization.

Shadow AI Risks

The risks of shadow AI may be classified into risks involving sensitive information and those related to remote access.

Sensitive Information

Data security is a major concern with shadow AI, as it raises the possibility of sensitive information entering a publicly available LLM. This can result in the disclosure of personally identifiable information (PII) and the infringement of intellectual property (IP) rights.

Another risk of this type is that software developers may enter proprietary code into AI tools, exposing it in ways that could help hackers design malware to exploit it. The risk is particularly high at smaller businesses using free versions of generative AI like ChatGPT 3.5, which was only trained on data through January 2022. As a result, ChatGPT 3.5 can produce hallucinations and inaccuracies when asked about anything more recent.

The deployment of generative AI has caused breaches of sensitive information at major companies like Microsoft and Samsung over the past two years. While AI can help organizations drive innovation through creative experimentation, businesses should not give this technology full rein. However, complete prohibition isn’t the answer either: employees will fail to adhere to such a policy, and it may alienate the best talent.

A good strategy for protecting data in the face of shadow AI is to set reasonable boundaries, as many high-performing employees are particularly determined to use AI to streamline their workflows.
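
As a concrete illustration of one such boundary, the sketch below checks outgoing prompts for common PII patterns before they reach a public LLM. This is a minimal Python example under assumed conditions: the regex patterns and the `check_prompt` helper are hypothetical placeholders, not a production-grade detector.

```python
import re

# Minimal sketch of a pre-submission guardrail: scan a prompt for common
# PII patterns before it leaves the organization. The patterns below are
# illustrative, not exhaustive; a real deployment would use a dedicated
# DLP or PII-detection tool.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def find_pii(prompt: str) -> list[str]:
    """Return the names of any PII patterns detected in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


def check_prompt(prompt: str) -> str:
    """Raise if the prompt contains likely PII; otherwise pass it through."""
    hits = find_pii(prompt)
    if hits:
        raise ValueError(f"Prompt blocked, possible PII detected: {', '.join(hits)}")
    return prompt


# This prompt would be blocked before ever reaching a public LLM:
# check_prompt("Draft a reply to jane.doe@example.com about SSN 123-45-6789")
```

In practice, a data loss prevention (DLP) tool would perform this kind of check at the network or browser level rather than in application code.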

Remote Access

The threat of shadow AI is greatest when users require remote access, whether because they’re geographically separated from the main business hub or because they’re reaching cloud-based AI platforms over the public internet.

Cloud access security brokers can address both concerns with built-in privacy and security controls. Microsoft Azure OpenAI, for example, is an Azure service that allows administrators to control the data users can upload.
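
To make the idea concrete, here is a minimal Python sketch of the kind of rule such a control enforces: uploads are permitted only to approved AI services, and only for permitted data classifications. This is a generic illustration, not Azure’s actual API; the domains and labels are hypothetical.

```python
# Hypothetical sketch of a broker-style upload control: allow traffic only
# to approved AI services, and only for permitted data classifications.
# Domains and labels are illustrative examples, not a real vendor API.
APPROVED_AI_DOMAINS = {"approved-ai.example.com"}
ALLOWED_DATA_LABELS = {"public", "internal"}


def allow_upload(destination_domain: str, data_label: str) -> bool:
    """Permit an upload only to an approved service with permitted data."""
    return destination_domain in APPROVED_AI_DOMAINS and data_label in ALLOWED_DATA_LABELS


# A confidential file bound for an unapproved chatbot is rejected:
assert not allow_upload("chat.unapproved-ai.example.com", "confidential")
assert allow_upload("approved-ai.example.com", "internal")
```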

Another tactic for protecting remote-access environments from shadow AI is to monitor the data flowing through the organization. Existing security solutions should already detect unusually large information flows, which can indicate an AI-related breach.
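
The sketch below shows the basic logic in Python: sum each user’s outbound traffic to known AI endpoints and flag anyone far above a baseline. The endpoint list, baseline, and log format are all assumptions for illustration; in practice this signal would come from a SIEM or network monitoring tool.

```python
from collections import defaultdict

# Illustrative egress check: flag users whose outbound traffic to known AI
# endpoints far exceeds a per-user baseline. The endpoint list, baseline,
# and log format are assumptions made for this example.
AI_ENDPOINTS = {"chat.openai.com", "gemini.google.com"}
BASELINE_BYTES = 1_000_000   # assumed typical daily volume per user
ANOMALY_MULTIPLIER = 10      # flag anything 10x above the baseline


def flag_anomalies(traffic_log: list[tuple[str, str, int]]) -> set[str]:
    """traffic_log entries are (user, destination_domain, bytes_sent)."""
    totals: dict[str, int] = defaultdict(int)
    for user, domain, nbytes in traffic_log:
        if domain in AI_ENDPOINTS:
            totals[user] += nbytes
    return {user for user, total in totals.items()
            if total > BASELINE_BYTES * ANOMALY_MULTIPLIER}


log = [
    ("alice", "chat.openai.com", 25_000_000),  # unusually large upload
    ("bob", "chat.openai.com", 40_000),        # normal usage
]
print(flag_anomalies(log))  # {'alice'}
```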

Most organizations follow a policy of controlled allowance, although notable exceptions exist. Defense contractors, for example, are usually better off with a complete ban on generative AI, given their strict security requirements.

Most businesses will find a compromise between security and the enhanced productivity generative AI yields. The technology is particularly exciting for engineers, who can use AI to perfect in minutes a script that once took weeks to prepare for production.

How to Avoid and Combat Shadow AI in Your Business

IT teams can avoid shadow AI with the following three steps:

  1. Establish a policy for accessing generative AI
  2. Communicate those policies to employees
  3. Reinforce acceptable use through training

1. Establish a policy for accessing generative AI

Developing policies for accessing public AI tools is the first step in preventing shadow AI, as many employees already use these tools. Approaches for managing access include written guidelines, firewall configuration, and access control through virtual private networks (VPNs).

The best strategy for your organization will depend on its unique needs, but it will have to strike a balance between the benefits of AI tools, such as increased productivity and innovation, and their risks, such as security vulnerabilities and data leakage.
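
One way to keep such a policy precise and enforceable is to express it as data. The sketch below is a hypothetical Python example in which each approved tool maps to the purposes it may serve and the data classifications it may handle; the tool names, purposes, and labels are illustrative assumptions.

```python
# Hypothetical access policy expressed as data: each approved tool maps to
# the purposes it may serve and the data classifications it may handle.
# Tool names, purposes, and labels are illustrative examples.
AI_ACCESS_POLICY = {
    "ChatGPT": {
        "purposes": {"drafting copy", "brainstorming"},
        "data": {"public"},
    },
    "GitHub Copilot": {
        "purposes": {"writing code"},
        "data": {"public", "internal"},
    },
}


def is_permitted(tool: str, purpose: str, data_label: str) -> bool:
    """Check a proposed use of an AI tool against the policy."""
    rule = AI_ACCESS_POLICY.get(tool)
    return rule is not None and purpose in rule["purposes"] and data_label in rule["data"]


print(is_permitted("ChatGPT", "drafting copy", "public"))        # True
print(is_permitted("ChatGPT", "drafting copy", "confidential"))  # False
```

Encoding the policy this way means a single source of truth can drive written guidelines, firewall blocklists, and the training material described in the next two steps.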

2. Communicate those policies to employees

Once you’ve established your organization’s policies on AI, communicate those policies to all internal users. In particular, you need to describe which tools they’re allowed to use, the purposes they may serve, and the data they can use.

These policies should be as specific as possible about the authorized uses of generative AI, detailing use cases across all lines of business and explaining how the policies affect each one.

In addition, communicate policies as soon as they’re developed, and frequently thereafter. Organizations often fall short of this requirement, as a study by Asana shows: 31% of executives reported that their company had AI use policies, while only 18% of individual contributors made the same claim.

These results suggest that fewer than one-third of organizations have AI policies, and that even where policies exist, employees rarely receive information or guidance about them. Executives should therefore communicate their company’s AI policies more often than they think necessary.

3. Reinforce acceptable use through training

Informing people about your organization’s AI policies doesn’t mean they automatically know how to follow them. Fighting shadow AI also requires hands-on training and self-help resources that show users how to get the most from AI tools while protecting sensitive data.

Training can take a variety of forms, including self-paced modules, webinars, and workshops. Educational resources also need to be easily accessible across the organization.

Summary

Managing access, communicating and enforcing policies, and educating employees are all important steps to take in preventing shadow AI.

However, organizations will need to adopt broader strategies as the use of generative AI becomes increasingly common. These measures could include developing new tools or customizing existing ones to better address shadow AI.

A recent survey by Dell shows the value of these efforts amid the rapid rise of AI: three out of four IT leaders responded that generative AI will play a significant role in the future transformation of their organizations.

Furthermore, two-thirds of the respondents expected those changes to occur within the next year.

Pipefy keeps your organization’s sensitive data and PII safe while you build automated workflows for greater efficiency. We protect your data by following information security best practices and maintaining a secure environment.

We also maintain a dedicated security team, provide constant training, and conduct internal and external audits at regular intervals. In addition, we hold many certifications and compliance attestations; find out more by contacting our security team.
