This article contains information on the transformative impact of Generative AI tools like ChatGPT, their associated business risks, and best practices for mitigating these risks, including policies, training, and tools like Incydr for safeguarding sensitive data.
Overview
Generative Artificial Intelligence (GenAI) tools, such as Gemini, ChatGPT, DALL-E, Make-a-Video, Midjourney, and Stable Diffusion, are transforming organizational workflows. As technology evolves rapidly, it is crucial to understand how to safeguard your organization. This course provides a comprehensive overview of the business risks associated with the use of GenAI tools, and outlines best practices for companies to effectively mitigate these risks.
This course was largely created and improved using Articulate AI Assistant (and reviewed by humans). Articulate and Articulate AI Assistant were fully vetted by the Code42 (acquired by Mimecast) security team.
Course Objective
By the end of this course, you will be able to:
- Understand the transformative impact of GenAI tools on organizational workflows.
- Recognize the importance of comprehending and mitigating the business risks linked to GenAI tools.
- Know how to implement best practices to effectively mitigate the risks associated with the use of GenAI tools.
Audience
This course is intended for all security practitioners.
What To Expect
This course will show you how Generative AI tools like ChatGPT, DALL-E, and others are transforming organizational workflows and the associated risks. It also covers best practices for risk mitigation and effective use of these tools.
Prerequisites
- You are an Incydr Administrator or Security Practitioner, with beginner to intermediate experience level.
- You are familiar with Incydr.
Introduction
https://openai.com/blog/chatgpt
In November 2022, OpenAI launched ChatGPT, its latest artificial intelligence (AI) product, which quickly gained immense popularity. Within just two months of its release, ChatGPT amassed over 100 million users, making it “the fastest-growing consumer application in history.” ChatGPT is part of a broader trend, as other companies are also developing generative AI (GenAI) tools. These tools not only generate text similar to ChatGPT (commonly known as chatbots) but also create images and even videos.
GenAI tools operate by taking an input known as a "prompt," which can be either text or a file, and then generating content based on that input.
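For illustration, here is a minimal sketch of that prompt-in, content-out flow using OpenAI's Python SDK. The model name and environment setup are assumptions; adapt them to your own account and sanctioned tooling.

```python
# Minimal sketch: sending a text prompt to a GenAI tool via OpenAI's Python SDK.
# Assumes the `openai` package (v1+) is installed and the OPENAI_API_KEY
# environment variable is set; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize the risks of shadow IT in two sentences."}
    ],
)

# The generated content comes back in the first choice of the response.
print(response.choices[0].message.content)
```

Everything sent in that `messages` list is the prompt; as the following sections discuss, whatever goes into it may be retained or used for training depending on the tool's terms.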
Other GenAI Tools
ChatGPT is one GenAI tool, but there are many others. To name a few:
- Google's Gemini (formerly known as Bard): Gemini is designed to handle complex language understanding tasks with high accuracy.
- Anthropic's Claude: Claude focuses on providing safe and reliable AI interactions, emphasizing ethical considerations.
- Blackbox AI: Blackbox is a coding-focused assistant known for generating code and human-like text and for handling intricate contexts.
Before We Begin
While any new technology presents inherent risks, managing the risks associated with Generative AI tools is similar to established risk management techniques used for other technologies, such as file synchronization, browser uploads, and shadow IT. Companies may discover that they already possess the necessary people, processes, and technology but need to refine them to encompass this new category of technology.
However, companies should not assume that their existing policies adequately address this technology; instead, they must be intentional in defining the people, processes, and technology involved in mitigating risks associated with GenAI tools.
Let's get started.
The tools and services referenced in this course are intended solely for educational and illustrative purposes within the GenAI marketplace. Their inclusion or exclusion does not imply any endorsement or recommendation by Code42 (acquired by Mimecast).
This course will primarily focus on ChatGPT, as it is one of the most widely used GenAI tools available today. However, the principles of utilizing GenAI tools in an organization are applicable regardless of the specific tool employed.
Tools and Risk
What is ChatGPT?
ChatGPT is an advanced artificial intelligence chatbot designed for engaging in conversational interactions. It processes text prompts, images, or a combination of both from users, enabling it to respond not only to initial inquiries but also to follow-up questions, challenge incorrect assumptions, and decline inappropriate requests, though it is not infallible. Common applications include idea generation, writing assistance, data summarization, and many more.
For instance, one example on their website demonstrates the prompt: "What can I make with these ingredients?" accompanied by a picture of eggs, flour, and milk. The outcome? Pancakes, waffles, French toast, and many other delicious options. Check it out (click the "Visual input" tab).
AI generated image. Prompt: eggs, flour and milk
In today's world, where technology is advancing at an unprecedented pace, it is crucial to understand why not everyone is eager to adopt these innovations. As with any technological development, it is important to carefully evaluate the potential risks alongside the benefits. This evaluation ensures that all users can make informed decisions regarding the use of technology.
The Risks of AI Tools
What Risk Does ChatGPT Present to Businesses?
ChatGPT poses significant risks to businesses in four key areas: data loss through prompts, intellectual property infringement, lack of output uniqueness, and inaccurate or biased output.
Many Generative AI (GenAI) tools, including ChatGPT, are designed to accept a manual "prompt" from users. Depending on the specific tool, this prompt can take various forms, including manually entered text, copy/paste, text files, images, videos, or a combination of these. Based on the provided prompt, the GenAI tool performs actions such as generating text or images. However, a significant risk arises from the fact that GenAI tools continuously learn from previous interactions.
Upon the launch of ChatGPT, the terms of service allowed the use of all conversations for training purposes. Consequently, when users input sensitive or classified information, such as personally identifiable information (PII) or company financial data, that information became accessible for training ChatGPT. In response to the evolving landscape of GenAI development, OpenAI has introduced additional privacy options. As of April 2023, ChatGPT now offers an option to disable chat histories, enabling users to opt out of having their conversations used for future training.
Intellectual property infringement poses significant challenges for both end users and companies when the output of the GenAI tool infringes upon the intellectual property rights associated with the sourced data. For instance, Stability AI, the developer of Stable Diffusion, is currently facing a lawsuit from Getty Images due to the use of Getty's images as source material without compensation, resulting in the tool generating images that display the Getty Images watermark. At its core, ChatGPT is trained on extensive datasets of text and other materials sourced from the internet, although the precise origins of these training materials remain ambiguous.
While asking about the color of the sky may not pose significant issues for users, the implications are more serious for those engaging with ChatGPT to develop a new marketing strategy, craft a public speech, or design a logo for their organization. The potential for duplicated content can undermine the uniqueness and competitive edge of a business's output, leading to potential reputational and operational risks.
Moreover, GenAI tools can be inaccurate and may produce biased output. The algorithms behind these tools are ultimately designed by humans, who can inadvertently introduce their own conscious and unconscious biases, and the extensive training data can itself lead to biased results. Depending on how this output is utilized, it could result in operational and reputational harm to a company. Publishing inaccurate output can also expose the organization to legal complications.
Safeguarding
In light of the inherent risks, how can organizations effectively safeguard themselves while remaining competitive?
Organizations can mitigate the risk of misuse of GenAI tools by employees, vendors, and contractors by implementing a comprehensive strategy that includes people, processes, technology, and training. This approach not only reduces risks but also enhances productivity and fosters a collaborative culture.
Before exploring mitigation techniques, we will analyze real-world examples of organizations that have successfully utilized and misused GenAI tools, along with the strategies they are employing to address these challenges.
Real-World Examples
Warnings to Employees
AI generated image. Prompt: inner office memo
Early in 2023, Microsoft, Amazon, and Walmart issued memos to their employees about the use of chatbots like ChatGPT. These organizations emphasized that while it is acceptable to use generative AI tools, employees must be cautious about the sensitivity of the data they input. The memos highlighted the importance of protecting confidential and proprietary information.
Employees were advised to avoid entering sensitive business information into these tools to prevent potential data breaches. The warnings serve as a reminder of the risks associated with using generative AI in a professional setting.
Marketing and Customer Support
AI generated image. Prompt: marketing
Many companies are leveraging generative AI to enhance their marketing and customer support efforts. Organizations like Snapchat, Instacart, Hot Wheels, and CarMax are either planning to use or are already using AI to improve customer interactions and marketing campaigns. For example, Coca-Cola utilized Stable Diffusion to inspire and design elements of their 'Masterpiece' advertisement.
These AI tools help companies create more personalized and engaging content for their customers. By automating certain tasks, businesses can focus on more strategic initiatives and improve overall efficiency.
Copyright Law
AI generated image. Prompt: copyright law
Copyright law presents challenges for AI-generated content. A graphic novelist recently lost copyright protection for AI-generated artwork in her novel because U.S. Copyright law only grants protection to human-created works. The law states that copyright protects 'the fruits of intellectual labor' that are founded in the creative powers of the mind.
In contrast, some countries like India, Ireland, New Zealand, and the U.K. grant copyright to the programmer who created the AI tool. This approach recognizes the effort involved in developing a program capable of generating creative works, even if the actual creation is done by the machine.
Sensitive Data
AI generated image. Prompt: data breach
Generative AI tools can pose risks when handling sensitive data. Organizations must be mindful of the data they input into these tools and the potential consequences.
OpenAI's terms of use (effective January 31, 2024) state that both input and output data may be used to improve their services, raising concerns about data privacy and security. OpenAI does allow you to ask them not to train on your content via their Privacy Request Portal, but it's an opt-out rather than an opt-in.
Incidents at Samsung highlighted these risks when employees entered confidential information into ChatGPT. Such actions can lead to data breaches and violate non-disclosure agreements and regulations like HIPAA. Companies need to establish clear guidelines for using AI tools to protect sensitive information.
Personal Data and Privacy
AI generated image. Prompt: personal data and privacy
Privacy concerns have led to investigations and bans of ChatGPT in several countries. Italy and Canada have initiated investigations, with Germany and the EU potentially following suit. These actions reflect growing concerns about the misuse of personal data by AI systems.
In the U.S., the executive branch is pushing for a Blueprint for an AI Bill of Rights to protect individuals from the misuse of AI. This initiative aims to establish guidelines and regulations to ensure the ethical use of AI technologies and safeguard personal data.
People
AI generated image. Prompt: serious meeting with four people
Effective risk management begins with the individuals responsible for identifying what constitutes a risk to the business. It involves prioritizing risks, determining appropriate mitigation strategies, and establishing plans for response and recovery. By focusing on these key areas, organizations can better navigate potential challenges and safeguard their operations.
Who's involved?
While every organization has its unique characteristics, the following departments and roles are typically involved in the risk management process for new technology:
Human Resources (HR) plays a crucial role in the risk management process for new technology. They are responsible for reinforcing company culture and its relationship to Generative AI (GenAI) tools. HR also leads or co-leads security education and awareness training, ensuring employees understand the risks and proper use of these tools.
Additionally, HR is in charge of creating and maintaining employee policies, such as acceptable use policies and privacy policies. These policies help guide employees on the appropriate use of GenAI tools and protect the organization from potential risks associated with their misuse.
Information Technology (IT) is essential in managing the identities and access to various hardware, software, applications, and other tools available to employees. IT creates and manages a vendor procurement and approval process for GenAI tools, ensuring that only approved tools are used within the organization.
IT is also responsible for managing access for employees to approved systems and services, from initial request to termination. They must monitor the usage of open-source GenAI tools, such as Stable Diffusion, to ensure compliance with company policies and security standards.
Security's workflow for GenAI tools often mirrors existing workflows for data input and output. They must ensure that data uploaded to or downloaded from GenAI tools complies with company policies and security protocols. Security teams should have a playbook for handling instances of Shadow IT, as new GenAI tools are constantly emerging.
The investigation process for security incidents involving GenAI tools should align with existing workflows. For example, if a user uploads a file to a corporate ChatGPT account, security should be prepared to review chat history and determine whether any breaches occurred. Policies should be in place to address security's jurisdiction over personal accounts when necessary.
Legal is involved in the vendor procurement process alongside Security and IT. They work with vendors to confirm approved data handling practices, including Data Processing Addendums for global privacy and security compliance. Legal ensures that data protections are in place for both data egress and ingress.
In cases where GenAI tools are open-source and lack a vendor, Legal must determine how to protect the organization from potential risks. They also collaborate with Security and HR to establish legal requirements for investigations involving corporate and personal GenAI tool accounts.
Process
AI generated image. Prompt: two people looking at a flowchart
What Should be in Place?
Proactively addressing potential events is crucial for effective risk management. The optimal approach is to identify possible risk events in advance and establish comprehensive policies and procedures aimed at preventing, detecting, and responding to incidents. While it is impossible to prepare for every conceivable scenario, having well-defined policies and procedures, along with regular practice, enables organizations to adapt to most situations effectively.
Key questions to consider when defining policies and procedures for GenAI tools include:
- Who has authorized the use of the tool, and which accounts are permitted access?
- What are the expected behaviors while utilizing the tool?
- What is the vendor approval process for GenAI tools?
- What procedures are in place for investigations?
Addressing these questions requires a collaborative team effort.
Don't forget to start by reviewing any existing policies and procedures. For example, if a vendor procurement process already exists, does it need to change for GenAI tools? It might be ready as is.
Policies
Establishing clear policies regarding permitted use and privacy expectations for company resources is essential in mitigating risks, including those associated with GenAI tools. While existing policies may implicitly address GenAI tool usage, it is crucial for companies to intentionally articulate their stance on these tools.
Below are example policies accompanied by key questions to consider when developing or revising policies to incorporate GenAI tool usage.
Sample language and templates to help you revise existing policies effectively can be found in the Additional Resources section.
An Acceptable Use Policy (AUP) guides how employees can use company resources, specifying who owns these resources and helping to prevent data loss. It also stipulates how email communications should be conducted and addresses the increasing use of Bring Your Own Device (BYOD) for business.
Key considerations for AUPs include who is allowed to use the tool, any constraints on its use, and how access can be requested and monitored. Additionally, it should define what data can be used as input to the tool and how GenAI tool content must be cited.
Important aspects of an Employee Privacy Policy (EPP) include what employee data can be used within a GenAI tool, whether employees can opt in or out of usage, and whether the existing language already covers GenAI tool vendor use. These considerations help ensure that employee privacy is respected while leveraging GenAI tools.
Customer contracts often start with boilerplate content, and the company's stance on GenAI tools should be included as a clause. This ensures that both the company and the customer are clear on how customer information can be used with GenAI tools.
Other considerations include whether customers can receive data created or improved upon by GenAI tools, and if customer data can be used as input for a GenAI tool. Consulting with legal counsel is essential to confirm appropriate language is included in these contracts.
A vendor approval process is essential for managing the risks associated with using GenAI tools from third-party vendors. This process should include evaluating the vendor's security measures, compliance with regulations, and overall reliability.
Key questions to address include what the vendor approval process entails, who is responsible for approving vendors, and how ongoing vendor performance will be monitored. Establishing a robust vendor approval process helps mitigate risks and ensures that only trusted vendors are used.
Technology
View data movement to Generative AI tools with Incydr
Stay ahead of potential data leaks by monitoring files shared with Generative AI tools. Incydr easily spots risks in corporate file uploads and copy/paste to AI tools with dedicated risk indicators. You can quickly be alerted to these potential data leaks and respond immediately.
- Get the AI Cheat Sheet for Protecting Data.
Incydr is built to protect your data from Generative AI tools
What makes Incydr stand out?
- Visibility into cloud and endpoint exfiltration in one place, including Git push/pull activity, Salesforce downloads, AirDrop transfers, and cloud syncs.
- Correct users when data is shared inappropriately, with integrated security lessons that prevent risky activity from becoming the norm.
- Validate actual file contents to know for sure how sensitive the data is.
- Implement real-time blocking for high-risk employees who work closely with intellectual property.
Incydr Settings Specific to Generative AI Tools
Destination risk indicators | AI Tools
Destination risk indicators apply to file events based on where a file is moved or uploaded.
As of May 2, 2024, file uploads to common artificial intelligence (AI) tools now have dedicated risk indicators, making it easier to identify when corporate data is exfiltrated to an AI tool. The new risk indicators cover 14 common AI tools, including ChatGPT, Claude, and Gemini.
As of July 31, 2024, AI Tools risk indicators are now also applied when users paste data to common artificial intelligence (AI) tools. This update makes it easier to identify when corporate data is exfiltrated to an AI tool.
For a complete list of risk indicators and associated risk scores, sign in to the Incydr console and select Risk Settings | Destination risk indicators | AI Tools.
Monitor ChatGPT desktop app (macOS)
Insider Risk Agent version 2.1.0 adds support for detecting files exfiltrated to the macOS ChatGPT desktop app. (OpenAI has not released a Windows desktop app yet.) Support for the macOS app enhances Incydr's existing ChatGPT exfiltration detection via web browsers.
Code42 for GenAI
Announcing Expanded Protections for Critical Data.
Other Incydr Features to Consider
Add any enterprise GenAI tool domains to Incydr's list of trusted activity to filter out the "noise" of sanctioned interactions with GenAI tools and highlight only the signal of unsanctioned activity.
Forensic Search allows analysts to query for event data based on specific insider risk indicators (IRIs) (see above for GenAI-specific IRIs) and other metadata pertaining to each event. Example search parameters include Untrusted domains or Untrusted URL paths, e.g., openai.com or bard.google.com.
Configure Incydr preventative control settings to automatically block a user from uploading a file through a browser or pasting content into untrusted domains, such as openai.com or bard.google.com.
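For teams that export Forensic Search results for offline triage, the sketch below shows one way to flag events destined for untrusted AI-tool domains. This is an illustration only, not an Incydr API: the CSV column names ("destination_domain", "user", "file_name") and the domain lists are hypothetical placeholders; map them to the fields in your actual export and to your own trusted-activity list.

```python
# Illustrative sketch: triaging a Forensic Search CSV export for GenAI
# destinations outside the trusted-activity list. Column names and domains
# below are hypothetical placeholders, not Incydr-defined fields.
import csv

AI_TOOL_DOMAINS = {"openai.com", "chat.openai.com", "bard.google.com", "claude.ai"}
TRUSTED_DOMAINS = {"chat.corp-tenant.example"}  # hypothetical sanctioned enterprise tenant


def untrusted_ai_events(export_path: str):
    """Yield rows whose destination is a known AI-tool domain not marked trusted."""
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("destination_domain", "").lower()
            if domain in AI_TOOL_DOMAINS and domain not in TRUSTED_DOMAINS:
                yield row


# Print a quick triage list: who sent which file to which untrusted AI tool.
for event in untrusted_ai_events("forensic_search_export.csv"):
    print(event["user"], event["file_name"], event["destination_domain"])
```

The same trusted-versus-untrusted split mirrors the trusted-activity configuration described above: sanctioned enterprise tenants drop out of the results, leaving only unsanctioned activity to investigate.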
Security Education
Use Incydr Instructor to send lessons manually, or automatically when triggered by an alert.
Instructor Video Specific to Generative AI Tools
Incydr Instructor is specifically designed for adult learning to guide employees and help companies prevent and respond to risk events. Instructor's proactive and situational videos are designed to be given before any event occurs (such as annual training or when a role change occurs), while responsive videos can be triggered to send after certain risk criteria have been met.
In addition to policies and procedures, the best way to prevent and respond to the risks from GenAI tools is through education and awareness.
(To view any videos mentioned below, navigate to the Instructor page in your console or reach out to your CSM for more information.)
Responsive
Responsive lessons provide just-in-time training as soon as a user makes a mistake. These lessons are non-accusatory and personable, which allows users to learn from their mistakes and build a positive relationship with the security team.
Unapproved AI Tool:
- Educates users about the risks of supplying sensitive data to AI tools that are not approved for company use.
- Like all lessons, it can be sent manually to users, or sent automatically in response to behavior that triggers an alert.
Summary
Generative Artificial Intelligence (GenAI) tools like ChatGPT are rapidly advancing, and organizations need to prepare for their integration. However, with holistic risk mitigation, the use of GenAI tools can be managed effectively without hindering progress. Here are some key takeaways:
- Prepare for GenAI integration. Organizations should anticipate the increasing capabilities and speed of GenAI tools like ChatGPT.
- Holistic risk mitigation is key. By incorporating people, process, and technology (such as Incydr and Incydr Instructor), organizations can effectively manage the risks associated with GenAI tool usage.
- Don't sacrifice future capabilities. With proper risk mitigation, organizations can protect their data without impeding the potential benefits of GenAI tools.
- Stay ahead of the competition. Organizations that embrace GenAI tools and implement effective risk management strategies can maintain a competitive edge in the evolving technological landscape.
Embrace GenAI tools with a balanced approach to risk and innovation for sustained success.
Knowledge Check
Question One: Which of the following are significant risks associated with using GenAI tools in business settings? Please select all options that apply.
- Improved data security.
- Increased employee productivity.
- Enhanced customer engagement.
- Lack of output uniqueness.
- Intellectual Property (IP) loss/leak.
The answers are 4 and 5.
Answer 4 is correct because ChatGPT may generate similar outputs for different users, which can undermine business uniqueness. Answer 5 is correct because ChatGPT can potentially expose sensitive information input by users.
Question Two: Why should organizations prepare for the integration of GenAI tools?
- To anticipate the increasing capabilities and speed of these tools.
- To replace all existing technologies with GenAI tools.
- To eliminate the need for human employees.
- To reduce the costs of technological investments.
The answer is 1.
This answer correctly identifies the reason for preparation, which is to keep up with the advancements in GenAI tools.
Question Three: Which of the following are best practices for companies to effectively mitigate the risks associated with GenAI tools? Please select all options that apply.
- Relying solely on GenAI tools for decision-making.
- Implementing robust data privacy policies.
- Using GenAI tools without any restrictions.
- Conducting regular security audits.
- Ignoring updates and patches for GenAI tools.
The answers are 2 and 4.
Robust data privacy policies help protect sensitive information from being misused or exposed. Regular security audits help identify and address potential vulnerabilities in the use of GenAI tools.
Question Four: Which new Incydr features can detect possible exfiltration via AI tools?
- Risk indicators for file uploads to common AI tools.
- Trust recommendations based on your company name.
- Risk indicators for pasting data to common AI tools.
- Coverage of all AI tools available.
- Monitoring of the ChatGPT desktop app.
The answers are 1, 3, and 5.
The new features specifically apply to file uploads and pasting data to common AI tools, and monitoring of the ChatGPT desktop app.
Question Five: Why is proactively addressing potential events, such as with Incydr Instructor, crucial for effective risk management?
- It ensures that every conceivable scenario is prepared for.
- It eliminates the need for regular practice and updates.
- It focuses solely on responding to incidents after they occur.
- It helps in preventing, detecting, and responding to incidents effectively.
The answer is 4.
Proactive risk management involves identifying potential risks and establishing measures to handle them, which helps in preventing, detecting, and responding to incidents.
Additional Resources
Acceptable Use Policy Snippets for Generative Artificial Intelligence (GAI) Tools
What Others Are Saying
- White House, OSTP: Blueprint for an AI Bill of Rights
- World Economic Forum: Why we need to care about responsible AI in the age of the algorithm
- European Commission: Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts
- Holistic AI: What is the EU AI Act?
- European Commission: White Paper on Artificial Intelligence: a European approach to excellence and trust
The Inside Story of ChatGPT's Astonishing Potential | Greg Brockman | TED
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world.
Leading AI expert Stuart J. Russell explains why putting guardrails in place is imperative right now (LinkedIn video).
Learn more about preparing for AI.
Getting Started with Incydr
General Resources
Questions or Comments?
Reach out to your Customer Success Manager (CSM).