Published On August 08, 2023
Every day, TV, radio, and online news reports have something to say about generative AI, its transformative potential, and the blooming controversy surrounding this nascent technology.
During the fourth webinar in Iron Mountain's 2023 Education Series, Generative AI and Information Governance: Friends or Foes?, our panel of experts discussed the privacy, security, ethical, and governance concerns surfacing from generative AI tools and what these implications mean for records and information management (RIM) and information governance (IG) professionals. The panel included Ryan Zilm, Global Data Strategist and Former Chairman of the Board, ARMA International; John Isaza, Esq., FAI, Partner at Rimon Law Group Inc.; Arlette Walls, Global Records and Information Manager for Iron Mountain; and moderator Brian O'Flynn, Global Director of Digital Marketing for Iron Mountain.
Guide it, don’t hide it
There’s no hiding it. Your employees have likely dabbled in generative AI, or soon will. So it’s a good idea to proactively build a generative AI policy to address potential issues within your organization. Such policies should piggyback on existing retention schedules and serve to educate employees on proper usage.
Two components are critical to a successful generative AI policy. The first should address the use of proprietary or sensitive information within AI tools. Specifically, information entered into a generative AI tool is used to train its language model and can become open to the public, so educate employees on the risks and legal ramifications that can arise from using private, proprietary data to prompt AI tools.
The second should remind employees of the importance of fact-checking, validating, and editing AI-generated content, as these tools commonly present outdated and nonfactual data that could also be biased or discriminatory. Furthermore, much of the information generated comes directly from online sources, so employees must keep in mind copyright infringement and the potential for plagiarism.
Consider establishing an organization-wide system to flag sensitive or inappropriate content. This will allow for better monitoring while also providing real-world examples of what not to do. To guide AI prompting practices, regularly examine AI outputs to monitor for accuracy and better understand the behaviors behind the algorithms.
Understand how your partners and vendors use AI
Organizations must not only think internally when it comes to generative AI policies but also have a holistic view of their external operations, most notably how vendors, contractors, and other suppliers are engaging with this technology.
Speak with your current vendors and contractors to understand the role that AI plays in their operations/processes and the impact it could have on your working relationship. Likewise, ensure you have full transparency into the AI policies of prospective partners and vendors before engaging with them.
If needed, consider creating or reworking vendor contracts to include guidance and expectations around the use of generative AI technologies. For example, language in these contracts should require vendors to disclose the types of AI tools they use to produce content for your organization and ensure they’re taking the proper precautions to protect your information.
Generative AI is evolving at a rapid pace, and the more data that becomes available for its use, the further it will progress. It's therefore crucial for RIM and IG professionals to stay on top of how it evolves and remain vigilant.
Many countries are starting to implement laws regarding AI. For instance, the European Union recently began setting rules on how companies can use AI, and Canada has introduced its own regulations; expect more countries to follow suit. Similarly, be aware of any industry-related regulations that may be in development. While none have been officially published, the growing use of AI suggests they may be on the horizon.
Generative AI is not going away. If anything, its use will continue to become commonplace. RIM and IG professionals have crossed similar paths before as they’ve worked to produce policies, procedures, and guardrails around social media, instant messaging, and any new technology that has come onto the scene.
With great power comes great responsibility, and this profession must wield it wisely. Generative AI will continue to have a significant impact on how content is created, consumed, and distributed, so it's crucial to ensure your organization's information stays protected and secure.
Interested in learning more about this topic and others in the information governance space? Visit Iron Mountain 2023 Education Series to watch the on-demand recording of Generative AI and Information Governance: Friends or Foes? and to register for upcoming webinars.