Proactive Strategies to Maximise the AI-IP Intersect
This is the third and final article in a three-part series in which koralli unravels the complex relationship between artificial intelligence (AI) and intellectual property (IP), exploring what it means for you and your business. We’ll guide you through the various intersections between these two domains, building a comprehensive understanding of the challenges and opportunities that lie ahead. By the end, you’ll be equipped with six actionable strategies to minimise the risks associated with Generative AI and optimise your IP assets in this rapidly evolving landscape.
Photo by Google DeepMind from Pexels
Having explored the advantages of safeguarding your IP assets in Artificial Intelligence & Intellectual Property, and examined the areas of possible infringement liability in How Are IP Protections Changing in a Global AI Landscape?, it’s clear that businesses must carefully manage how they integrate AI into their operations.
Advice for Businesses: How to Protect Your IP
While concrete regulation and legal precedent remain distant, and AI companies continue to defer key risk areas such as transparency auditing and proper reimbursement of IP holders (for example, through licensing or revenue-sharing models), business owners can feel left to navigate a nebulous, uncertain space.
But there’s good news: this isn’t entirely the case. There are several effective ways to secure and leverage your business’s IP assets, and we’ve outlined six of them below. Above all, our primary advice is simple: stay informed.
With updates and developments emerging almost daily, taking proactive steps to keep abreast of best practices and model advancements is crucial to rising above the flurry of businesses scrambling to remain competitive.
1. Upskill and Educate
Navigating the complexities of AI and IP requires an integrated strategy that spans all departments and levels of seniority. As AI becomes increasingly embedded in daily operations, it’s likely that your employees are already using AI tools in their routines—whether for tasks as simple as spell checking or more advanced applications like generating email templates and taking meeting minutes. These tools can significantly enhance productivity, streamlining routine tasks and allowing your team to focus on higher-level work.
Photo by cottonbro studio from Pexels
Businesses should begin by conducting a comprehensive audit of all AI software in use across the organisation. Taking stock of which platforms are being employed and where they are integrated into your operations is a critical first step in identifying potential risks to your business and its IP, including data leakage and theft.
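To illustrate one way to start, the sketch below cross-references an exported software inventory against a watch-list of known AI tools. The file name, CSV columns, and tool list are hypothetical placeholders for whatever your IT systems actually record.

```python
import csv

# Hypothetical watch-list of AI platforms to flag for review;
# extend this with the tools relevant to your organisation.
KNOWN_AI_TOOLS = {"chatgpt", "claude", "copilot", "gemini", "midjourney"}

def flag_ai_software(inventory_csv: str) -> list[dict]:
    """Return inventory rows whose software name matches a known AI tool."""
    flagged = []
    with open(inventory_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumed columns: software, department
            name = row["software"].strip().lower()
            if any(tool in name for tool in KNOWN_AI_TOOLS):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_ai_software("software_inventory.csv"):
        print(f"Review: {row['software']} (used by {row['department']})")
```

Even a rough list like this gives you a starting point for deciding which tools to sanction, restrict, or replace.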
One early precaution is to minimise the personally identifiable information (PII) and sensitive commercial data, such as names, addresses, or bank details, shared with AI tools. For example, when uploading attachments to platforms like Claude, ensure they do not inadvertently include sensitive information, such as an email signature.
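As a small illustration, here is a minimal sketch of pre-submission redaction in Python. The patterns catch only simple cases (email addresses, phone numbers, card-like digit runs) and are illustrative assumptions; names and addresses would need a proper data-loss-prevention or entity-recognition tool.

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated DLP tool.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Regards, Jo Bloggs | jo.bloggs@example.com | +44 7700 900123"))
# -> "Regards, Jo Bloggs | [EMAIL] | [PHONE]"  (note: the name passes through)
```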
Precautions like these should be communicated at all levels of your organisation to ensure that employees are aware of the risks associated with using generative AI (GAI) tools. Consider circulating safe-practice guidelines and implementing training programs tailored to staff at all levels of seniority.
2. Audit and Monitor Your IP Assets
Regularly auditing your business’s IP assets is the first step to securing them. Begin by cataloguing all patents, trademarks, copyrighted materials, and trade secrets. This proactive approach not only helps in managing your existing assets but also in identifying any gaps or potential vulnerabilities.

An intriguing opportunity for IP holders and content creators is the possibility of building their own GAI models using their proprietary datasets. These models can be developed on top of existing, lawfully trained open-source GAI, allowing businesses to leverage AI while maintaining control over their IP.
In addition to audits, businesses should actively monitor their compiled datasets or data lakes—such as those containing images, logos, tables, and tags—for potential IP infringement. Time-saving tools like IP watch services can assist in this ongoing task (Appel, Neelbauer, and Schweidel 2023; Sweetenham 2023). Although still in early development and not entirely foolproof, tools like Glaze, which offer to ‘cloak’ artists’ data, provide some protection against unauthorised use by AI platforms (Lomas 2023).
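At the DIY end of this spectrum, a perceptual-hash comparison can flag likely reuse of your visual assets even after resizing or re-encoding. Below is a minimal sketch using the open-source Pillow and imagehash libraries; the file paths and distance threshold are illustrative assumptions.

```python
# pip install Pillow imagehash
from PIL import Image
import imagehash

def looks_like_our_asset(asset_path: str, found_path: str,
                         max_distance: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance suggests a match
    even after resizing or re-encoding. The threshold is an illustrative guess."""
    asset_hash = imagehash.phash(Image.open(asset_path))
    found_hash = imagehash.phash(Image.open(found_path))
    return asset_hash - found_hash <= max_distance

if __name__ == "__main__":
    # Hypothetical paths: your logo vs. an image found on the web.
    if looks_like_our_asset("assets/logo.png", "downloads/suspect.jpg"):
        print("Possible reuse of a protected asset; review manually.")
```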
Photo by cottonbro studio from Pexels
3. Limit the Data Accessible to AI Platforms
Using a VPN is an inexpensive and effective way to protect your internet connection and online privacy. VPNs work by creating an encrypted tunnel for your data, hiding your IP address, and thus your online identity. This helps obscure much of your online data from AI platforms that might otherwise ingest it for analysis.
Some platforms offer commercial alternatives to their mainstream tools that are generally considered more secure. For instance, OpenAI offers an API as an alternative to the consumer ChatGPT product; by default, data submitted through the API is not used to improve its models (Ioet 2023). Additionally, many companies are increasingly offering ‘opt-out’ options in their terms of service to enhance user privacy.
In the case of OpenAI, as of October 2023, you can submit a Privacy Request through their website, which removes their ability to view or analyse your inputs and prompts (OpenAI 2024). Leveraging such privacy tools and options is highly advisable to minimise the data available to AI platforms.
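For teams moving from the consumer chat interface to the API, a minimal call looks like the sketch below, using OpenAI’s official Python SDK. The model name is an assumption; check current model availability and OpenAI’s current data-usage terms rather than treating this as definitive.

```python
# pip install openai  (requires the OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; pick a model you are licensed to use
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarise this paragraph: ..."},
    ],
)
print(response.choices[0].message.content)
```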
4. Write Protections into Contracts
When you or your clients use GAI, it’s essential to include specific disclosures in vendor and customer agreements, particularly for custom services and product deliveries (Appel, Neelbauer, and Schweidel 2023). This ensures that all IP rights are clearly defined, transparent, and protected. These agreements should also specify any restrictions on the text prompts or input data used in AI generation.
Additionally, consider incorporating AI transparency policies into staff agreements. Clearly outline the requirement for employees to document and log all AI tools they use. This will provide your business with valuable insights into the extent of AI usage, the variety of platforms in operation, and employee behaviour. Understanding where AI tools have and have not permeated workplace routines is crucial for building a long-term strategy, especially if your company plans to implement measures such as restricting platform access.
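One lightweight way to operationalise such a policy is a shared, structured usage log. The sketch below appends JSON-lines records; the fields and file location are illustrative suggestions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "ai_usage_log.jsonl"  # hypothetical shared location

def log_ai_usage(employee: str, tool: str, purpose: str,
                 data_shared: str) -> None:
    """Append one structured record per AI interaction for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee": employee,
        "tool": tool,
        "purpose": purpose,
        "data_shared": data_shared,  # e.g. "none", "internal draft", "PII"
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_usage("j.bloggs", "Claude", "meeting minutes", "internal draft")
```

A log in this shape can later be aggregated by tool or department to answer exactly the questions above: which platforms are in use, by whom, and with what kinds of data.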
5. Find Task-Specific Models to Protect Your Business
Custom, task-oriented LLMs are increasingly being developed and made available on websites hosting open-source AI models. For example, ShieldLM is a free, open-source safety-detector model designed to help identify safety issues in LLM-generated content. Its authors claim it can flag biased content, harmful or illegal material, and any generation that violates privacy (thu-coai 2024). This is just one example from the vast catalogue of models available on platforms like GitHub that businesses can explore and employ to address specific needs.
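As a rough sketch of how such a model might be loaded with the Hugging Face transformers library: the checkpoint name and prompt format below are assumptions, so consult the ShieldLM README on GitHub for the released checkpoints and the exact prompt template before relying on it.

```python
# pip install transformers torch
# The checkpoint ID and prompt below are assumptions; see the ShieldLM
# README on GitHub for released checkpoints and the required prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thu-coai/ShieldLM-7B-internlm2"  # illustrative checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Is the following response safe? Response: ..."  # simplified prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```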
6. Avoid Hallucinations
A common approach to GAI treats the model as a database or store of knowledge. Treating models this way increases the risk of incorporating false, misplaced, or ‘hallucinated’ information into your work. Instead, use AI generation tools for tasks that play to their strengths, such as reasoning, judgement, and contextualising information (Garg 2024). Focus on leveraging the language-processing capabilities these models acquire in training, while sourcing factual information independently.
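In practice, this means supplying the model with facts you have verified independently and asking it to reason over only those, rather than asking it to recall facts itself. A minimal sketch, reusing the illustrative OpenAI client from strategy 3:

```python
from openai import OpenAI

client = OpenAI()

# Facts sourced and verified independently of the model (illustrative).
verified_facts = """\
- Q3 revenue: GBP 1.2m (from the audited accounts)
- Headcount: 14 (from the HR system)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the facts provided. "
                    "If the facts are insufficient, say so."},
        {"role": "user",
         "content": f"Facts:\n{verified_facts}\n\nDraft a one-paragraph "
                    "summary of the quarter for the board."},
    ],
)
print(response.choices[0].message.content)
```

Constraining the model to supplied facts turns it into a drafting and reasoning aid rather than an unreliable reference source.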
Photo by Google DeepMind from Pexels
With these six steps, koralli hopes to have provided some structure to this evolving landscape, helping you feel confident that your current and future AI applications do not inadvertently infringe upon the intricate web of IP they may interact with.
Navigating IP in an AI Landscape
For now, we will have to await the outcomes of several pending lawsuits to see how courts and governments respond to the nuanced questions arising from the AI-IP intersect. Regardless of those outcomes, it’s clear this is just the tip of the iceberg.
Links to articles 1 and 2 in the series:
Sources [Accessed August 12, 2024]:
Appel, Gil, Juliana Neelbauer, and David A. Schweidel. 2023. “Generative AI Has an Intellectual Property Problem.” Harvard Business Review. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.
Garg, Ashu. 2024. “Ten AI Insights from Databricks, Anyscale, and Microsoft.” Foundation Capital. https://foundationcapital.com/ten-ai-insights-from-databricks-anyscale-and-microsoft/.
Ioet. 2023. “Avoid Exposing Sensitive Data to ChatGPT: Tips and Tricks for Safe AI Interaction.” LinkedIn. https://www.linkedin.com/pulse/avoid-exposing-sensitive-data-chatgpt-tips-tricks-safe-ai-interaction/.
Lomas, Natasha. 2023. “Glaze protects art from prying AIs: Generative art’s style mimicry interrupted.” TechCrunch. https://techcrunch.com/2023/03/17/glaze-generative-ai-art-style-mimicry-protection/.
OpenAI. 2024. “OpenAI Privacy Request Portal.” OpenAI Privacy Center. https://privacy.openai.com/policies.
Sweetenham, Ellie. 2023. “Intellectual Property for Start-ups: Key Considerations and Strategies for Early-Stage Businesses.” Lawdit Solicitors. https://lawdit.co.uk/readingroom/intellectual-property-for-start-ups.
thu-coai. 2024. “thu-coai/ShieldLM: ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors.” GitHub. https://github.com/thu-coai/ShieldLM.