October 23, 2023

The risks of working with generative AI

Mark Pattie
Modern Work Practice Lead

In the first leg of our journey we explored AI transformation and how you could unlock the power of AI for your organisation. Our second leg takes us on a descent into the world of generative AI, as we explore what you can do to keep your organisation secure amid the wave of AI transformation.

The topic of AI is permeating almost every industry. The incoming wave of AI transformation promises major leaps in potential for productivity, creativity, and collaboration.

However, as with any new technology, it also comes with an array of potential risks that need to be addressed. Public generative AI tools, such as ChatGPT, can learn from user input to shape their output, which means they may inadvertently include proprietary data in generated responses. This could result in a range of adverse outcomes: jeopardising your organisation’s competitive advantage, compromising customer or corporate intellectual property, and violating confidentiality agreements.

Employees’ use of generative AI carries privacy, security, and regulatory implications for their organisations. Let’s be clear: AI is here to stay, and it’s well worth getting on board with. However, your organisation needs to consider the potential issues to make the most of the benefits while minimising risk. There are a few common concerns we’re hearing surrounding the use of AI. In this blog we’ve compiled these concerns, along with some potential mitigation strategies you could put in place to help alleviate them.

Concern: Policy, regulation, governance, and law

The rapid pace at which generative AI technologies are rolling out makes it near impossible for laws and policies to keep up. We’re operating in a legal grey area surrounding the use of AI – a grey area that prompts a lot of questions. For example, who’s liable if AI generates a false piece of information, or exposes sensitive data such as patient records in content generated by a public AI model? Or what happens if AI generates a piece of content that violates the copyright of existing work?

Governments, regulatory bodies, and legal professionals alike are attempting to tackle the fast-moving AI landscape – but when the technology changes every day, it’s a relentless task.

Mitigation strategy:

It’s always best practice to review your regulatory compliance obligations and to consider legal implications and protections to ensure you’re across the most up-to-date requirements for your organisation – but that doesn’t mean there’s nothing you can do on the frontline.

AI governance and policy making can’t be left solely to governments or other statutory bodies. Every organisation that utilises AI will need its own guardrails in place to regulate its use. The AI landscape is shifting rapidly, and any policies you put in place to address AI use won’t be set-and-forget. You’ll need regular, stringent review processes in place to protect your organisation.

Make sure you’re having open conversations with your employees about AI use within your organisation. This includes acceptable use of AI, what can and can’t be provided to large language models, and how private and public offerings differ (for example, Copilot versus ChatGPT). All of this must be done with an eye to protecting your sensitive corporate data (including that of your customers) as well as your corporate IP. Adopting AI may seem daunting for your organisation, but the productivity gains are well worth the risk trade-off that AI can offer.
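As a concrete illustration of the “what can and can’t be provided” guardrail, here’s a minimal sketch (in Python) of a prompt filter that screens for obviously sensitive patterns before anything reaches a public AI tool. The patterns, function names, and the send_to_public_llm() stub are all hypothetical placeholders, not a real API – a production control would more likely live in your data loss prevention layer than in application code.

import re

# Hypothetical acceptable-use patterns; real rules would come from your
# organisation's policy and DLP tooling, not a hard-coded dictionary.
BLOCKED_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credential assignment": re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]"),
}

def policy_violations(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def send_to_public_llm(prompt: str) -> None:
    print("Prompt sent.")  # placeholder for the real call to a public AI service

def submit_prompt(prompt: str) -> None:
    violations = policy_violations(prompt)
    if violations:
        # Block the request and tell the user which policy rules were hit.
        raise ValueError(f"Blocked by acceptable-use policy: {', '.join(violations)}")
    send_to_public_llm(prompt)

A simple filter like this won’t catch everything, and it’s no substitute for proper DLP controls – but it shows where a technical guardrail can back up the policy conversation you’re having with employees.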

Concern: Privacy and security

Concerns about the privacy and security risks arising from generative AI are some of the most pressing. The inherent nature of generative AI – it relies on inputted data to comprehend or generate content – means it has access to significant volumes of data, some of it containing personal or sensitive information. This raises concerns both about how this data is stored and used by AI software, and about who can gain access to it. Some companies have even banned certain AI tools because of privacy concerns and the risk of data leakage.

AI can be used to enhance cyber security efforts, but it can simultaneously pose security risks. Generative AI tools can be used for malicious purposes just as easily as well-intentioned ones. It’s easier than ever for hackers to use these tools to generate malicious code – we can’t forget that malicious actors are receiving all the same productivity benefits from generative AI that we are.

Mitigation strategy:

If you’re actively engaging with AI within your organisation, it’s more important than ever to continually review and strengthen your security posture. You also need robust data protection measures in place to prevent unauthorised access to, or misuse of, sensitive information. Regularly reviewing security, applying appropriate permissions, and deploying a solid information classification and protection solution are essential to protecting your corporate data.
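To make the classification point concrete, here’s a minimal sketch of what label-aware gating for AI workloads might look like, assuming your documents already carry sensitivity labels from an information protection platform. The label names and the Document type are illustrative assumptions, not a real product API.

from dataclasses import dataclass

# Hypothetical label names; in practice these would come from your
# information protection platform rather than a hard-coded set.
ALLOWED_FOR_AI = {"Public", "Internal"}

@dataclass
class Document:
    name: str
    sensitivity_label: str
    content: str

def ai_safe_context(documents: list[Document]) -> list[Document]:
    """Keep only documents whose sensitivity label permits use as AI context."""
    permitted = []
    for doc in documents:
        if doc.sensitivity_label in ALLOWED_FOR_AI:
            permitted.append(doc)
        else:
            # Log the exclusion so security reviews can verify the gate is working.
            print(f"Excluded '{doc.name}' (label: {doc.sensitivity_label})")
    return permitted

The design choice here is that classification happens once, at the data layer, and every AI workload then inherits the same gate – which is far easier to review and audit than per-application rules.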

Concern: Transparency and accountability

Generative AI models often operate as ‘black boxes’ – their decision-making processes are difficult to inspect or explain. This lack of transparency can erode trust in AI applications among employees, customers, and other stakeholders. Accountability is also a big question mark, tying into the related concerns surrounding privacy and security: if sensitive information is leaked because of an AI tool, who should be held accountable?

Mitigation strategy:

As AI continually shifts and changes, so do societal expectations, thoughts, and opinions around its use. During this transitional period, it’s critical for businesses to ensure the ethical, transparent, and responsible use of AI technologies. What this looks like may change day-to-day, so striving for transparency will be an ongoing process for organisations.

Concern: Mistakes and inaccurate information

The internet is already rife with misinformation, and generative AI is not immune to contributing to it. While most generative AI tools on the market are accurate much of the time, they’re not infallible. There have been reports of a phenomenon referred to as ‘AI hallucinations’, where tools such as ChatGPT have confidently stated entirely incorrect information.

Mitigation strategy:

Taking advantage of generative AI is great for the productivity benefits, but it’s important not to become over-reliant on these tools, or to take the information they give you as gospel. These tools can, and do, make mistakes. Information generated by AI still needs to be thoroughly vetted before you put it out to the market, to avoid spreading misinformation and dealing with the corresponding reputational damage.
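One lightweight way to make “vetting before publication” more than a policy statement is to build a human sign-off step into your content workflow. The sketch below is a hypothetical illustration of that gate; the field names and the review threshold are assumptions, not a prescribed process.

from dataclasses import dataclass, field

# A minimal human-in-the-loop publishing gate for AI-generated content.
# The structure and threshold below are illustrative assumptions.

@dataclass
class AIDraft:
    text: str
    reviewed_by: list[str] = field(default_factory=list)

REQUIRED_REVIEWERS = 1  # hypothetical policy: at least one human sign-off

def publish(draft: AIDraft) -> str:
    """Release AI-generated content only after the required human sign-offs."""
    if len(draft.reviewed_by) < REQUIRED_REVIEWERS:
        raise PermissionError(
            f"Needs {REQUIRED_REVIEWERS} human review(s); has {len(draft.reviewed_by)}."
        )
    return draft.text

However you implement it, the point is the same: AI-generated content should never reach the market without a named human having checked it first.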

Concern: Job displacement

There are also concerns that AI will become too efficient and end up eradicating certain jobs – or even entire industries. Routine and repetitive tasks have always been susceptible to automation, and the new wave of AI has furthered this. Preparing workforces for new skill requirements and identifying opportunities for human-AI collaboration is going to become essential in this new landscape. Jobs will start looking different as AI advances, and there will need to be rapid mindset shifts in most industries as these changes occur.

Mitigation strategy:

Linking back to the earlier concern surrounding AI hallucinations – at this point in time, there is still a very real need for human evaluation and vetting of AI-produced content. Generative AI can certainly help speed up certain tasks, but it’s not perfect, and you may need to point this out to your employees to alleviate concerns about AI-induced job losses. Training is a good way to go about this: for your organisation to fully adopt AI, you’ll need to invest in training to ensure your team’s AI literacy is up to date.

Have concerns or questions about how you’re going to balance benefiting from AI with risk management?

At Data#3, we’re at the forefront of engaging with generative AI, and we’re here to help you along your AI journey. Contact us today to find out how we can help you implement AI into your organisation – securely.