AI: chatbots, cut and paste, and data leaks

[Image: abstract illustration of AI]

In just a few short years, artificial intelligence has moved from an academic computer science discipline to the most talked-about topic in technology.

And though some aspects of AI are surely overhyped, AI tools are being adopted widely.
Generative AI, especially, has huge potential to change the way we work.

But it also brings risks.

All these applications rely on data. Yet in most organisations there are few controls over who has access to AI tools, or over which data can be shared with them. And only a minority of organisations have incorporated AI tools into their security policies.

Our guest for this episode is Tim Freestone, of Kiteworks. He’s a long-standing expert in data protection and data privacy. And he’s been following the growth of AI, and what it means for data privacy, security and confidentiality.

Even data specialists have been surprised by the rapid take-up of generative AI and its benefits. But do we have the measures in place to guard against the potential security risks it brings?

It is not just malicious hackers who make AI tools such as chatbots a risk. Even something as simple as pasting information into a generative AI tool can leak confidential data. And Freestone argues that we need to apply security's zero-trust approach to AI too.
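For readers who want a concrete picture of what that zero-trust stance might look like in practice, here is a minimal sketch of a gate that could sit between a clipboard paste and a chatbot. It is illustrative only: the regex patterns, function names, and logging are assumptions made for the example, not taken from Kiteworks or any real product, and a production deployment would rely on a proper data-loss-prevention engine rather than a handful of regexes.

```python
import re

# Illustrative patterns only: a real deployment would use a proper
# data-loss-prevention engine, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}


def redact(text: str) -> tuple[str, list[str]]:
    """Mask anything that looks sensitive before it leaves the organisation.

    Returns the redacted text plus the names of the patterns that fired,
    so every blocked paste can be logged and reviewed.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, findings


def safe_prompt(pasted_text: str) -> str:
    """Zero-trust gate: nothing reaches the chatbot until it has been vetted."""
    cleaned, findings = redact(pasted_text)
    if findings:
        print(f"Redacted before sending: {', '.join(findings)}")
    return cleaned  # this, not the raw paste, is what goes to the AI tool


if __name__ == "__main__":
    pasted = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(safe_prompt(pasted))
```

The design point, under these assumptions, is that the AI tool is treated like any other untrusted external service: data is inspected and minimised on the way out, and every attempt to send something sensitive leaves an auditable trace.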

Interview by Stephen Pritchard

Image by Julius H. from Pixabay