- 30% of Britons are providing AI chatbots with confidential personal information
- Research from NymVPN shows company and customer data is also at risk
- The findings emphasize the importance of taking precautions, like using a quality VPN
Almost one in three Britons shares sensitive personal data with AI chatbots like OpenAI’s ChatGPT, according to research from cybersecurity company NymVPN. Some 30% of Brits have fed AI chatbots confidential information such as health and banking data, potentially putting their privacy – and that of others – at risk.
This oversharing with the likes of ChatGPT and Google Gemini comes despite 48% of respondents expressing privacy concerns over AI chatbots. The research also shows that the issue extends to the workplace, with employees sharing sensitive company and customer data.
NymVPN’s findings come in the wake of a number of recent high-profile data breaches, most notably the Marks & Spencer cyber attack, which showed just how easily confidential data can fall into the wrong hands.
“Convenience is being prioritized over security”
NymVPN’s research reveals that 26% of respondents admitted to disclosing financial information related to salary, investments, and mortgages to AI chatbots. Riskier still, 18% shared credit card or bank account data.
24% of those surveyed by NymVPN admit to having shared customer data – including names and email addresses – with AI chatbots. More worrying still, 16% uploaded company financial data and internal documents such as contracts. This is despite 43% expressing worry about sensitive company data being leaked by AI tools.
“AI tools have rapidly become part of how people work, but we’re seeing a worrying trend where convenience is being prioritized over security,” said Harry Halpin, CEO of NymVPN.
M&S, Co-op, and Adidas have all been in the headlines for the wrong reasons, having fallen victim to data breaches. “High-profile breaches show how vulnerable even major organizations can be, and the more personal and corporate data that is fed into AI, the bigger the target becomes for cybercriminals,” said Halpin.
The importance of not oversharing
With nearly a quarter of respondents admitting to sharing customer data with AI chatbots, there is a clear urgency for companies to implement guidelines and formal policies for the use of AI in the workplace.
“Employees and businesses urgently need to think about how they’re protecting both personal privacy and company data when using AI tools,” said Halpin.
Although avoiding AI chatbots entirely would be the optimal solution for privacy, it’s not always the most practical. Users should, at the very least, avoid sharing sensitive information with AI chatbots. Privacy settings can also be tweaked, such as disabling chat history or opting out of model training.
A VPN can add a layer of privacy when using AI chatbots such as ChatGPT by encrypting a user’s internet traffic and masking their original IP address. This helps keep a user’s location private and prevents their ISP from seeing what they’re doing online. Still, even the best VPN isn’t enough if sensitive personal data is still being fed to AI.