ChatGPT in Businesses: A Look at Four Potential Privacy Risks

Published October 23, 2023

With the growing reliance on large language models (LLMs) such as ChatGPT in the workplace, it is critical to consider the potential risks, particularly those involving sensitive corporate data. A recent Kaspersky survey echoes this concern, showing growing use of such chatbots in Russia, Belgium, and the UK.

One primary concern is that the tech giants operating LLM-based chatbots are themselves susceptible to hacking or accidental leakage. Another risk factor is 'unintended memorisation': a model can reproduce near-verbatim fragments of its training data, so any sensitive text that ends up in that data creates a potential privacy risk.
The use of unofficial alternative services presents another alarming avenue for cyber-attacks, including malware downloads disguised as apps or clients that do not actually exist. Furthermore, using ChatGPT exposes employee accounts to potential hacking attempts, and a compromised account, along with the conversations stored in it, opens a gateway to sensitive corporate data.

According to Anna Larkina, a security and privacy expert at Kaspersky, the risk of sensitive data leakage is highest when employees use personal accounts at work, which spotlights the need for companies to educate their staff about the risks of using chatbots. The company recommends strong, unique passwords, vigilance against phishing attempts, regular software updates, and broader employee cybersecurity education.
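
As a minimal sketch of the 'strong, unique passwords' recommendation, the Python snippet below generates a random password with the standard-library secrets module, which is designed for security-sensitive randomness. The 16-character length and the character set are illustrative choices, not part of Kaspersky's guidance.

import secrets
import string

def generate_password(length: int = 16) -> str:
    # secrets.choice draws from a cryptographically secure source,
    # unlike random.choice, which is not suitable for passwords.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints a fresh 16-character password

In practice, a password manager is the usual way to generate passwords like this and keep them unique for every account.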

