Generative AI has become one of the key technological innovations of the past few years, capturing the interest of technical and non-technical audiences alike. However, the unpredictable nature of these systems raises concerns, and their use within corporate environments must be carefully monitored to safeguard data privacy.
Incident Example:
A notable instance of this unpredictability is the vulnerability found in Nvidia's NeMo framework. Despite the built-in safeguards of Generative AI tools, researchers managed to exploit weaknesses in the framework, emphasising the need for robust monitoring. In this case, instructions as simple as swapping the letter 'I' with 'J' altered the model's behaviour and led the tool to release personally identifiable information (PII) from an internal database.

Importance of Policies:
These incidents highlight the urgency of implementing comprehensive policies on Generative AI usage. Such policies act as a guiding framework for all employees, both technical and non-technical, to use these technologies responsibly, mitigating the risks associated with bias, misuse, and data leakage. This proactive approach secures corporate data while aligning with ethical and legal objectives. Cyberseer and Darktrace complement these policies by monitoring the use of these services to ensure adherence.

Successful Deployment Case Study:
Cyberseer has successfully deployed monitoring measures within the environment of a leading UK transport and logistics provider, involving the following:
- Providing daily reporting on users connecting to generative AI services,
- Developing bespoke models that assess data transfer volumes to AI services, aiming to identify uploads of large amounts of data,
- Deploying models that detect potential beaconing to these services.
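The last two measures can be illustrated with a minimal sketch. This is not Cyberseer's or Darktrace's actual detection logic; the domain list, thresholds, and jitter tolerance below are assumptions chosen for the example, and real deployments would tune these per environment. The first function totals bytes uploaded to known generative AI domains per user from proxy-log events and flags heavy uploaders; the second checks whether connection timestamps occur at the near-constant intervals characteristic of automated beaconing:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical domain list and threshold -- real monitoring would use a
# curated, regularly updated list and environment-specific tuning.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}
UPLOAD_THRESHOLD_BYTES = 50 * 1024 * 1024  # flag > 50 MB per user per day

def flag_large_uploads(proxy_events):
    """proxy_events: iterable of (user, domain, bytes_sent) tuples.

    Returns users whose total upload volume to AI services exceeds
    the threshold, mapped to their total bytes sent.
    """
    totals = defaultdict(int)
    for user, domain, bytes_sent in proxy_events:
        if domain in AI_DOMAINS:
            totals[user] += bytes_sent
    return {u: b for u, b in totals.items() if b > UPLOAD_THRESHOLD_BYTES}

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Heuristic: connections at near-constant intervals suggest beaconing.

    timestamps: sorted connection times (seconds) for one user/domain pair.
    Returns True when interval jitter is small relative to the mean interval.
    """
    if len(timestamps) < 4:
        return False  # too few connections to judge regularity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(intervals)
    return m > 0 and pstdev(intervals) / m < max_jitter

events = [("alice", "api.openai.com", 40_000_000),
          ("alice", "api.openai.com", 30_000_000),
          ("bob", "chat.openai.com", 1_000_000)]
print(flag_large_uploads(events))                    # alice exceeds 50 MB
print(looks_like_beaconing([0, 60, 120, 180, 241]))  # near-regular intervals
```

In practice, daily reporting would feed aggregates like these into analyst review rather than acting on them automatically, since large uploads and regular polling both have benign explanations.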