GenAI Security: What's a CISO/CIO to do?
...the wheel is a much more important invention than what people believed it to be. The wheel helped transport people and allowed people to trade their goods. This invention led to many more advancements such as the spinning wheel, the water wheel, and modern transportation that changed the world drastically. (source: https://www.interstem.us/events/the-invention-that-changed-the-world-the-wheel.html)
Is GenAI this generation's wheel?
As one might expect, GenAI is being hailed as a technological marvel, especially by social media influencers who are convinced that it will definitely, if it hasn't already, increase innovation and wealth, turn arid land into green havens, and turn us into ethical, well-rounded, yet incredibly smart individuals. The challenge, of course, is understanding how to squeeze this juicy orange for its Vitamin-C-packed nectar without spraying, and blinding, everyone around us. The fact that we are in the throes of the so-called Cancel Culture only adds to the risk of using GenAI without being cautious of its societal impacts.
Ethical Considerations w/ GenAI Use
GenAI models ingest massive amounts of data from many different sources (kosher or otherwise), and may therefore produce results that violate the principles of fairness, accountability, and transparency.
What is Fairness generally? Webster defines Fairness as
lack of favoritism (or more support and favor)* toward one side or another
*Author's addition
Ardent users and supporters of Machine Learning have witnessed the unfair treatment of people belonging to a certain socio-economic class (read this article, this article and this article), and fortunately, steps are being taken to overcome these situations (read this article, this article and this article).
Fairness in GenAI is, therefore, not much different from Fairness in general. GenAI models trained on biased datasets will reflect those biases in their results and can produce unequal, discriminatory outcomes based on markers such as gender, ethnicity, nationality, and/or economic class.
What is Accountability? Webster defines it as
an obligation or willingness to accept responsibility or to account for one's actions.
Yes, accounting for one's actions is important if we, as conscientious users of technology, and AI in particular, want to ensure Fairness and Transparency from our AI overlords (at least that's what Elon keeps telling us to call AI systems).
A straightforward way to fill the Accountability gap is a remnant of the past, the much-derided RACI chart (just because something is old does not mean it is useless).
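A RACI chart can be captured in something as simple as a data structure that teams can query and keep in version control. The following is a minimal sketch; the activities and role names are illustrative assumptions, not prescriptions.

```python
# A minimal sketch of a RACI chart for GenAI accountability,
# encoded as a plain data structure. Activities and roles below
# are illustrative assumptions only.

RACI = {
    "Approve training data sources": {
        "Responsible": "Data Engineering Lead",
        "Accountable": "CISO",
        "Consulted":   ["Legal", "Privacy Officer"],
        "Informed":    ["CIO"],
    },
    "Review model outputs for bias": {
        "Responsible": "ML Team",
        "Accountable": "CIO",
        "Consulted":   ["Ethics Board"],
        "Informed":    ["CISO", "Business Owners"],
    },
}

def accountable_for(activity: str) -> str:
    """Return the single role accountable for an activity."""
    return RACI[activity]["Accountable"]

print(accountable_for("Approve training data sources"))  # CISO
```

Keeping exactly one "Accountable" entry per activity is the point of the exercise: when something goes wrong, there is no ambiguity about who answers for it.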
Leaders must ensure that clear lines of responsibility and accountability are known, and any adverse consequences should be dealt with as quickly as possible (not merely when feasible).
What is Transparency? Webster defines it as
a: free from pretense or deceit
b: easily detected or seen through
c: readily understood
d: characterized by visibility or accessibility of information especially concerning business practices
For GenAI (and AI models in general), the method behind a model's deductions should be obvious and understandable. This attribute of an AI model is called its Explainability. Ideally, users should be able to understand a model if we want them to trust and accept it (and, by extension, the human beings behind it).
Tooling can help here: IBM has open-sourced AI Fairness 360 (FOSS), and Microsoft has released Fairlearn.
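To make the fairness idea concrete, here is a pure-Python sketch of demographic parity difference, one of the group-fairness metrics that toolkits like Fairlearn and AI Fairness 360 expose. The data below is made up for illustration.

```python
# Sketch of the demographic parity difference metric: the gap in
# positive-prediction rates between demographic groups. A perfectly
# "demographically fair" model scores 0. Data here is illustrative.

def demographic_parity_difference(predictions, groups):
    """predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g., "A", "B")"""
    counts = {}
    for pred, grp in zip(predictions, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + pred)
    rates = [pos / tot for tot, pos in counts.values()]
    return max(rates) - min(rates)

# Group "A" is selected 75% of the time, group "B" only 25%:
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A large gap like 0.5 is exactly the kind of signal that should trigger a review of the training data and the model before it influences real decisions.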
Does GenAI pose an existential threat to all humanity?
Elon wants us to believe this, as did Stephen Hawking before his death (read the news article here). They have openly warned us about the existential risk that AI poses to our species because:
- GenAI is constantly learning about us, our world, our fears and our ambitions. All this information on its own may not turn GenAI into Galvatron unless, of course, we keep supplying it with more ammunition (compute power), it keeps arranging us into meta-patterns of behaviour, and one day decides to eliminate the 'outliers' because, after all, the way to peace and prosperity is through standardizing everything?
- Automating complex systems is considered par for the course in our modern workplaces. Imagine automating an AI agent that is 'aware' of its agency and autonomy and learns more about us with every passing day. If such an agent (along with a few thousand of its agentic friends) decided to bring down our power grid because it's killing the environment, would we be able to stop it?
- The AI gold rush turned everyone with a laptop who understood the difference between the median and the mean into a Data Scientist, and many of them may have shared their models with the world. Depending on their sophistication, those models can be manipulated for nefarious ends.
What are some Evolving GenAI Concerns?
Deliberate manipulation of input data to models
Also called Adversarial Attacks, these attacks deliberately feed corrupted data to a model, forcing it to make erroneous predictions. Those who rely on the affected models to determine their course of action could hurt themselves or others. Examples include Model Evasion, Targeted Misclassifications, and Poisoning of Training Datasets, among others.
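Model evasion is easiest to see on a toy model. The sketch below uses a made-up linear classifier and shows how a small, targeted perturbation of the input (in the spirit of gradient-sign attacks) flips the model's decision; all numbers are illustrative.

```python
# Toy sketch of model evasion against a linear classifier.
# Weights and inputs are invented for illustration; the point is
# that a small, targeted nudge to the input flips the decision.

def predict(weights, bias, x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

weights, bias = [2.0, -1.0], 0.0
x = [0.5, 0.4]                       # honestly classified as 1
assert predict(weights, bias, x) == 1

# Evasion: nudge each feature *against* the sign of its weight,
# the direction that lowers the score fastest (gradient-sign style).
def sign(v):
    return (v > 0) - (v < 0)

epsilon = 0.4                        # small perturbation budget
x_adv = [xi - epsilon * sign(w) for xi, w in zip(x, weights)]

print(predict(weights, bias, x_adv))  # 0 -- decision flipped
```

Real attacks target far larger models, but the mechanics are the same: an attacker who knows (or can estimate) how the model reacts to inputs can craft inputs that look ordinary yet produce the wrong answer.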
Minimal Monitoring of GenAI tools and their outputs
Observability is important: it helps us make scientific, evidence-based decisions and uses data to support our work. However, typical observability and monitoring tools may not be able to provide the same oversight and auditing for GenAI-based software, since the models are largely opaque and their outputs are non-deterministic.
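Until purpose-built GenAI observability matures, even a thin guardrail-and-logging layer over model outputs is better than nothing. Below is a minimal sketch; the blocked-term list, length threshold, and function names are assumptions for illustration, not a product recommendation.

```python
# Sketch of lightweight output monitoring for a GenAI tool: every
# response is checked against simple guardrails and logged, giving
# humans an audit trail of what the model actually produced.
# Rules and thresholds below are illustrative assumptions.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-monitor")

BLOCKED_TERMS = {"password", "ssn"}   # assumed org-specific list
MAX_LENGTH = 2000                     # assumed response size cap

def check_output(response: str) -> list:
    """Return the list of guardrail violations for a response."""
    violations = []
    if len(response) > MAX_LENGTH:
        violations.append("response too long")
    for term in sorted(BLOCKED_TERMS):
        if term in response.lower():
            violations.append("blocked term: " + term)
    return violations

def monitored_reply(response: str) -> str:
    """Log every response; withhold any that trips a guardrail."""
    violations = check_output(response)
    if violations:
        log.warning("GenAI output flagged: %s", violations)
        return "[response withheld pending review]"
    log.info("GenAI output passed checks")
    return response

print(monitored_reply("Your SSN is 123-45-6789"))
```

The value is less in the specific rules than in the habit: every GenAI response passes through a checkpoint that can flag, withhold, and record it for later audit.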
Increased use of GenAI for hyper-scaling commonly known malicious acts
The first time the Crown Prince of Burkina Faso emailed us promising millions in exchange for our banking information, it, of course, seemed too good to be true. Yet it was not the preposterous claims in the email that confirmed it was a phishing attack, but the lack of semantic coherence in the message itself. Riddled with spelling and grammatical errors, such frauds were easy to steer clear of. In the new GenAI world, however, proper sentence construction (imbued with a sense of authority) makes it difficult for the less aware among us to separate the wheat from the chaff. This kind of fraud is a particular problem in developing countries, where GenAI is being adopted without due consideration for the education and governance that must accompany it.
GenAI tools are not easily integrated w/ other security tools
Though a concern now, one can argue this is an area where work is underway, and in due time we will have purpose-built security tools for GenAI applications. Until then, the problem needs to be handled with contemporary tactics such as IAM and RBAC.
Is there any hope for CISOs and CIOs?
Fortunately, yes. It's going to take some doing but with the right support and expertise, CISOs and CIOs can put strong structures in place to minimize the impact of GenAI software on their organization, their business and ultimately their reputation.
- Confirm, through periodic risk assessments, the organization's risk profile and put steps in place to mitigate known risks.
- Integrate security into the development cycle for a GenAI application by following secure coding practices, proper model validation and a constant lookout for unexpected outcomes.
- Build a strong culture of security around AI, period. With great power comes great responsibility; a culture of security may inhibit rapid innovation, but it will elevate an organization's security posture.
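The second recommendation above mentions model validation and a constant lookout for unexpected outcomes. One simple expression of that idea is to validate a GenAI tool's output against an expected schema before any downstream system consumes it; the field names below are illustrative assumptions.

```python
# Sketch of output validation in a GenAI development cycle:
# parse and sanity-check a model's JSON response before it is
# trusted downstream. Field names ("answer", "confidence") are
# illustrative assumptions about the response schema.

import json

def validate_model_output(raw: str) -> dict:
    """Parse and sanity-check a GenAI tool's JSON output."""
    data = json.loads(raw)                  # raises on malformed JSON
    if not isinstance(data.get("answer"), str):
        raise ValueError("missing or non-string 'answer'")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        raise ValueError("'confidence' must be a number in [0, 1]")
    return data

ok = validate_model_output('{"answer": "42", "confidence": 0.9}')
print(ok["answer"])  # 42
```

Rejecting malformed or out-of-range outputs at the boundary is a cheap way to turn "unexpected outcomes" into loud, early failures instead of silent downstream corruption.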
I write to remember, and if, in the process, I can help someone learn about Containers, Orchestration (Docker Compose, Kubernetes), GitOps, DevSecOps, VR/AR, Architecture, and Data Management, that is just icing on the cake.