Benjamin Martin, Managing Consultant at Adarma
In recent months, we have seen artificial intelligence (AI), a discipline that has existed for many years, reach a new level of maturity, moving from the back end of systems to the forefront of our companies. This leap is driven by generative AI, which can understand and generate text, audio, and video.
This advancement has caused AI use cases to skyrocket. As a result, a new challenge has emerged: traditional computing was predictable because humans programmed it deterministically. With AI, and generative AI in particular, the outcomes themselves are not pre-programmed; what is programmed is the method for obtaining results through a training process, which makes those results less deterministic.
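To make the contrast concrete, here is a toy Python sketch (not taken from any production system): a deterministic function always returns the same output for the same input, while a generative-style function samples from a set of possible outputs, standing in loosely for an LLM's temperature-based sampling.

```python
import random

# Deterministic: the same input always yields the same output.
def vat_due(net_amount: float, rate: float = 0.20) -> float:
    return round(net_amount * rate, 2)

assert vat_due(100.0) == 20.0  # holds on every run

# Generative-style sampling: the "program" defines a distribution
# over outputs, not a single answer (a toy stand-in for an LLM).
def draft_reply(prompt: str) -> str:
    templates = [
        "Thanks for getting in touch about {topic}.",
        "We appreciate your query regarding {topic}.",
        "Your message about {topic} has been received.",
    ]
    return random.choice(templates).format(topic=prompt)

print(draft_reply("a billing issue"))  # output varies run to run
```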
Ensuring that these less deterministic results meet minimum ethical standards, such as avoiding bias, poses a new challenge, and it raises the question of when to regulate without stifling progress. We can debate whether now is the right time to regulate, or whether to use this time to our advantage by preparing for a future regulatory framework while mitigating the risks associated with AI. For example, less deterministic results can cause reputational damage if an AI system in an automated call centre produces a biased answer. It is within this context that the EU AI Act emerges.
The EU AI Act, which was passed by the European Parliament on 13 March 2024 and received approval from the EU Council on 21 May 2024, is a comprehensive regulatory framework designed to manage the risks and benefits associated with AI technologies.
This legislation employs a risk-based approach, categorising AI applications according to their potential risks to society. Its controls are structured in three categories: ethical controls, data protection, and cybersecurity. For the latter two categories, the regulation refers to compliance with existing regulations.
Examples of ethical controls include:
- Human oversight: AI systems designated as high-risk must be subject to human oversight throughout their lifecycle to ensure their safe and ethical use.
- Accountability and liability: Providers of high-risk AI systems must ensure they can trace and understand the actions of their AI systems, enabling them to take responsibility for the system’s outcomes and behaviour.
- Addressing bias: AI systems must be designed and developed so that they are not discriminatory or biased, with specific measures in place to detect, minimise, and mitigate any biases that arise during the system’s operation (a minimal sketch of one such check follows this list).
- Transparency: AI systems must provide users with clear and transparent information about the system’s capabilities, limitations, and the risks associated with its use.
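As a concrete illustration of the bias point above, here is a minimal, hypothetical Python sketch of one common fairness check, the demographic parity gap. The group data and the 0.1 tolerance are illustrative assumptions, not thresholds prescribed by the EU AI Act.

```python
# Minimal sketch of one bias check: the demographic parity gap,
# i.e. the difference in positive-outcome rates between two groups.

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = favourable decision (e.g. loan approved), 0 = unfavourable.
# These samples are made up for illustration.
approvals_a = [1, 1, 0, 1, 1, 0, 1, 1]
approvals_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = parity_gap(approvals_a, approvals_b)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a regulatory figure
    print("Gap exceeds tolerance - investigate before deployment.")
```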
As organisations globally adapt to the evolving AI landscape, it’s essential that security leaders understand and navigate the new regulations to ensure compliance and leverage AI safely for their own strategic advantages.
Faced with an ever-evolving regulatory landscape, we find ourselves with yet another framework to comply with. This encompasses not only the new regulatory framework but also new standards and best practice guidelines, such as the NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001:2023 (Artificial Intelligence Management System), and others.
The first question that arises is who is responsible for this new legislation within an organisation. Do we need to create a new role within the company, or can one of the current roles assume this responsibility?
The answer to this question is not simple and will depend on how risk management and regulatory compliance are structured within each company. However, we can provide some guidelines.
Addressing this new compliance framework requires a legal, humanistic, and ethical perspective, working in conjunction with the risk and compliance department. Combining these perspectives enables effective implementation of the framework. Reusing the roles already responsible for data protection may be a good solution.
The new legislation emphasises a risk analysis process. Here, it is crucial to follow the same methodology used for cybersecurity, operational, and financial risks: a single methodology lets you integrate and summarise all risks, giving your board a comprehensive view.
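As a sketch of what that shared methodology might look like in practice, the following hypothetical Python example scores AI, cyber, and operational risks on the same likelihood-times-impact scale (the 1–5 scales are an illustrative convention, not a prescribed one), so they can be ranked together for board reporting.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    domain: str       # "ai", "cyber", "operational", ...
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # The same scoring convention across every risk domain.
        return self.likelihood * self.impact

# Illustrative entries in a unified risk register.
register = [
    Risk("ai", "Biased chatbot answer causes reputational damage", 3, 4),
    Risk("cyber", "Ransomware on customer-facing systems", 2, 5),
    Risk("operational", "Key supplier outage", 3, 3),
]

# One ranked view across all domains for board reporting.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  [{risk.domain}] {risk.name}")
```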
Currently, depending on their size and business objectives, companies may face multiple compliance frameworks, such as the General Data Protection Regulation (GDPR) and the new Digital Operational Resilience Act (DORA), alongside guidelines and standards such as COBIT and NIST. Many also adopt standards published by the International Organization for Standardization (ISO) to remain competitive, and must meet specific frameworks, such as the Cyber Assessment Framework (CAF), if they wish to offer services to the public sector.
The difficulty arises because all these frameworks translate into hundreds, if not thousands, of controls that must be identified, disseminated within the organisation, implemented, and regularly audited.
As mentioned previously, the new legislation in AI adds new controls (mainly of an ethical nature) and emphasises compliance with existing controls in data protection and cybersecurity. In this new scenario, mapping controls across different regulatory frameworks is crucial to streamline and operationalise compliance efforts.
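One way to operationalise that mapping, sketched below under illustrative assumptions, is a simple cross-framework control map: each internal control is implemented once and evidenced against every framework that requires it. The control IDs and framework references here are placeholders, not verified clause citations.

```python
# Minimal sketch of a cross-framework control map. Each internal
# control is implemented once and evidenced against every framework
# that requires it. References below are illustrative placeholders.

control_map = {
    "ACC-01 Human oversight of high-risk AI": {
        "EU AI Act": "human oversight obligations",
        "NIST AI RMF": "GOVERN / MANAGE functions",
        "ISO/IEC 42001": "AI management system controls",
    },
    "LOG-03 Traceability of AI decisions": {
        "EU AI Act": "record-keeping obligations",
        "GDPR": "accountability principle",
    },
}

def frameworks_satisfied(control_id: str) -> list[str]:
    """List the frameworks a single implemented control evidences."""
    return sorted(control_map.get(control_id, {}))

print(frameworks_satisfied("ACC-01 Human oversight of high-risk AI"))
# -> ['EU AI Act', 'ISO/IEC 42001', 'NIST AI RMF']
```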
At some point in the future, we can expect AI itself to help resolve the overlap of controls across multiple standards. LLM-based assistants could answer questions about which controls apply, which have already been reviewed in past audits, and so on.
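A hedged sketch of what such a query might look like, assuming the OpenAI Python client and an API key in the environment; the model name and the catalogue format are illustrative assumptions, not a recommended setup.

```python
# Hedged sketch: asking an LLM which catalogued controls overlap.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY
# set in the environment; catalogue entries are illustrative.
from openai import OpenAI

client = OpenAI()

catalogue = """\
C1 (ISO 27001 A.8.15): Log security events.
C2 (EU AI Act): Keep records of high-risk AI system operation.
C3 (GDPR Art. 30): Maintain records of processing activities.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You map overlapping compliance controls."},
        {"role": "user",
         "content": f"Which of these controls overlap, and why?\n{catalogue}"},
    ],
)
print(response.choices[0].message.content)
```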
Implementing AI in a company can be challenging and is not exempt from new risks that must be identified and appropriately managed. Adarma can assist businesses in this process using its cyber maturity assessment methodology to achieve a secure implementation while being prepared for future regulatory frameworks.
Adarma’s Cybersecurity Maturity Assessment is designed to support the key capabilities essential for strengthening your cybersecurity strategy and delivers a roadmap to improve your cybersecurity posture. Through rigorous evaluation, we establish your current security capabilities and your readiness to respond to cyber threats, and provide you with the metrics you need to make smart investment decisions.
Our success lies in reducing risk and operationalising compliance. Partner with Adarma to navigate the new challenge of developing AI while maximising compliance and minimising risk.
If you would like to learn more about how Adarma can support your organisation’s cyber resilience, please get in touch with us at hello@adarma.com.
To hear more from us, check out the latest issue of ‘Cyber Insiders,’ our C-suite publication that explores the state of the threat landscape, emerging cyber threats, and the most effective cybersecurity best practices.
You can also listen to our new podcast, which explores what it’s really like to work in cybersecurity in today’s threat landscape.
Stay updated with the latest threat insights from Adarma by following us on X and LinkedIn.