Recently, the European Union introduced a new piece of legislation known as the Artificial Intelligence Act. The law is seen as a landmark moment for technology regulation worldwide and a positive step towards an Artificial Intelligence-inclusive world. The Act has now taken conclusive shape after the fifth and final round of negotiations. The EU made its first proposal for a framework on Artificial Intelligence in April 2021; since then, continuous efforts to reach a consensus culminated, after two and a half years, in the EU’s AI Act. At this stage, however, no final consolidated text is publicly available for the public to comment on. Nonetheless, the political agreement, together with earlier drafts and the discussions and developments of the last few months, provides a substantial basis for this article’s commentary on the key features of the Artificial Intelligence Act.
What is the aim of the EU’s AI Act?
The main aim of the AI law is to regulate the governance and functioning of AI in society. It is billed as the world’s first comprehensive AI law, intended to ensure better conditions for the development and use of this innovative technology so that it benefits societal functioning through improved healthcare, unbiased judicial decisions, cost-effective labor, efficient resource utilization, and various other applications. The parliamentary discussions make clear that Parliament’s priority in framing the Act was to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The Act emphasizes human oversight of AI systems to prevent harmful outcomes, advocating human control rather than full automation.
By referring to the draft proposal of the European Parliament and of the Council regarding the harmonized rules on Artificial Intelligence, one can understand the context in which the AI Act is designed. It states that laying down the AI Act is in the Union’s interest to preserve the EU’s technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights, and principles. In addition, the proposal outlines the specific objectives of the AI Act, which include: ensuring that AI systems placed on the Union market are safe and respect existing law on fundamental rights and Union values; ensuring legal certainty to facilitate investment and innovation in AI; enhancing governance and the effective enforcement of existing law on fundamental rights and of safety requirements applicable to AI systems; and facilitating the development of a single market for lawful, safe, and trustworthy AI applications while preventing market fragmentation.
Hence, the EU contends that to attain these goals, a fair and measured overarching regulatory strategy for AI is necessary. This approach should only include the essential requirements to tackle the associated risks and issues related to AI, without unreasonably restricting or impeding technological advancements or disproportionately raising the cost of introducing AI solutions to the market.
Risk-based regulatory approach
One of the remarkable aspects of the Act is its adoption of a precisely defined risk-based regulatory approach that avoids imposing unnecessary trade restrictions. Legal intervention is targeted specifically at situations where there is a valid cause for concern or where such concerns can reasonably be anticipated in the near future. In this way, the Act rests on a legal framework guided by proportionality and necessity, imposing a regulatory burden only when an AI system is likely to pose high risks to fundamental rights and safety. The Act therefore lays down a solid risk methodology to define “high-risk” and classifies AI systems into four risk categories depending on their use cases: unacceptable-risk, high-risk, limited-risk, and minimal/no-risk.
First, AI systems falling under the unacceptable-risk category are considered a threat to the fundamental rights of citizens and are banned outright. Such systems include biometric categorisation systems that use sensitive characteristics; emotion recognition in the workplace and educational institutions; untargeted scraping of facial images from the Internet or CCTV footage; and social scoring, i.e., classifying people based on behaviour, socio-economic status, or personal characteristics. Barring a few exceptions for law-enforcement purposes, any use of AI for the aforesaid purposes will constitute non-compliance with the law.
Second, high-risk AI systems, which pose significant risks to the health, safety, or fundamental rights of persons, will have to comply with a set of horizontal mandatory requirements for trustworthy AI and undergo conformity assessment procedures before they can be placed on the Union market.
Third, AI systems categorized as limited-risk will have to comply with minimal transparency requirements that allow users to make informed decisions: after interacting with an application, a user can decide whether to continue using it.
Finally, on the remaining non-high-risk AI systems, only very limited transparency obligations are imposed, for example providing information to flag the use of an AI system when it interacts with humans. Overall, the law avoids a “one size fits all” approach and instead imposes specific requirements to address specific types of AI systems or AI uses, e.g., those that interact with people, such as chatbots or deepfakes.
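The four-tier structure described above can be sketched as a simple data model. This is purely an illustration of the classification scheme: the tier names follow the Act, but the example use cases and obligation summaries are this article’s paraphrase, not a legal test.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories in the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable-risk"   # banned outright
    HIGH = "high-risk"                   # mandatory requirements + conformity assessment
    LIMITED = "limited-risk"             # minimal transparency obligations
    MINIMAL = "minimal/no-risk"          # essentially no specific obligations

# Illustrative mapping of example use cases to tiers (author's paraphrase,
# not an authoritative legal classification).
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "untargeted facial-image scraping": RiskTier.UNACCEPTABLE,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Very rough summary of what each tier implies under the Act."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited (narrow law-enforcement exceptions)",
        RiskTier.HIGH: "mandatory requirements and conformity assessment before market entry",
        RiskTier.LIMITED: "minimal transparency duties so users can make informed decisions",
        RiskTier.MINIMAL: "no specific obligations beyond existing law",
    }[tier]
```

The point of the sketch is that the Act attaches obligations to the tier, not to the technology itself: the same underlying model could fall into different tiers depending on its use case.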
Promoting transparency in AI systems’ design and functioning
A key priority of the Act is to promote transparency in the use of AI and to enable accountability for decisions made by companies and public authorities. In the Parliament position adopted on 14 June 2023, preceding the present and final version of the Act, transparency is listed among the general principles applicable to all AI systems under Article 4a, which stipulates that transparency for the purposes of that provision means that “AI systems shall be developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights”. The EU finds transparency essential for ensuring public trust in AI systems and their responsible deployment. The law therefore imposes extensive, mandatory transparency requirements, to be met by disclosing information both to affected individuals and to the public, and further obligates that technical infrastructure be developed to support transparency within AI systems. The intent of the transparency principle is that even a person unfamiliar with AI can understand how AI systems make decisions and the logic behind them. This includes providing an explanation of how an AI system arrived at its decisions, as well as information on the data used to train the system and on the system’s accuracy.
Enforcement and penalties
In order to enforce the mandate of the AI Act, national competent authorities will have enforcement powers, including the capacity to impose significant fines depending on the level of non-compliance. According to the available sources, the penalties are hefty: for use of prohibited AI systems, fines may reach up to 7% of worldwide annual turnover (revenue) or 34 million euros, while non-compliance with the requirements for high-risk AI systems will be subject to fines of up to 3% of the same.
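To make the scale of these caps concrete, the sketch below assumes the widely reported “whichever is higher” reading of the fine structure (a fixed amount or a percentage of worldwide turnover) and uses the figures quoted above; the exact amounts and mechanics may differ in the final published text.

```python
def fine_cap(annual_turnover_eur: float, pct: float, fixed_cap_eur: float) -> float:
    """Maximum fine under the assumed 'whichever is higher' structure:
    the greater of a fixed amount and a share of worldwide annual turnover.
    (Assumption based on reporting of the political agreement, not the final text.)"""
    return max(fixed_cap_eur, annual_turnover_eur * pct)

# A firm with EUR 1 billion in turnover using a prohibited AI system:
# 7% of turnover (EUR 70M) exceeds the EUR 34M fixed amount.
prohibited_cap = fine_cap(1_000_000_000, 0.07, 34_000_000)  # 70,000,000.0

# For a smaller firm with EUR 100 million in turnover, the fixed amount
# (EUR 34M) exceeds 7% of turnover (EUR 7M), so the fixed amount governs.
small_firm_cap = fine_cap(100_000_000, 0.07, 34_000_000)  # 34,000,000
```

Under this reading, the percentage-based cap bites for large companies while the fixed amount sets the ceiling for smaller ones, which is how turnover-linked fines under the GDPR also operate.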
Conclusion
The AI Act has not been spared criticism. One major criticism is the challenge of accurately defining and categorising AI applications: the evolving nature of AI technologies may make it difficult to establish clear boundaries between risk levels, potentially leading to uncertainty in regulatory implementation. Similarly, heavy penalties and difficult enforcement could directly hinder the competitiveness of European businesses in the global AI market.
Nonetheless, despite these anticipated limitations, the current law is a positive and significant first step towards inclusive AI with a solid framework of governance. Around the world, AI has already made its space in many spheres, from the judiciary to policing, and from healthcare to manufacturing. Without a framework to manage or govern AI systems, potential risks are left unaddressed. The European Union has been a trendsetter in regulating technology and cyberspace, as is evident from the GDPR, widely considered one of the finest pieces of data protection legislation in the world. High hopes therefore rest on the effective functioning of the AI Act, so that other nations can follow the EU’s path in making effective AI laws for their own countries.