Lawtitude

The EU AI Act is Here—but is the Industry Ready to Comply?

The European Union’s landmark Artificial Intelligence (AI) Act has officially taken its first steps into implementation, marking a significant moment in the global regulation of AI technologies. As of last week, key provisions of the Act, particularly those targeting AI systems deemed to pose an “unacceptable risk,” have come into force. This includes bans on AI-driven social scoring, manipulative AI applications, and biometric-based predictive systems.

Despite these regulatory advancements, compliance across industries remains sluggish. Many companies are still grappling with the Act’s requirements and assessing the impact on their operations. Initial observations suggest that while awareness is growing, full compliance is far from widespread. The slow pace of adherence can be attributed to multiple factors, ranging from a lack of AI governance frameworks to the complexities of enforcement.

Why Is Compliance Lagging?
The delayed compliance with the EU AI Act is not necessarily a sign of resistance but rather an indication of the challenges organizations face in adapting to the new legal landscape. Several key factors contribute to this sluggish response.

  1. Lack of Awareness and Readiness
    For many organizations, the EU AI Act remains an evolving regulatory challenge. While large tech companies have been monitoring developments, smaller and mid-sized enterprises are only now beginning to take stock of how the law applies to them. The complexity of the Act, coupled with varying degrees of AI deployment across industries, means that some companies are still in the initial stages of conducting risk assessments.

    The Act mandates that AI deployers and providers ensure their employees have a sufficient level of AI literacy, a requirement that is only beginning to gain traction. Many businesses have yet to implement structured training programs or appoint dedicated AI governance teams. Without a clear internal framework, companies struggle to assess and mitigate compliance risks effectively.

  2. Complexity of Risk Assessment and AI Audits
    One of the major hurdles in compliance is the requirement for AI risk assessments. Organizations must evaluate their AI systems against the categories outlined in the Act, which include:
    – Unacceptable Risk (banned outright)
    – High-Risk AI (subject to strict oversight)
    – Limited Risk (transparency requirements)
    – Minimal Risk (largely unregulated)
    Identifying which category an AI system falls under is not always straightforward. Many companies lack in-house expertise to conduct comprehensive audits, forcing them to seek external consultants—adding time and cost to the compliance process.

  3. Uncertainty in Enforcement and Legal Interpretation
    While the provisions of the Act are coming into effect, the actual enforcement mechanisms are still being established. Regulators across EU member states are in different stages of readiness, leading to inconsistencies in how the law will be applied. Some countries may aggressively pursue compliance, while others may take a more lenient approach initially.

    This uncertainty makes it difficult for businesses to gauge how urgently they need to act. Some companies may opt for a “wait-and-see” approach, delaying full compliance until clearer enforcement patterns emerge.

  4. Focus on Larger Companies First
    Historically, EU regulatory bodies have prioritized enforcement against major corporations before expanding scrutiny to smaller businesses. The early years of the General Data Protection Regulation (GDPR) saw enforcement actions concentrated on tech giants, while smaller firms had more time to adapt.

    A similar trend may unfold with the EU AI Act. Large AI providers, particularly those offering biometric identification tools, automated decision-making systems, and predictive analytics, are expected to be the first targets of enforcement. As a result, smaller companies that use AI peripherally may not yet feel the same urgency to comply.

  5. Need for Cross-Border Coordination
    AI systems often operate across multiple jurisdictions, making compliance with the EU AI Act a complex challenge for multinational organizations. Companies that develop or deploy AI globally must navigate overlapping regulatory frameworks, including AI laws in the U.S. and China, as well as emerging regulations in other regions.

    For businesses that serve both EU and non-EU markets, striking the right balance between compliance and operational flexibility is a major challenge. Companies may need to develop different AI governance strategies for different regions, which can slow down implementation efforts.
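For teams building internal compliance tooling, the four-tier risk taxonomy described under point 2 above can be sketched in code. The triage rules below are purely illustrative assumptions (real classification requires legal analysis of the Act's prohibitions and its high-risk annex), but they show how an organization might encode a first-pass screening of its AI inventory:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict oversight"
    LIMITED = "transparency requirements"
    MINIMAL = "largely unregulated"

def triage(uses_social_scoring: bool,
           affects_fundamental_rights: bool,
           interacts_with_people: bool) -> RiskTier:
    """Hypothetical first-pass triage of an AI system.

    The three boolean inputs are simplified stand-ins for the
    legal criteria; they are NOT the Act's actual tests.
    """
    if uses_social_scoring:
        return RiskTier.UNACCEPTABLE   # e.g. social scoring is banned
    if affects_fundamental_rights:
        return RiskTier.HIGH           # e.g. hiring, credit, policing
    if interacts_with_people:
        return RiskTier.LIMITED        # e.g. chatbots must disclose AI use
    return RiskTier.MINIMAL            # e.g. spam filters, game AI
```

A sketch like this is useful for inventorying systems and flagging which ones need a full legal review, not for making the final legal determination.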

The Path Forward: Steps for Businesses to Achieve Compliance
Despite these challenges, businesses cannot afford to ignore the EU AI Act. As regulatory scrutiny increases, organizations must take proactive steps to align with the new framework. Here are key measures companies should adopt:

  • Conduct Comprehensive AI Audits
    Companies must perform thorough AI audits to identify systems that may fall under the high-risk or unacceptable risk categories. This includes reviewing AI-driven decision-making tools, facial recognition software, and predictive analytics systems.
  • Establish AI Governance Boards
    Implementing internal AI governance boards can help organizations develop compliance strategies. These boards should include legal, technical, and ethical experts to oversee AI deployments and ensure they align with regulatory requirements.
  • Enhance AI Literacy Across the Organization
    Educating employees about AI risks and regulatory requirements is essential. Training programs should be rolled out across departments to ensure compliance efforts are understood and implemented at all levels.
  • Engage with Regulators and Industry Groups
    Companies should actively participate in industry discussions and engage with regulatory bodies to stay informed about evolving compliance expectations. Collaboration with AI ethics groups and legal advisory firms can provide valuable insights.
  • Develop Clear Documentation and Transparency Policies
    The EU AI Act emphasizes transparency in AI decision-making. Organizations must ensure that their AI systems provide explainable and auditable outputs. Clear documentation of AI models, data sources, and decision-making processes will be critical for regulatory compliance.
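As a minimal sketch of the documentation point above, an organization might maintain a structured record per AI system so that model details, data sources, and oversight arrangements are auditable in one place. The field names and example values below are hypothetical assumptions, not terms taken from the Act:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    """Hypothetical documentation record for one AI system.

    Fields mirror the Act's transparency themes (purpose, data,
    model, oversight); they are not the Act's legal requirements.
    """
    system_name: str
    intended_purpose: str
    data_sources: list
    model_description: str
    human_oversight: str

    def to_json(self) -> str:
        # Serialize for audit trails or regulator requests.
        return json.dumps(asdict(self), indent=2)

# Example entry for a fictitious credit-screening tool.
record = AISystemRecord(
    system_name="loan-screening-v2",
    intended_purpose="Pre-screen consumer credit applications",
    data_sources=["internal credit history", "application forms"],
    model_description="Gradient-boosted trees, retrained monthly",
    human_oversight="Final decisions reviewed by a credit officer",
)
```

Keeping such records versioned alongside the models themselves makes it far easier to respond when an auditor or regulator asks how a given decision was produced.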

Conclusion
The EU AI Act marks a historic shift in how artificial intelligence is regulated, setting a precedent for other jurisdictions. However, the slow pace of compliance underscores the complexities of adapting to this new legal framework. Many companies are still in the early phases of understanding and implementing the necessary changes, hindered by a lack of awareness, resource constraints, and uncertainty about enforcement.

As regulatory scrutiny increases in the coming months and years, businesses must take proactive steps to align with the EU AI Act’s provisions. While initial enforcement may focus on larger AI providers, companies of all sizes should begin strengthening their AI governance frameworks now to avoid potential legal and financial risks down the line.

By embracing AI governance and ethical compliance, organizations can not only meet regulatory requirements but also build trust with consumers, stakeholders, and regulators in an era of rapidly evolving AI technologies.
