On August 1, the EU Artificial Intelligence Act came into force across the bloc, setting strict rules on the use of AI for facial recognition, creating safeguards for general-purpose AI systems and protecting consumer rights to submit complaints and request meaningful explanations about decisions made with high-risk AI systems that affect citizens’ rights.
The AI Act legislation outlines EU-wide measures designed to ensure that AI is used safely and ethically, and includes new transparency requirements for developers of foundation AI models like ChatGPT.
The European Union Parliament voted the Artificial Intelligence Act into law on March 13, 2024, with 523 members voting in favour of its adoption, 46 voting against it and 49 abstaining. The vote came after the member states agreed on the regulations in negotiations in December 2023.
The AI Act was published in the European Union’s Official Journal on July 12, 2024 and officially came into force (meaning it took effect) on August 1. However, various provisions will apply in phases:
Bans on prohibited practices, including use of AI systems that present unacceptable risk, will apply six months after the entry-into-force date (approximately February 2025).

General-purpose AI rules, including governance and transparency requirements, will go into effect 12 months after entry into force (approximately August 2025).

Obligations for AI systems designated as high-risk by the Commission must be in place 24 months after entry into force (approximately August 2026).

Obligations for high-risk systems that are subject to existing EU health and safety legislation will go into effect 36 months after entry into force (approximately August 2027).

What is the AI Act? The AI Act is a set of EU-wide legislation that seeks to place safeguards on the use of AI in Europe, while simultaneously ensuring that European businesses can benefit from the rapidly evolving technology.
The legislation establishes a risk-based approach to regulation that categorises AI systems based on their perceived level of risk to and impact on citizens.
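The Act's tiers are commonly summarised as unacceptable, high, limited and minimal risk. As an illustration only, a compliance team might model these tiers in code when building an internal inventory; the use-case mapping below is hypothetical and simplified, since real classification requires legal analysis of the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers commonly used to summarise the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping from internal use-case labels to tiers,
# for illustration only -- not a legal determination.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a use case (defaults to MINIMAL)."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice, any such lookup table would be a starting point for a legal review, not a substitute for one.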
The following use cases are banned under the AI Act:
Biometric categorisation systems that use sensitive characteristics (e.g., political, religious or philosophical beliefs, sexual orientation, race).

Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Emotion recognition in the workplace and educational institutions.

Social scoring based on social behaviour or personal characteristics.

AI systems that manipulate human behaviour to circumvent their free will.

AI used to exploit the vulnerabilities of people due to their age, disability, or social or economic situation.

SEE: How to Prepare Your Business for the EU AI Act With KPMG’s EU AI Hub
What are the penalties for breaching the AI Act? Companies that fail to comply with the legislation face fines of up to €35 million ($38 million) or 7% of global annual turnover for the most serious infringements, scaling down to €7.5 million ($8.1 million) or 1.5% of turnover, depending on the infringement and the size of the company.
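For a large company, a cap phrased as "X million or Y% of turnover" is generally read as whichever amount is higher. A rough sketch of that arithmetic (the function name and the large-firm interpretation are simplifying assumptions, not legal advice):

```python
def max_fine_eur(global_turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the higher of the fixed cap and the turnover percentage --
    the usual reading of "X EUR or Y% of turnover" for large firms.
    pct_of_turnover is given as a percentage (e.g. 7 for 7%)."""
    return max(fixed_cap_eur, global_turnover_eur * pct_of_turnover / 100)

# Most serious infringements: EUR 35 million or 7% of global turnover.
# A firm with EUR 1 billion turnover: 7% (EUR 70M) exceeds the EUR 35M cap.
fine = max_fine_eur(1_000_000_000, 35_000_000, 7)
```

For smaller firms the fixed amount dominates, which is why the same clause bites very differently depending on company size.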
What do businesses that use AI need to do? TechRepublic recommends that businesses consider the following actions to help ensure compliance with the new legislation:
Produce an inventory of AI systems deployed internally and provided by third-party vendors.

Implement or update a governance framework with input from multiple business functions, ensuring its alignment with the latest regulatory standards.

Identify risk management procedures already in place and update with potential risks associated with AI systems.

Assess the geographical use of AI in the business and identify the relevant standards and rules for the jurisdictions it operates within.

Consult with external legal and AI experts to ensure a comprehensive and up-to-date understanding of the regulatory landscape.

Invest in compliance tools that monitor AI systems.

Train staff, including senior leadership, on AI compliance and the benefits and risks of using AI.

Communicate actions taken for compliance with the relevant stakeholders.

Julian Mulhare, EMEA managing director at AI consulting firm Searce, told TechRepublic in an email: “With the EU AI Act starting this week, businesses need to understand their new obligations to remain compliant and avoid crippling fines.
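The first of those recommendations, an AI system inventory, can start as a structured record per system. A minimal sketch with hypothetical field names (any real inventory would need fields matched to the Act's actual documentation requirements):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry for an AI system in use (illustrative fields only)."""
    name: str
    vendor: str              # "internal" for in-house systems
    purpose: str
    jurisdictions: list[str] = field(default_factory=list)
    risk_notes: str = ""

inventory = [
    AISystemRecord("resume-screener", "Acme HR Tech", "CV triage", ["EU", "UK"]),
    AISystemRecord("support-bot", "internal", "customer chat", ["EU"]),
]

# Flag third-party systems deployed in the EU for vendor due diligence.
eu_third_party = [
    r.name for r in inventory
    if r.vendor != "internal" and "EU" in r.jurisdictions
]
```

Even a simple record like this makes the later steps, such as mapping jurisdictions to rules and briefing vendors, much easier to operationalise.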
“Compliance with copyright laws and transparency is crucial for both general-purpose AI systems, like chatbots, and generative AI models. Detailed technical documentation and clear summaries of training data, especially for GenAI models, will be necessary. To remain agile, companies need modular AI processes for easy updates — avoiding a complete overhaul. A dedicated team and budget for AI maintenance are essential here.
“As AI becomes increasingly integrated, it will impact all business areas. Investing in compliance infrastructure, enhancing documentation and transparency, and instilling robust cybersecurity measures will be imperative to mitigate financial risks and align with regulatory standards. Now, for the UK and Europe, this is the only way businesses can continue to leverage the benefits of AI while ensuring ethical standards are met.”
SEE: What is the EU’s AI Office? New Body Formed to Oversee the Rollout of General Purpose Models and AI Act
What do AI developers need to know? Developers of AI systems deemed to be high risk will have to meet certain obligations set by European lawmakers, including mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.”
AI models that are considered high-impact and pose a systemic risk — meaning they could cause widespread problems if things go wrong — must follow more stringent rules. Developers of these systems will be required to perform evaluations of their models, as well as “assess and mitigate systemic risks, conduct adversarial testing, report to the (European) Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.” Additionally, European citizens will have a right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights.
To support European startups in creating their own AI models, the AI Act also promotes regulatory sandboxes and real-world testing. These will be set up by national authorities to allow companies to develop and train their AI technologies before they’re introduced to the market “without undue pressure from industry giants controlling the value chain.”
“There is a lot to do and little time to do it,” said Forrester Principal Analyst Enza Iannopollo in an emailed statement. “Organisations must assemble their ‘AI compliance team’ to get started. Meeting the requirements effectively will require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite.”
What about ChatGPT and generative AI models? Providers of general-purpose AI systems must meet certain transparency requirements under the AI Act; this includes creating technical documentation, complying with European copyright laws and providing detailed information about the data used to train AI foundation models. The rule applies to models used for generative AI systems like OpenAI’s ChatGPT.
SEE: Microsoft Is Investing £2.5 Billion in Artificial Intelligence Technology and Training in the UK
How significant is the AI Act? Symbolically, the AI Act represents a pivotal moment for the AI industry. Despite its explosive growth in recent years, AI technology remains largely unregulated, leaving policymakers struggling to keep up with the pace of innovation.
The EU hopes that its AI rulebook will set a precedent for other countries to follow. Posting on X, European Commissioner Thierry Breton labelled the AI Act “a launchpad for EU startups and researchers to lead the global AI race,” while Dragos Tudorache, MEP and member of the Renew Europe Group, said the legislation would strengthen Europe’s ability to “innovate and lead in the field of AI” while protecting citizens.
What have been some challenges associated with the AI Act? The AI Act has been beset by delays that have eroded the EU’s position as a frontrunner in establishing comprehensive AI regulations. Most notable was the arrival and subsequent meteoric rise of ChatGPT in late 2022, which had not been factored into plans when the EU first set out its intention to regulate AI in Europe in April 2021.
As reported by Euractiv, this threw negotiations into disarray, with some countries expressing reluctance to include rules for foundation models on the basis that doing so could stymie innovation in Europe’s startup scene. In the meantime, the U.S., U.K. and G7 countries have all taken strides towards publishing AI guidelines.
SEE: UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety
Responses from tech organisations “I commend the EU for its leadership in passing comprehensive, smart AI legislation,” said Christina Montgomery, IBM vice president and chief privacy and trust officer, in a statement made by email. “The risk-based approach aligns with IBM’s commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems.”
Organisations like IBM have been preparing products that could help other firms comply with the AI Act, such as IBM’s watsonx.governance. KPMG launched the EU AI Hub in May 2024, a service that equips businesses with the tools and expertise they need to ensure their AI offerings are compliant with new regulations.
At a press briefing, Montgomery said companies need to “get serious” about AI governance.
“There will be an implementation period, but making sure you’re regulation-ready and being able to shift in a changing climate is key,” she said.
IBM has been the first client for its own AI governance tools, Montgomery said, preparing for regulations by fine-tuning those tools, creating a clear set of principles around AI trust and transparency and creating an AI ethics board.
Jean-Marc Leclerc, director and head of EU policy at IBM, said the AI Act will have influence across the globe, similar to GDPR. Leclerc framed the AI Act as positive for openness and competition between companies in the EU.
Salesforce EVP of government affairs Eric Loeb wrote, “We believe that by creating risk-based frameworks such as the EU AI Act, pushing for commitments to ethical and trustworthy AI, and convening multi-stakeholder groups, regulators can make a substantial positive impact. Salesforce applauds EU institutions for taking leadership in this domain.”
What are critics saying about the AI Act? In July 2024, Meta warned that the EU’s approach to regulating AI may result in its missing out on cutting-edge technological advancements.
Rob Sherman, the company’s deputy privacy officer and vice-president of policy, told the Financial Times: “If jurisdictions can’t regulate in a way that enables us to have clarity on what’s expected, then it’s going to be harder for us to offer the most advanced technologies in those places … it is a realistic outcome that we’re worried about.”
Mulhare told TechRepublic: “Given the pessimism around Europe’s AI regulatory measures, regulators must strive to continuously evolve and collaborate with tech experts to ensure safe, equitable and innovative AI deployment so that the EU doesn’t fall behind.”
Some privacy and human rights groups have argued that the AI Act doesn’t go far enough, accusing the EU lawmakers of delivering a watered-down version of what they originally promised.
Privacy rights group European Digital Rights labelled the AI Act a “high-level compromise” on “one of the most controversial digital legislations in EU history,” and suggested that gaps in the legislation threatened to undermine the rights of citizens.
The group was particularly critical of the Act’s limited ban on facial recognition and predictive policing, arguing that broad loopholes, unclear definitions and exemptions for certain authorities left AI systems open to potential misuse in surveillance and law enforcement.
In March 2024, European Digital Rights highlighted that the AI Act has “a parallel legal framework for the use of AI by law enforcement, migration and national security authorities,” suggesting this could be used to impose disproportionate surveillance technology on migrants.
Ella Jakubowska, senior policy advisor at European Digital Rights, said in a statement in December 2023:
“It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc. Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm. Our fight against biometric mass surveillance is set to continue.”
Amnesty International was also critical of the limited ban on AI facial recognition, saying it set “a devastating global precedent.”
Mher Hakobyan, advocacy advisor on artificial intelligence at Amnesty International, said in a statement in December 2023: “The three European institutions — Commission, Council and the Parliament — in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning artificial intelligence (AI) regulation.
“Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space and rule of law that are already under threat throughout the EU.”
A draft of the act leaked in January 2024, underscoring how quickly businesses would need to prepare to comply. Some political leaders also worry the act will hamper innovation and economic growth, a concern French President Emmanuel Macron voiced to the Financial Times in December 2023.