The Compliance Challenges of AI
Artificial Intelligence (AI) is revolutionizing financial services – from algorithmic trading and credit scoring to chatbots and fraud detection – but it has also caught the keen eye of regulators. As AI systems become deeply embedded in decision-making, authorities worldwide are grappling with how to ensure these technologies are used ethically, safely, and in compliance with existing laws. For compliance professionals, AI presents a double-edged sword: it offers new tools to improve compliance (through advanced analytics and automation) but also introduces novel risks around bias, transparency, and operational resilience. In this article, we examine how regulators are responding to AI, the challenges this poses for compliance, enforcement examples, and practical implications for firms deploying AI in finance.
Key Regulatory Developments
The EU’s AI Act – A Bold Comprehensive Law: The European Union has taken a landmark step by enacting the AI Act, the first major regulatory framework globally for artificial intelligence. Agreed in late 2023 and entering into force in August 2024, the AI Act establishes a risk-based approach to AI systems (European Commission, 2024). It categorizes AI uses into tiers: minimal risk (largely unregulated, e.g. spam filters), limited risk (subject to transparency requirements, e.g. chatbots must disclose they are AI), high risk (such as AI for credit scoring, fraud monitoring, or hiring decisions, which faces strict requirements on data quality, documentation, human oversight, etc.), and unacceptable risk, which is banned outright (for instance, social scoring of citizens or real-time biometric tracking for law enforcement) (European Commission, 2023). The Act applies across all 27 EU member states, with most provisions taking effect in 2025-2026 after transitional periods.
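The tiered structure can be sketched as a small lookup. The tiers and the example use cases below come from the Act's categorization as described above, but the mapping table and the `classify` helper are purely illustrative; a real classification rests on the Act's annexes and legal analysis, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict requirements (e.g. credit scoring)
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g. spam filters)

# Illustrative use-case labels mapped to tiers, following the Act's
# own examples; the label strings are invented for this sketch.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_biometric_tracking": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "fraud_monitoring": RiskTier.HIGH,
    "hiring_decisions": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; default to HIGH so an
    unknown use gets the strictest review rather than slipping through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown uses to the high-risk tier reflects a conservative compliance posture: a new AI use case should trigger review before deployment, not after.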
United Kingdom’s Principles-Based Path: The UK has chosen a different route. Rather than an umbrella AI law, the UK government’s 2023 AI Regulation White Paper advocated a sector-specific, principles-driven approach (UK Government, 2023). The idea is to leverage existing regulators (the FCA for finance, the ICO for data protection, and so on) to apply common principles – such as safety, transparency, fairness, and accountability – to AI within their domains. As such, there is no single “AI Act” in the UK; instead, regulators are issuing guidance on AI relevant to their remit. The UK approach touts flexibility and a “pro-innovation” stance, aiming not to stifle AI development. However, the approach is continually evolving – by late 2024, the UK had signaled it may introduce targeted legislation for the most powerful, general-purpose AI models (“frontier AI”) to address risks such as uncontrolled self-learning systems (White & Case LLP, 2025a).
United States – Patchwork and Enforcement-Focused: In the U.S., there is no single federal AI regulation yet, but multiple initiatives signal a growing regulatory appetite. Federal agencies have issued guidance and principles: for instance, the FTC has warned companies against using AI in misleading ways or perpetuating bias (FTC, 2023). The Consumer Financial Protection Bureau (CFPB) has emphasized that using AI for lending does not exempt lenders from fair lending laws. On the legislative side, discussions are ongoing in Congress about AI, but consensus is nascent. An important executive action came in October 2023, when President Biden signed a comprehensive Executive Order on AI. The Order directs federal agencies to set standards for AI safety and security, calls for frameworks to ensure AI algorithms are free from unlawful bias and do not endanger privacy, and explores requirements for developers to report on and evaluate their AI models (U.S. Department of Homeland Security, 2023).
Canada and Others – Proposed Laws and Guidelines: Canada has put forward the Artificial Intelligence and Data Act (AIDA) as part of a broader bill (Bill C-27) to regulate AI at the federal level (White & Case LLP, 2025b). AIDA would require AI system deployers to conduct impact assessments for “high-impact” AI systems, ensure mitigation of risks, and impose transparency about AI decisions. However, as of early 2025, the legislative process has been delayed (Government of Canada, 2022). In the meantime, Canadian regulators like OSFI (which oversees banks) have issued risk management expectations for AI, and Canada has established an advisory council on AI to guide safe adoption. Similarly, Singapore and Hong Kong have not passed AI-specific laws but rely on detailed guidance: Singapore’s MAS issued FEAT principles (Fairness, Ethics, Accountability, and Transparency) for the use of AI in financial services, urging firms to have governance frameworks around AI models. Singapore in 2022 also released an AI Governance Testing Framework (Veritas) to help firms verify that their AI systems are behaving as intended (MAS, 2022a).
Enforcement Case Studies
Given the relative novelty of AI regulation, explicit enforcement cases are still developing, but regulators have already used existing laws to address AI-related harms. One notable example occurred in the European Union: in 2023, Italy’s Data Protection Authority (Garante) temporarily banned the AI chatbot ChatGPT over privacy violations. The Garante found that ChatGPT had been processing personal data unlawfully (without proper notice or legal basis) and potentially exposing minors to inappropriate content. OpenAI, the chatbot’s provider, had to scramble to implement age checks and greater privacy disclosures to get the ban lifted (Reuters, 2023).
In the United States, enforcement related to AI has often come through the lens of discrimination or consumer harm. A prominent case involved the Department of Housing and Urban Development (HUD) and the Justice Department addressing algorithmic bias in mortgage lending. While specific companies were not publicly named, authorities have investigated whether certain lenders’ underwriting algorithms unintentionally discriminated against minority applicants, which would violate the Fair Housing Act. In one settlement, an online lender agreed to pay penalties and change its AI-driven underwriting model after a CFPB investigation found it was denying credit to qualified applicants in protected classes.
Industry Implications
For financial institutions, the rising regulatory focus on AI translates into several practical compliance imperatives. First, firms need robust governance frameworks for AI. This means establishing clear responsibility for AI oversight – many banks have created an “AI governance committee” or expanded the mandate of risk committees to cover AI. Senior management should approve significant AI uses and ensure a diverse group (compliance, IT, legal, business heads) evaluates risks at the design stage. An emerging best practice is to maintain an inventory of AI models in use (similar to model inventories maintained for model risk management) along with documentation of each model’s purpose, data inputs, intended outputs, and validation results. Regulators, especially under regimes like the EU AI Act, may ask to see such documentation.
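An inventory of this kind is, at heart, a structured record per model. The sketch below is a minimal illustration of that practice, assuming the documentation fields named above (purpose, data inputs, intended outputs, validation results); the field names and the `undocumented` check are invented for this example, not drawn from any regulatory template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelRecord:
    """One entry in an AI model inventory, mirroring the documentation
    items a regulator may ask to see."""
    model_id: str
    purpose: str
    data_inputs: List[str]
    intended_outputs: str
    validation_results: str
    owner: str  # accountable function, e.g. an AI governance committee

@dataclass
class ModelInventory:
    records: List[ModelRecord] = field(default_factory=list)

    def register(self, record: ModelRecord) -> None:
        self.records.append(record)

    def undocumented(self) -> List[str]:
        """Flag models lacking validation evidence, the kind of gap a
        regulator reviewing the inventory would ask about."""
        return [r.model_id for r in self.records if not r.validation_results]
```

In practice such an inventory would live in a governed system of record with approval workflows, but even a simple registry makes gaps visible before a supervisor does.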
Cross-Border Implications
Cross-border coordination is another challenge. A global bank might find its AI-based trading strategy permissible in one country but constrained in another. Global firms may therefore need to geofence certain AI functionalities – i.e., limit or adapt features when serving EU clients versus U.S. clients to comply with local laws. Compliance officers in multinational institutions increasingly share notes and align with the strictest applicable standard to simplify compliance management. As noted, many are choosing to voluntarily adopt principles from laws like the EU AI Act or the OECD AI Principles worldwide, to present a consistent face to regulators and customers.
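Operationally, geofencing AI features often reduces to a per-jurisdiction feature-flag table. The sketch below assumes invented capability names and an invented approval table; real entries would come from legal review, not code.

```python
# Hypothetical approval table: which AI capabilities a firm has cleared
# for each jurisdiction. Names and rules are illustrative only.
JURISDICTION_FEATURES = {
    "EU": {"chatbot_with_ai_disclosure", "aml_monitoring"},
    "US": {"chatbot_with_ai_disclosure", "aml_monitoring",
           "ai_trading_strategy"},
}

def is_enabled(feature: str, jurisdiction: str) -> bool:
    """Deny by default: a feature not explicitly approved for a
    jurisdiction stays off, matching the 'align with the strictest
    applicable standard' posture described above."""
    return feature in JURISDICTION_FEATURES.get(jurisdiction, set())
```

The deny-by-default design choice matters: a new market or a new AI feature is disabled until someone affirmatively approves it, which is the safer failure mode for compliance.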
On the Opportunity Side
AI can be harnessed for improved anti-money laundering monitoring (machine learning models that detect suspicious transaction patterns more effectively) or for regulatory change management (FINRA, 2024a). Compliance leaders should stay abreast of these developments – regulators generally encourage innovation that enhances compliance, provided it’s controlled. For instance, the FCA ran a “Digital Sandbox” to let firms experiment with AI for detecting fraud and authorized push payment scams. Embracing such tools could be a way to turn the regulatory radar on AI from a threat to an advantage.
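The core idea behind pattern-based transaction monitoring can be shown with a deliberately simple statistical sketch. Production AML models use far richer features (counterparties, velocity, geography) and trained classifiers; the z-score rule below is only a toy stand-in for the pattern-scoring concept.

```python
from statistics import mean, stdev
from typing import List

def flag_outliers(amounts: List[float], threshold: float = 3.0) -> List[int]:
    """Return indices of transactions whose amount deviates from the
    mean by more than `threshold` sample standard deviations. This is
    a toy illustration of anomaly scoring, not an AML model."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]
```

Even this toy version shows why such tools need governance: the threshold and the features chosen determine who gets flagged, which is exactly the fairness and transparency concern regulators raise.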
The Future of AI Regulation
From Voluntary to Mandatory: Many early AI governance efforts were voluntary guidelines (e.g., OECD AI Principles, industry ethics charters). Now, jurisdictions are codifying requirements – the EU’s binding AI Act is the clearest example, and others (Canada’s AIDA, China’s AI measures) are following suit (Holistic AI, 2023). Even in the U.S., where a comprehensive AI law remains absent, we see agencies issuing rules and Congress debating bills to establish oversight committees or algorithmic accountability obligations.
Harmonization vs. Fragmentation: A key question is whether AI governance will converge internationally or fragment. International bodies like the G7 and OECD are advocating for alignment in AI principles, and we see some common themes across regulations – emphasis on transparency, risk management, and protection of fundamental rights. There may be moves toward mutual recognition of AI standards or global certifications (European Commission, 2023).
Preparing for Uncertainty and Change: The AI regulatory environment will remain a moving target for a while. Laws like the EU AI Act will require detailed implementing standards that are still being developed. Regulators may also update rules as AI technology evolves – for instance, adding new high-risk categories or adjusting requirements in response to real-world incidents. Therefore, businesses should build agility into their compliance programs (Dentons, 2024).
Conclusion
AI’s transformative power comes with substantial compliance challenges, but the direction from regulators is becoming clearer: financial institutions must deploy AI responsibly or not at all. The days of “move fast and break things” are giving way to “move carefully and govern things.” Around the world, regulators are formulating rules and expectations that center on fairness, transparency, and accountability for AI, whether through broad legislation like the EU’s AI Act or through enforcement of existing laws.
References
Dentons (2024). “The current state of play for the regulation of AI in Australia in 2024.” Dentons Insight, April 26, 2024.
European Commission (2023). The EU Artificial Intelligence Act: A Risk-Based Approach to AI. Shaping Europe’s Digital Future – Policy Summary.
European Commission (2024). News Article: AI Act enters into force, 1 August.
FINRA (2024a). 2024 Annual Regulatory Oversight Report. Financial Industry Regulatory Authority, January 2024.
FINRA (2024b). Regulatory Notice 24-09: FINRA Reminds Members of Regulatory Obligations When Using Generative AI and LLMs. June 27, 2024.
FTC (2023). Atleson, S. “AI Regulation: An FTC Perspective.” Holland & Knight Webinar Discussion, November 2023.
Government of Canada (2022). Bill C-27: Digital Charter Implementation Act, 2022.
Holistic AI (2023). “Making Sense of China’s AI Regulations.” Holistic AI Blog, 2023.
MAS (2022a). FEAT Principles & Veritas Toolkit for Responsible AI. MAS.gov.sg.