A Curated Global Guide to AI Compliance: Navigating International AI Regulations

Artificial intelligence is rewriting the rules of technology development, from automating mundane tasks to driving innovations. But as AI systems grow more sophisticated, so do the challenges of ensuring they operate responsibly. Every line of code, every training dataset, and every decision made by an AI model now comes with an important question: Does it comply with the law?

As AI adoption accelerates, governments worldwide are introducing regulations to keep pace with this powerful technology. From Brazil to South Korea to the United States, these laws aim to manage risks, protect users, and promote transparency. For organizations, this creates a complex regulatory maze where staying compliant requires more than just understanding one law; it demands navigating overlapping and sometimes conflicting frameworks.

This guide offers a curated overview of key AI regulations shaping the global compliance landscape. While it doesn’t cover every single regulation out there, it’s designed to highlight the most impactful laws and provide actionable insights for businesses.

1. The Rise of AI Governance: Why It Matters

Artificial intelligence has gone from an experimental technology to a ubiquitous presence, shaping everything from how we interact online to life-changing decisions in healthcare, finance, and beyond. But this rapid adoption hasn’t been without its challenges. AI’s unchecked potential has raised ethical, legal, and societal concerns that are no longer hypothetical.

High-profile incidents have underscored the urgent need for governance. From algorithms that unintentionally discriminate in hiring to AI-driven surveillance systems that threaten privacy, these examples have sparked global conversations about the responsible use of AI. Trust is the cornerstone of technology adoption, and without accountability, AI’s promise can quickly turn into public distrust.

Governments are stepping up to address these challenges. The European Union has taken a leading role with its ambitious AI Act, introducing risk-based classifications and strict rules for high-impact AI systems. South Korea has implemented similar efforts through its Basic Act on AI, emphasizing trust and safety in critical AI applications. Even the United States, often considered hesitant on regulatory frameworks, is advancing state and federal laws to bring structure to AI development and deployment.

This wave of AI regulations matters because it sets the boundaries for innovation. Companies must now navigate a landscape where regulatory compliance is not just a checkbox; it’s a strategic imperative. Those who proactively adapt to these frameworks can gain a competitive advantage, establishing themselves as trusted players in an increasingly scrutinized market. On the other hand, organizations that overlook these developments risk financial penalties, reputational damage, or even being excluded from markets altogether.

As AI governance matures, it becomes clear that regulatory compliance isn’t just about following the law. It’s about ensuring ethical, transparent, and responsible AI that benefits both businesses and society.

The Challenges of Compliance Across Borders

For organizations operating across borders, AI compliance can get even more complicated. What may be considered compliant in one region can easily violate regulations in another. This growing patchwork of laws and frameworks places significant burdens on companies wanting to use AI responsibly while remaining competitive globally.

A Venn diagram showing challenges of AI compliance across borders, like conflicting requirements, cost of compliance, uncertainty in emerging laws and market access risks.

One of the biggest challenges lies in conflicting requirements. For example, the EU’s AI Act introduces a comprehensive risk classification system, requiring stringent impact assessments for high-risk applications. Meanwhile, Brazil’s Proposed AI Bill (PL 2338/2023) focuses more on transparency and prohibited uses, creating differences in implementation priorities. For companies working across these regions, ensuring compliance means reconciling varied obligations without compromising operational efficiency.

The cost of compliance also continues to rise. Frequent audits, detailed impact assessments, and documentation requirements demand significant investments in both time and resources. Smaller organizations often face a steeper hill to climb, as they may lack the internal expertise or budget to meet regulatory demands. Even larger companies find themselves allocating substantial resources to stay ahead of evolving rules.

Uncertainty compounds the issue. Many regulations, such as the US Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312) or Texas TRAIGA, are still in the proposal stage. Businesses are left guessing how these laws will evolve and how to future-proof their compliance strategies.

Navigating this regulatory maze requires a deep understanding of individual laws and a holistic approach to compliance—exactly what the Modulos AI Governance Platform is built for. Companies must think globally while acting locally, adapting their practices to meet regional requirements while maintaining principles of transparency, fairness, and accountability.

2. Key AI Regulations Around the World

As AI adoption grows, governments are stepping in to ensure this transformative technology is developed and deployed responsibly. Below is a curated list of some of the most impactful AI regulations shaping compliance today. While not exhaustive, these laws represent the diversity of approaches to governing AI across the globe, from risk classification frameworks to transparency mandates.

The EU AI Act

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence, following a risk-based approach to ensure safety, transparency, and accountability. Like GDPR did for data privacy, the EU AI Act is expected to shape global AI compliance standards.

Key Elements of the EU AI Act

  • Risk-Based Classification: AI systems are categorized into four levels—Unacceptable Risk (banned), High Risk (strictly regulated), Limited Risk (transparency obligations), and Minimal Risk (no specific requirements).
  • Compliance Requirements: High-risk AI systems must adhere to strict obligations, including risk management, transparency, human oversight, and data governance.
  • Penalties: Non-compliance can result in fines of up to €35 million or 7% of global turnover, whichever is higher.

The Act officially entered into force in August 2024, with phased implementation through 2027. Businesses operating in or interacting with the EU market must assess their AI systems and ensure compliance.
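
As a minimal sketch of how the risk-based classification above might be applied to an internal AI system inventory: the tier names mirror the Act’s four categories, while the example systems and their assigned tiers are purely hypothetical, not an assessment under the Act’s actual criteria.

```python
# Hypothetical sketch: tagging an AI system inventory with the EU AI Act's
# four risk tiers. The example systems and assigned tiers are illustrative;
# a real assessment would follow the Act's own criteria and Annexes.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strictly regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific requirements


@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier


inventory = [
    AISystem("cv-screener", "employment screening", RiskTier.HIGH),
    AISystem("support-chatbot", "customer service chat", RiskTier.LIMITED),
    AISystem("spam-filter", "email filtering", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value} risk ({system.use_case})")
```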

Learn More: Explore the full EU AI Act Guide here

Council of Europe Framework Convention on AI and Human Rights

Signed in September 2024 in Vilnius, this Convention establishes a comprehensive legal framework designed to ensure that all activities within the AI lifecycle adhere to fundamental human rights, democratic principles, and the rule of law. It applies to public authorities and private actors acting on their behalf, mandating that states implement graduated, context-specific measures to manage AI-related risks—ranging from potential discrimination and privacy breaches to threats against democratic processes.

Under the Convention, Parties are required to embed core principles such as transparency, accountability, and effective oversight throughout the design, development, deployment, and decommissioning of AI systems. This includes performing thorough risk and impact assessments, establishing accessible remedies for affected individuals, and ensuring continuous monitoring and international cooperation. By aligning domestic legal frameworks with internationally recognized human rights standards, the Convention not only sets a global benchmark for responsible AI governance but also complements national regulatory efforts aimed at fostering safe innovation.

Switzerland will adopt the Convention into national law.

Brazil’s Proposed AI Bill (PL 2338/2023)

Brazil’s Proposed AI Regulation Bill (PL 2338/2023) aims to establish a comprehensive framework for the ethical and responsible development, deployment, and use of AI systems. Introduced in 2023 and approved by the Senate, it is currently under review by the Chamber of Deputies and is expected to take effect 12 months after publication, giving organizations a one-year grace period to comply.

The bill is built on four key pillars: risk-based governance, human rights protection, transparency, and innovation. Its objective is to ensure that AI systems align with democratic values, safeguard user rights, and promote responsible technological advancement in Brazil.

Who Does PL 2338/2023 Apply To?

The regulation covers any organization or individual that develops, deploys, or benefits from AI systems within Brazilian territory. This includes AI suppliers (the developers) and operators (those deploying AI in real-world applications), spanning the public, private, and nonprofit sectors. While personal or nonprofessional use of AI is exempt, micro and small businesses may benefit from simplified compliance obligations, which will be detailed in future regulations.

What Does the Law Require?

Risk-Based Classification

PL 2338/2023 introduces a three-tiered risk framework:

  1. Excessive-Risk AI: Prohibited outright, these systems include applications such as subliminal manipulation, social scoring by public authorities, and exploitation of vulnerable individuals.
  2. High-Risk AI: Systems used in critical domains like healthcare, education, justice, and infrastructure fall under this category. They require strict oversight, including impact assessments and continuous monitoring throughout their lifecycle.
  3. Low-Risk AI: While subject to fewer restrictions, these systems must still adhere to principles of transparency, fairness, and accountability.

For high-risk systems, a mandatory AI Impact Assessment is required before deployment. This document evaluates potential harms, discrimination risks, and security vulnerabilities, ensuring proactive risk mitigation.
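
As a rough sketch of what an internal record for such an assessment could look like (the field names and format are assumptions; the bill specifies what must be evaluated, not a data format):

```python
# Hypothetical record for a pre-deployment AI Impact Assessment under a
# three-tiered framework like PL 2338/2023.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    system_name: str
    risk_tier: str                      # "excessive", "high", or "low"
    potential_harms: list[str]
    discrimination_risks: list[str]
    security_vulnerabilities: list[str]
    mitigations: list[str]
    assessed_on: date = field(default_factory=date.today)

    def ready_for_deployment(self) -> bool:
        if self.risk_tier == "excessive":
            return False  # prohibited outright
        if self.risk_tier == "high":
            # High-risk systems need documented mitigations before deployment.
            return bool(self.mitigations)
        return True


assessment = ImpactAssessment(
    system_name="credit-scoring-model",
    risk_tier="high",
    potential_harms=["wrongful credit denial"],
    discrimination_risks=["proxy discrimination via postal code"],
    security_vulnerabilities=["training data exposure"],
    mitigations=["fairness audit", "feature review", "access controls"],
)
print(assessment.ready_for_deployment())  # True
```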

Transparency and Accountability

Transparency is a cornerstone of the bill. Organizations must disclose when users are interacting with AI and provide clear explanations of how decisions are made. This includes detailing the logic, data categories, and methodologies used in AI outputs when requested. The law emphasizes that users have the right to understand the systems impacting their lives, particularly in high-stakes areas like hiring, lending, and public services.

Human Oversight

To maintain accountability, the bill mandates human oversight for high-risk AI applications. Operators must be able to intervene in or override AI decisions, particularly when those decisions significantly affect individual rights, such as in employment or education. This ensures that human judgment remains central in sensitive and high-stakes scenarios.

Data Privacy and Security

The legislation aligns closely with Brazil’s General Data Protection Law (LGPD), reinforcing protections around sensitive data and emphasizing principles like data minimization and purpose limitation. Operators are required to implement safeguards to prevent unlawful discrimination and misuse of personal information, ensuring that privacy is prioritized in all AI applications.

Enforcement and Penalties

To oversee compliance, a federal regulatory authority will be established. This body will have the power to issue warnings, impose fines, and even suspend AI-related activities for severe violations. Penalties can reach up to R$50 million per infraction or 2% of an organization’s annual revenue in Brazil. Importantly, the bill provides a one-year adaptation period after it becomes law, allowing businesses time to align their practices with the new regulations.

Encouraging Innovation

While the bill is strict on transparency and risk mitigation, it also fosters innovation through the introduction of regulatory sandboxes. These controlled environments allow organizations to test AI systems under real-world conditions without being fully subject to regulatory requirements, creating opportunities for safe experimentation and development.

What This Means for Your Business

For organizations operating in Brazil, PL 2338/2023 represents both a challenge and an opportunity.

Businesses will need to:

  • Evaluate their AI systems to determine their risk classification.
  • Develop robust documentation, including AI Impact Assessments for high-risk applications.
  • Build internal processes for monitoring, bias mitigation, and user transparency.
  • Prepare to comply with potential audits and enforcement actions once the law is enacted.

Proactive compliance not only minimizes legal risks but also builds trust with users and stakeholders, positioning organizations as leaders in responsible AI.

South Korea’s Basic Act on AI Advancement and Trust

South Korea’s Basic Act on AI Advancement and Trust, passed in November 2024, establishes a regulatory framework to foster responsible AI development while safeguarding public trust. This legislation represents a significant step in positioning South Korea as a leader in both technological innovation and ethical AI governance.

The law aims to promote safety, transparency, and fairness across all AI applications, with particular emphasis on high-impact systems and generative AI. Scheduled to take effect in late 2025, it introduces obligations for both domestic and foreign entities offering AI products and services within South Korea.

Who Does It Apply To?

The Basic Act has broad applicability, covering AI developers, providers, and users across all sectors—public, private, and nonprofit. Foreign companies that meet certain thresholds, such as having a substantial user base or revenue in South Korea, are also required to comply.

Exemptions are limited. AI applications purely developed for defense or national security purposes may be excluded, as detailed in forthcoming Presidential Decrees. The law does not explicitly exempt small businesses, although future regulations may clarify thresholds or provide partial exemptions.

Core Requirements of the Basic Act

1. Risk Assessments for High-Impact AI

High-impact AI systems, defined as those that significantly affect human safety, rights, or critical infrastructure, are subject to stringent risk management requirements. Organizations must:

  • Identify risks throughout the AI lifecycle, from design to deployment.
  • Document risk assessments, including details about safety measures, influencing factors, and mitigation strategies.
  • Submit these assessments to the Ministry of Science and ICT if computational thresholds are exceeded.

Examples of high-impact AI include systems used in energy management, healthcare, public services, biometric surveillance, and lending decisions.

2. Transparency Obligations

Transparency is a key principle of the Basic Act, particularly for high-impact and generative AI systems. Organizations must:

  • Notify users when interacting with AI, especially in critical areas like credit scoring or medical triage.
  • Label generative AI outputs, such as synthetic images, text, or videos, to ensure users are aware of AI-generated content.
  • Be prepared to explain how AI systems reach decisions, including information about datasets, algorithms, and influencing factors.

This emphasis on transparency builds trust with users and aligns with international trends in AI governance.
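
As an illustration of the labeling obligation above, a deployer might attach both a user-facing notice and machine-readable metadata to generative outputs; the wording and format here are assumptions, not language from the Basic Act.

```python
# Hypothetical sketch: labeling generative AI output before it reaches users.
from dataclasses import dataclass


@dataclass
class GeneratedContent:
    body: str
    model_name: str


AI_NOTICE = "Notice: this content was generated by an AI system ({model})."


def label_output(content: GeneratedContent) -> dict:
    """Bundle the output with a user-facing notice and a metadata flag."""
    return {
        "body": content.body,
        "notice": AI_NOTICE.format(model=content.model_name),
        "metadata": {"ai_generated": True, "model": content.model_name},
    }


labeled = label_output(GeneratedContent("Here is a summary...", "demo-llm-v1"))
print(labeled["notice"])
```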

3. Human Oversight

To mitigate risks, the law mandates human management and supervision for high-impact AI applications. Operators must be able to:

  • Override or halt AI outputs if they pose risks to human rights or safety.
  • Ensure significant human involvement in irreversible or high-stakes decisions, such as hiring, justice, or healthcare-related matters.

This requirement ensures accountability and prevents over-reliance on automated decision-making.

4. Data Privacy and Security

Although the Basic Act does not introduce new personal data provisions, it references compliance with existing Korean laws such as the Personal Information Protection Act (PIPA). Organizations are encouraged to adopt privacy-by-design principles, ensuring data minimization, lawful processing, and secure handling of sensitive information.

The government may issue additional guidelines to address privacy concerns specific to AI systems, ensuring alignment with evolving international standards.

Enforcement and Penalties

The Ministry of Science and ICT is the primary enforcement body, though other government agencies may get involved in cases of data privacy violations.

Penalties include:

  • Fines up to KRW 30 million (approximately USD 25,000) for failing to meet labeling or transparency obligations.
  • Up to three years imprisonment or additional fines for leaking confidential information.
  • Corrective orders may be issued, requiring organizations to rectify noncompliance within a specified timeframe.

In practice, Korean regulators prioritize corrective actions over punitive measures, often using public disclosure of breaches to encourage compliance.

What This Means for Your Business

The Basic Act on AI Advancement and Trust places significant obligations on organizations, but it also opens opportunities for businesses to differentiate themselves through ethical and transparent AI practices.

Here’s what businesses should focus on:

  1. Evaluate Your AI Systems: Determine if your applications qualify as high-impact or generative AI and map your compliance gaps.
  2. Document Everything: Maintain detailed records of risk assessments, safety measures, and transparency disclosures.
  3. Embed Human Oversight: Ensure robust human involvement in critical AI workflows to meet supervisory requirements.
  4. Prepare for Inspections: Be ready to submit compliance records and respond to audits from the Ministry of Science and ICT.

California’s Generative AI Training Data Transparency Act (AB 2013)

California’s Generative AI Training Data Transparency Act (AB 2013) sets a precedent as the first law in the United States to mandate disclosure of training data for generative AI systems. Signed into law on September 28, 2024, it aims to promote transparency in AI development, safeguard personal information, and empower users with a greater understanding of how AI outputs are created. This groundbreaking legislation will take effect on January 1, 2026, applying retroactively to AI systems made available to Californians since January 1, 2022.

By enforcing public disclosure of training datasets, California seeks to establish trust in generative AI systems while holding developers accountable for their choices in data sourcing and processing.

Who Does AB 2013 Apply To?

AB 2013 applies broadly to individuals, corporations, partnerships, and government entities that develop, modify, or provide generative AI systems accessible to the public in California. This includes AI systems that generate text, images, videos, or other synthetic content based on training data.

Developers and companies outside California must also comply if their AI systems are accessible to California residents, regardless of whether the systems are free or subscription-based.

Exemptions

Certain use cases and entities are exempt:

  • AI systems developed exclusively for security and integrity purposes.
  • AI systems used for operating aircraft in national airspace.
  • AI systems built for national security, defense, or military purposes, provided they are used solely by federal entities and not offered publicly.

An infographic featuring exemptions to AB 2013 regulation

What Does AB 2013 Require?

1. Training Data Disclosure

At the heart of AB 2013 is the requirement for developers to publicly disclose detailed documentation about the datasets used to train their generative AI systems. This documentation must be published on a public website prior to January 1, 2026, and updated with each new release or substantial modification of the system.

The documentation must include:

  • High-level dataset summaries, including sources, owners, size, and data types.
  • Copyright and ownership status: Whether the data is copyrighted, public domain, or protected by other intellectual property laws.
  • Personal information content: An indication of whether personal data is included, along with any data-cleaning or anonymization processes.
  • Dates of collection and first use of datasets.
  • Information on whether synthetic data was used, along with an explanation of its purpose.

An infographic visualising what the documentation for AB 2013 must include

This transparency empowers users and stakeholders to understand the foundations of AI-generated outputs, addressing concerns about biases, copyright infringement, and privacy violations.
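
As a sketch of what such documentation might look like in practice, the record below covers the disclosure fields listed above; the field names and JSON layout are assumptions, since AB 2013 specifies what must be disclosed rather than a file format.

```python
# Hypothetical training-data disclosure record for a generative AI system.
import json
from dataclasses import asdict, dataclass


@dataclass
class DatasetDisclosure:
    name: str
    sources: list[str]
    owner: str
    size_description: str
    data_types: list[str]
    copyright_status: str             # e.g. "copyrighted", "public domain"
    contains_personal_info: bool
    cleaning_or_anonymization: str
    collection_period: str
    first_used: str                   # date the dataset was first used in training
    synthetic_data_used: bool
    synthetic_data_purpose: str = ""


disclosure = DatasetDisclosure(
    name="example-web-corpus",
    sources=["publicly crawled web pages"],
    owner="Example Corp (hypothetical)",
    size_description="~2 TB of text",
    data_types=["text"],
    copyright_status="mixed; includes copyrighted material",
    contains_personal_info=True,
    cleaning_or_anonymization="PII filtering and deduplication",
    collection_period="2021-2023",
    first_used="2023-06-01",
    synthetic_data_used=False,
)

# Rendered as JSON for publication on the public documentation page.
print(json.dumps(asdict(disclosure), indent=2))
```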

2. Transparency to End Users

AB 2013 emphasizes clear communication with end users by requiring developers to:

  • Provide accessible explanations of training data sources and methodologies.
  • Inform users of the system’s generative nature, particularly in cases where AI outputs closely resemble or emulate human-created content.

While the law does not mandate direct human oversight in the decision-making process, developers are encouraged to ensure accuracy in their disclosures by implementing robust documentation and review practices.

3. Data Privacy Alignment

Although AB 2013 does not introduce new privacy-specific obligations, it intersects with other California privacy laws, including the California Consumer Privacy Act (CCPA). Developers must ensure that personal data within training datasets adheres to existing privacy regulations, emphasizing lawful collection, data minimization, and protection against unauthorized disclosure.

Enforcement and Penalties

AB 2013 integrates into California’s Civil Code, allowing for enforcement by the California Attorney General. Private civil actions may also be brought under consumer protection statutes, such as the Unfair Competition Law, if noncompliance results in harm.

Key Points on Enforcement:
  • No specified statutory fines: While the law does not establish explicit penalties, violations may lead to injunctive relief or lawsuits, particularly if lack of transparency is deemed an unfair or unlawful business practice.
  • Grace period: Developers have until January 1, 2026, to align with the requirements, offering time to audit and document their training datasets.

Challenges and Opportunities for Businesses

Compliance with AB 2013 presents both challenges and opportunities:

Challenges:

  • The law places the burden on developers to meticulously track, audit, and disclose their training data. For companies with extensive datasets, this may involve significant resource allocation.
  • Potential overlap with other laws, such as copyright and privacy regulations, adds complexity to compliance efforts.

Opportunities:

  • Demonstrating transparency can build trust with users and stakeholders, providing a competitive edge in an increasingly scrutinized market.
  • Companies that embrace compliance early can position themselves as industry leaders, setting benchmarks for ethical AI development.

What Should Businesses Do Next?

  • Step 1: Conduct a Training Data Audit
    Identify and categorize all datasets used for developing generative AI systems. Assess for:

    • Copyrighted material.
    • Personal data inclusion.
    • Synthetic data usage.
  • Step 2: Establish Robust Documentation Practices
    Implement tools and processes to track changes in datasets and automatically generate documentation for new releases or updates.
  • Step 3: Educate Key Teams
    Ensure that data scientists, product managers, and legal teams are trained on AB 2013 requirements, particularly around transparency and disclosure.
  • Step 4: Leverage Technology for Compliance
    Invest in compliance platforms or governance tools to automate documentation, manage version control, and streamline compliance tracking.

Colorado Senate Bill 24-205: Consumer Protections for AI

Colorado Senate Bill 24-205 is a landmark regulation aimed at protecting residents from algorithmic discrimination in high-risk AI systems. Signed into law and set to take effect on February 1, 2026, the bill requires developers and deployers of AI systems to prioritize transparency, risk management, and consumer rights. By addressing AI’s impact in critical areas such as employment, housing, and finance, the law ensures that AI systems are used ethically and without bias in decisions that significantly affect individuals’ lives.

This regulation is particularly relevant for businesses operating in Colorado or providing AI-driven services to its residents. It introduces clear responsibilities for both developers (who create or modify AI systems) and deployers (who use these systems in their operations), emphasizing fairness and accountability.

Who Is Covered by SB 24-205?

The law applies to:

  • Developers: Entities or individuals that create or substantially modify high-risk AI systems.
  • Deployers: Organizations using high-risk AI systems in critical applications, such as hiring, loan approvals, or healthcare decisions.

Exemptions

SB 24-205 provides certain exemptions to reduce the burden on smaller entities and federally regulated organizations:

  • Small businesses: Companies with fewer than 50 full-time employees are exempt if they do not train the AI using proprietary data and only use it as intended.
  • Federally regulated entities: Businesses already under strict federal AI or anti-discrimination laws (e.g., FDA-regulated systems) are exempt.
  • Specific use cases: AI systems developed exclusively for research, security, or national defense purposes may also be exempt.

An infographic featuring exemptions to SB 24-205 regulation

What Defines a High-Risk AI System?

A high-risk AI system is any machine-based system that significantly influences decisions in areas such as:

  • Employment: Automated resume screening or hiring decisions.
  • Education: AI-driven admissions or scholarship allocations.
  • Finance: Credit scoring and loan approvals.
  • Healthcare: Medical diagnoses or triage.
  • Housing: Tenant selection or mortgage approvals.

These systems must meet specific criteria to be considered high-risk, focusing on their potential to affect individuals’ fundamental rights or opportunities.

Main Requirements for Compliance

1. Risk Assessment and Governance

SB 24-205 mandates both developers and deployers to implement robust risk management practices:

  • Developers: Must exercise “reasonable care” to identify and mitigate foreseeable risks, particularly those related to algorithmic discrimination.
  • Deployers: Must adopt formal risk management policies, such as the NIST AI RMF or ISO/IEC 42001, to address the lifecycle risks of high-risk AI systems.

2. Documentation and Reporting

Documentation is a cornerstone of compliance under SB 24-205:

  • Developers: Must provide detailed documentation on the AI system’s:
    • Purpose and intended use.
    • Data sources and limitations.
    • Known risks and mitigation strategies.
    • Instructions for deployers to conduct impact assessments.
  • Deployers: Must complete an AI Impact Assessment before deploying high-risk AI systems and update it annually or after major modifications. These assessments should evaluate:
    • Potential risks of algorithmic discrimination.
    • Data integrity and biases.
    • Mitigation strategies.

3. Human Oversight

SB 24-205 emphasizes the importance of maintaining human control over high-risk AI systems:

  • Deployers must provide consumers with an appeals process or human review for decisions affecting their rights (e.g., loan rejections or denied job applications).
  • Operators must ensure that human intervention can override AI decisions in sensitive scenarios.

4. Transparency to End Users

The law requires deployers to disclose the use of AI in consequential decisions:

  • Consumers must be informed when a high-risk AI system has been used to evaluate them.
  • For adverse decisions, deployers must provide:
    • The principal reasons for the outcome.
    • Information on how consumers can correct inaccuracies or appeal decisions.
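
As a minimal sketch of an adverse-decision notice covering these points (the wording, field names, and appeal window are hypothetical, not statutory text):

```python
# Hypothetical adverse-decision notice: principal reasons plus an appeal path.
from dataclasses import dataclass


@dataclass
class AdverseDecisionNotice:
    decision: str
    principal_reasons: list[str]
    correction_and_appeal: str

    def render(self) -> str:
        reasons = "; ".join(self.principal_reasons)
        return (
            f"A high-risk AI system was used in this decision ({self.decision}). "
            f"Principal reasons: {reasons}. "
            f"To correct inaccuracies or appeal: {self.correction_and_appeal}"
        )


notice = AdverseDecisionNotice(
    decision="loan application declined",
    principal_reasons=["insufficient credit history", "high debt-to-income ratio"],
    correction_and_appeal="submit a review request via the lender's appeals form.",
)
print(notice.render())
```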

5. Data Handling and Privacy

While SB 24-205 defers to existing privacy laws like the Colorado Privacy Act, it reinforces the importance of protecting sensitive data and avoiding unauthorized disclosure. Developers and deployers are encouraged to implement strong data governance practices, ensuring that confidential information remains secure.

Enforcement and Penalties

Enforcement is managed exclusively by the Colorado Attorney General, with no private right of action available.

Penalties
  • Violations are classified as unfair or deceptive trade practices, allowing civil penalties under the Colorado Consumer Protection Act.
  • Businesses can avoid penalties by:
    • Conducting regular “red teaming” or legitimate testing to identify and address risks.
    • Adhering to recognized frameworks like the NIST AI RMF for risk mitigation.

Challenges and Opportunities for Businesses

Challenges
  • Compliance costs: Completing impact assessments and maintaining documentation may require significant resources.
  • Uncertainty: Ongoing rulemaking by the Colorado Attorney General could introduce additional requirements or clarifications.

Opportunities
  • Building trust: Proactive compliance with SB 24-205 demonstrates a commitment to fairness and transparency, enhancing reputation among consumers and stakeholders.
  • Competitive advantage: Companies adopting best practices early can establish themselves as leaders in ethical AI use.

How Can Businesses Prepare for SB 24-205?

  • Step 1: Audit AI Systems
    Identify all high-risk AI systems in use, particularly those involved in consequential decisions.
  • Step 2: Adopt Risk Frameworks
    Implement standards such as the NIST AI RMF to guide risk management practices.
  • Step 3: Conduct Impact Assessments
    Ensure AI Impact Assessments are completed before deploying systems and updated regularly.
  • Step 4: Train Key Teams
    Educate employees on SB 24-205 requirements, particularly around risk management, transparency, and documentation.
  • Step 5: Leverage Technology
    Use compliance platforms to automate documentation, track impact assessments, and manage disclosure requirements.

US Artificial Intelligence Research, Innovation, and Accountability Act (S. 3312)

The Artificial Intelligence Research, Innovation, and Accountability Act of 2024 (S. 3312) seeks to establish a federal regulatory framework for AI systems in the United States. This proposed legislation introduces new compliance requirements for generative AI, high-impact systems, and critical-impact systems to enhance transparency, accountability, and risk management. Although not yet enacted, it signals a growing push for structured AI governance across industries.

The bill aims to strike a balance between fostering innovation and ensuring AI systems operate responsibly, particularly in applications that directly impact individuals’ rights, safety, or public trust.

Who Does S. 3312 Apply To?

If passed, the bill will apply broadly to organizations that develop or deploy AI systems within the U.S. It targets two main categories of entities:

  • Developers: Those who design, train, or build AI systems, including generative AI models.
  • Deployers: Organizations that integrate or use AI for high-impact or critical applications.

The law will also require federal agencies to develop sector-specific oversight frameworks, adding further structure to AI governance.

Exemptions
  • AI systems developed for military or intelligence purposes are largely exempt unless used for civilian applications.
  • Small businesses are not explicitly excluded, though obligations are tied to the definition of high-impact or critical-impact AI systems.

An infographic featuring exemptions to S. 3312 regulation

What Does the Law Require?

Risk Classification and Governance

S. 3312 introduces two key risk categories:

  1. High-Impact AI Systems:
    • Defined as systems influencing decisions in sensitive areas such as housing, education, healthcare, and credit.
    • Deployers must conduct ongoing risk management aligned with recognized frameworks like NIST’s AI Risk Management Framework (RMF).
    • Annual transparency reports are required, detailing the AI system’s purpose, data usage, and mitigation strategies.
  2. Critical-Impact AI Systems:
    • Includes applications in biometric surveillance, critical infrastructure, and criminal justice, where risks to constitutional rights or safety are significant.
    • Deployers must submit risk management assessments 30 days before deployment and every two years thereafter.
    • Compliance with Testing, Evaluation, Validation, and Verification (TEVV) standards is mandatory.
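
As a rough sketch of what these reporting cadences imply operationally (the helper and dates are illustrative only, and the bill’s final requirements may differ):

```python
# Hypothetical helper: filing dates implied by the cadences described above.
from datetime import date, timedelta


def next_filing_dates(category: str, deployment: date) -> list[date]:
    """Return the first few filing dates for a given system category."""
    if category == "high-impact":
        # Annual transparency reports after deployment.
        return [deployment.replace(year=deployment.year + n) for n in (1, 2, 3)]
    if category == "critical-impact":
        # Risk management assessment 30 days before deployment,
        # then every two years thereafter.
        first = deployment - timedelta(days=30)
        return [first, first.replace(year=first.year + 2),
                first.replace(year=first.year + 4)]
    return []


print(next_filing_dates("critical-impact", date(2026, 6, 1)))
# [datetime.date(2026, 5, 2), datetime.date(2028, 5, 2), datetime.date(2030, 5, 2)]
```
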
Transparency Obligations

Transparency is a cornerstone of S. 3312, particularly for generative and high-impact AI systems.

  • Generative AI:
    • Platforms must label AI-generated content, such as text, images, or videos, to clearly distinguish it from human-created outputs.
    • Notifications must be “clear and conspicuous” for users.
  • High-Impact and Critical-Impact AI:
    • Developers and deployers may need to disclose training data sources, methodologies, and known system limitations.
    • Risk assessment results must be accessible to regulators and, in some cases, the public.

Human Oversight

While not universally mandated, S. 3312 encourages human oversight for decisions made by AI systems:

  • Operators of high-risk AI should include mechanisms for human review or intervention, especially when decisions significantly impact individual rights or safety.
  • Federal agencies are expected to develop guidelines detailing oversight requirements for specific use cases.

Data Privacy and Security

The bill emphasizes protecting personally identifiable information (PII) in AI training data, requiring deployers to mitigate risks of unauthorized disclosure.

  • Privacy-enhancing techniques, such as data minimization, are encouraged but not explicitly mandated.
  • Developers must document the data sources and methods used during model training to support transparency and privacy efforts.

Enforcement and Penalties

Oversight and Enforcement

The bill designates the Secretary of Commerce as the primary enforcement authority, with the Attorney General authorized to initiate civil actions in cases of noncompliance.

Penalties
  • Civil fines of up to $300,000 or twice the value of the AI system involved in noncompliance.
  • Intentional violations could result in bans on deploying specific high- or critical-impact systems.
  • A 15-day grace period is provided for certain noncompliance corrections before penalties are applied.

What This Means for Your Business

If enacted, S. 3312 will introduce significant compliance requirements for AI systems. Organizations operating in the U.S. should start preparing now by:

  1. Auditing AI Systems: Identify applications that fall under high- or critical-impact categories and map their compliance gaps.
  2. Developing Risk Frameworks: Adopt frameworks such as NIST AI RMF to ensure consistent risk management and documentation.
  3. Labeling Generative Content: Implement systems to notify users when content is AI-generated.
  4. Training Teams: Educate staff on the documentation, transparency, and risk management requirements outlined in the bill.
  5. Leveraging Technology: Use compliance platforms to automate documentation, risk assessments, and reporting workflows.

NIST AI Risk Management Framework (NIST AI RMF)

The NIST AI Risk Management Framework (AI RMF 1.0), developed by the National Institute of Standards and Technology (NIST), provides organizations with a structured approach to identifying, assessing, managing, and monitoring AI risks.

Why NIST AI RMF Matters

  • Voluntary but Influential: While not a law, NIST AI RMF is widely adopted by businesses and governments to ensure trustworthy, responsible AI development.
  • Risk-Centered Approach: It helps organizations manage AI risks throughout the entire AI lifecycle, from design to deployment.
  • Global Impact: Although a US-based framework, NIST AI RMF is recognized worldwide and aligns with many AI regulations, including the EU AI Act and ISO 42001.

Key Components of NIST AI RMF

  • Govern: Establish AI governance policies and ensure compliance.
  • Map: Identify and document AI risks.
  • Measure: Develop metrics to assess AI risks.
  • Manage: Implement strategies to mitigate and monitor risks.
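
A minimal sketch of how these four functions can structure a governance backlog (the task entries are hypothetical; the framework does not prescribe specific tasks or tooling):

```python
# Hypothetical governance backlog organized by the four NIST AI RMF functions.
RMF_PLAN = {
    "Govern": ["approve an AI governance policy", "assign accountable owners"],
    "Map": ["inventory AI systems", "document intended use and context"],
    "Measure": ["define fairness and robustness metrics", "run evaluations"],
    "Manage": ["prioritize and mitigate identified risks", "monitor deployed systems"],
}

for function, tasks in RMF_PLAN.items():
    print(f"{function}:")
    for task in tasks:
        print(f"  - {task}")
```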

Learn More: Explore the full NIST AI RMF Guide here

Texas Responsible AI Governance Act (TRAIGA)

The Texas Responsible AI Governance Act (TRAIGA) is a forward-looking regulatory framework designed to govern the development, deployment, and use of artificial intelligence systems in Texas. With an effective date of September 1, 2025, this legislation introduces strict requirements for high-risk AI systems, focusing on transparency, risk management, and consumer protection while fostering innovation through a regulatory sandbox.

TRAIGA reflects Texas’s commitment to balancing responsible AI governance with business-friendly policies, ensuring organizations can innovate while protecting users from potential risks.

Who Does TRAIGA Apply To?

TRAIGA applies to mid-sized and large organizations conducting business in Texas, specifically those developing, distributing, or deploying AI systems that influence significant, consequential decisions.

Key Groups Covered
  • Developers: Those building or substantially modifying AI systems.
  • Distributors: Organizations bringing AI systems to market, even if they don’t develop the technology themselves.
  • Deployers: Businesses using high-risk AI systems to make decisions affecting consumers in areas such as healthcare, finance, education, or housing.

Exemptions

  • Small Businesses: Organizations falling below the U.S. Small Business Administration (SBA) thresholds are exempt unless they use their own proprietary data to train high-risk AI models.
  • Research and Development Use Cases: Companies operating within TRAIGA’s 36-month regulatory sandbox may enjoy temporary relief from full compliance.

An infographic highlighting small businesses and research & development use cases as exemptions to TRAIGA regulation

What Does the Law Require?

Risk Management and Assessments

Organizations working with high-risk AI systems must implement robust risk management processes, including:

  • Semiannual Risk Impact Assessments: These evaluations identify potential biases, discrimination risks, and operational vulnerabilities in high-risk AI systems.
  • Risk Management Policies: Both developers and deployers must establish formal policies aligned with recognized frameworks such as the NIST AI Risk Management Framework.

Transparency and Accountability

TRAIGA places a strong emphasis on ensuring users are informed about AI’s role in consequential decisions:

  • Consumer Disclosures: Users must be notified when interacting with AI systems. Organizations must clearly explain:
    • The purpose of the AI system.
    • The factors influencing decisions made by the AI.
    • Processes for appealing or correcting adverse decisions.
  • High-Risk Reports: Developers are required to document and disclose:
    • The intended use of the AI system.
    • Data sources, known limitations, and measures taken to mitigate risks.

Human Oversight

TRAIGA mandates human supervision for high-risk AI systems to ensure accountability in decision-making processes:

  • Human Supervisors: Deployers must assign qualified personnel to oversee critical decisions made by AI.
  • Override Mechanisms: Operators must be able to intervene or override AI outputs, particularly in sensitive areas such as employment or housing.

Data Privacy and Security

TRAIGA incorporates strict controls to prevent misuse of sensitive data:

  • Consent for Biometric Data: Organizations are prohibited from capturing or using biometric data without explicit consent.
  • Limits on Sensitive Data Use: Inference of attributes such as emotions or protected characteristics is restricted unless users provide express consent.
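
As an illustrative sketch of enforcing these consent limits in application code (the flag names and guard logic are assumptions, not requirements taken from TRAIGA):

```python
# Hypothetical guard: block biometric capture or sensitive-attribute inference
# unless the user has given explicit consent, in the spirit of the limits above.
from dataclasses import dataclass


@dataclass
class UserConsent:
    biometric_capture: bool = False
    sensitive_inference: bool = False


def check_allowed(operation: str, consent: UserConsent) -> bool:
    """Return True only if the user has explicitly consented to the operation."""
    if operation == "biometric_capture":
        return consent.biometric_capture
    if operation == "sensitive_inference":   # e.g. emotions, protected traits
        return consent.sensitive_inference
    return True  # other operations fall outside this guard


consent = UserConsent(biometric_capture=False, sensitive_inference=False)
print(check_allowed("biometric_capture", consent))  # False -> must not proceed
```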

Enforcement and Penalties

Oversight and Enforcement

The Texas Attorney General is responsible for enforcing TRAIGA and ensuring compliance through audits, investigations, and penalties.

Penalties
  • Fines for Violations:
    • Up to $100,000 per violation for prohibited uses, such as manipulative AI or algorithmic discrimination.
    • Daily fines range from $1,000 to $20,000 for continued noncompliance.
  • Opportunity to Cure: Noncompliant organizations typically have 30 days to address violations unless they involve outright banned activities, such as social scoring.

Encouraging Innovation: The Regulatory Sandbox

TRAIGA includes a 36-month regulatory sandbox, allowing companies to test AI technologies in a controlled environment with relaxed compliance requirements. This provision fosters innovation while enabling regulators to study emerging risks and develop tailored guidelines.

What This Means for Your Business

Organizations affected by TRAIGA must act now to prepare for the September 2025 compliance deadline. Key steps include:

  1. Conducting an Internal Audit: Inventory AI systems to determine whether they meet the “high-risk” threshold and map out their risk classifications.
  2. Implementing Risk Management Frameworks: Adopt recognized standards, such as NIST AI RMF, to formalize governance processes and risk assessments.
  3. Developing Consumer Disclosures: Build user-facing transparency notices and train teams to handle appeals and queries regarding AI decisions.
  4. Leveraging the Sandbox: If developing novel AI technologies, apply for the sandbox program to test features with reduced regulatory obligations.
  5. Preparing for Inspections: Maintain documentation of high-risk reports, impact assessments, and mitigation strategies for audits by the Texas Attorney General.

3. A Side-by-Side Look at Global AI Regulations

To help organizations better understand the complexities of global AI compliance, we’ve compiled a comparison table featuring key elements of the most prominent AI regulations. This structured comparison provides a quick and actionable reference to help you navigate the global AI regulatory landscape and align your strategies for compliance wherever your operations are based.

AI regulations comparison table

4. Common Threads in AI Regulation

As the laws and regulations above show, global AI governance frameworks share several common principles despite their regional differences. These shared threads reflect a growing consensus on the need for transparency, risk management, human oversight, and data privacy. Let’s break down these recurring themes and how they shape the regulatory landscape.

Transparency

Transparency is a cornerstone of nearly every AI regulation discussed. Governments recognize that users have a right to know when they are interacting with AI and how decisions impacting them are made.

  • User Disclosures: Laws such as California’s Generative AI Training Data Transparency Act (AB 2013) and South Korea’s Basic Act require clear notifications when AI is involved. Generative AI outputs must often be labeled as such, helping users distinguish AI-generated content from human-created work.
  • Decision-Making Logic: Regulations like Colorado’s Senate Bill 24-205 and the EU AI Act take transparency further by requiring explanations of decision-making logic. This includes providing insights into the data sources, algorithms, and factors influencing outcomes.

Risk Management

A proactive approach to risk is at the heart of many AI laws. By identifying and mitigating risks, governments aim to prevent harm before it occurs.

  • High-Risk Classifications: Brazil’s AI Bill and the EU AI Act classify AI systems based on their potential risks, imposing stricter rules on high-risk applications like healthcare, education, and justice systems.
  • Impact Assessments: The US AI Accountability Act (S. 3312) and Texas TRAIGA mandate detailed assessments to identify potential harms, such as algorithmic bias or safety concerns, particularly for high-impact and critical-impact systems.

Human Oversight

Even the most advanced AI systems must be subject to human judgment to ensure ethical and accountable outcomes.

  • Human-in-the-Loop Requirements: South Korea and Colorado explicitly require human oversight for high-risk AI systems, ensuring operators can intervene when decisions significantly affect individual rights or safety.
  • Accountability: These frameworks emphasize human responsibility, ensuring that automated processes do not replace the accountability of decision-makers.

Data Privacy

Data protection remains a crucial element of AI governance, with many regulations aligning their requirements with broader privacy laws.

  • Privacy Integration: Brazil’s AI Bill reinforces its alignment with the LGPD, while California’s laws overlap with the CCPA. Similarly, the EU AI Act works alongside GDPR to ensure personal data is handled responsibly.
  • Minimization and Safeguards: Across the board, laws emphasize data minimization, lawful processing, and robust security measures to prevent unauthorized access or misuse of sensitive information.

These common threads highlight the universal priorities shaping AI regulation: protecting users, ensuring accountability, and fostering innovation responsibly.

A Venn diagram showing overlapping principles like transparency, risk management, human oversight and privacy across regulations.

5. The Role of Technology in Simplifying Compliance

Finally, let’s talk about how technology simplifies the challenge of AI compliance. With the increasing complexity of global regulations, relying solely on manual processes is no longer sustainable. Technology, especially solutions like the Modulos AI Governance Platform, provides the tools to manage risk, ensure transparency, and maintain accountability, all while staying adaptable to evolving laws.

How Modulos Aligns with Major Regulations

Modulos AI Governance Platform takes compliance to the next level by combining advanced AI governance capabilities with an intuitive platform designed to make compliance simple, scalable, and efficient.

Here’s how Modulos helps your organization align with the most critical aspects of global AI regulations:

A Venn diagram explaining how Modulos simplifies AI compliance through transparency, human oversight, compliance by design, and regulatory updates.

Built-In Transparency & Data Disclosure

Transparency is no longer optional; it’s a regulatory requirement. Modulos simplifies transparency compliance by centralizing your audit trails, data lineage, and system documentation. This ensures you can clearly demonstrate how your models were built, what data was used, and how decisions are made. Automated reporting tools make it effortless to meet disclosure obligations under laws like California’s AB 2013 or the EU AI Act.

Human Oversight & Accountability

Accountability is essential for ensuring AI operates ethically and responsibly. Modulos integrates human oversight at every stage of the AI lifecycle, from design to deployment. By creating structured workflows for human review, the platform ensures auditable decision-making chains, helping you stay compliant with regulations like South Korea’s Basic Act and Colorado’s AI law.

AI Compliance by Design

Modulos embeds compliance into every stage of AI development. From initial data gathering to model deployment, the platform incorporates regulatory requirements from the outset. This reduces the risk of non-compliance later and streamlines audits, saving time and resources.

Always Up to Date with Emerging Rules

The regulatory landscape is constantly evolving, but Modulos keeps you one step ahead. The platform continuously monitors updates to global AI regulations and automatically adjusts controls to reflect the latest requirements. Proactive notifications ensure you never miss critical changes, whether it’s a new reporting obligation or a shift in risk management standards.

Conclusion

Global AI regulations are not just a passing trend. They’re here to stay, and their complexity will only deepen as technology evolves. While the current landscape may seem fragmented, efforts to harmonize these laws through global standards like ISO frameworks or potential UN AI codes are already gaining traction.

Emerging trends, such as the governance of generative AI, the push for ethical AI development, and the challenges of cross-border data handling, highlight the direction regulation is taking. Staying ahead of these trends requires more than compliance; it demands agility and foresight.

Proactive compliance today is not just about avoiding penalties; it’s about building resilience for tomorrow. By aligning with universal principles like transparency, risk management, and human oversight, organizations can navigate the nuances of regional differences while fostering trust and accountability.

All that to say, technology platforms like Modulos are no longer optional; they’re essential. They enable organizations to scale their compliance efforts efficiently, integrate governance into AI lifecycles, and stay ahead of changing regulations.

Need help navigating the complexities of global AI regulations? Request a free demo to see how the Modulos AI Governance Platform can simplify compliance for your organization and turn regulatory challenges into opportunities for innovation and growth.

Stay Informed with Modulos Newsletter

Stay informed about the latest Modulos developments and AI industry news by subscribing to our newsletter.
