ROSÉ Federation · Compliance Intelligence

AI Compliance & Risk
Profile Assessment

Does your organization know its true exposure under the laws already in force?

EU AI Act 2024/1689
NYC Local Law 144
Colorado AI Act
Texas TRAIGA
California SB 53 / AB 2013
South Korea AI Basic Act
Triage · Question 1 of 5
✦   ✦   ✦

Six jurisdictions.
One assessment.
Your complete risk profile.

This assessment is structured in two tiers. A five-question triage surfaces your immediate risk level in under two minutes. Those who wish to proceed receive a full category-by-category analysis across every major AI law enacted through March 2026.

Tier One · ~2 Minutes

Triage Assessment

Five questions. Instant risk tier. Understand your immediate exposure before proceeding further.

Tier Two · ~15 Minutes

Full Risk Profile

Complete analysis across all six regulatory frameworks with a category breakdown and specific priority actions.

Already familiar with your regulatory exposure? You may proceed directly to the comprehensive assessment.

Triage 01 / 05
Where does your organization operate, sell to, or make employment decisions?
Select all that apply — your jurisdiction footprint determines which laws govern you.
EU AI Act: Extraterritorial reach — applies wherever AI output affects EU individuals.  ·  NYC LL144: Applies to NYC-based candidates regardless of where your organization is headquartered.  ·  Colorado SB 205: Applies to consequential decisions affecting Colorado residents.  ·  Texas TRAIGA: Disclosure and high-risk AI requirements apply from January 2026.  ·  California SB53/AB2013: Safety reporting and training data transparency obligations.  ·  South Korea AI Basic Act: High-impact AI reporting requirements.
Select all that apply
Triage 02 / 05
Does your organization use AI to make or materially influence decisions that affect people's employment, credit, housing, healthcare, education, insurance, or legal services?
These are the highest-liability categories across every major law.
EU AI Act Annex III: Employment, credit, healthcare, education, and critical infrastructure are categorically high-risk — no further classification needed.  ·  Colorado SB 205: "Consequential decisions" is defined broadly to cover all of these domains.  ·  NYC LL144: Employment AI triggers specific audit and notice obligations.  ·  Texas TRAIGA: High-risk AI in employment, financial, and healthcare contexts triggers disclosure obligations.
Triage 03 / 05
Do you use any automated tool to screen, score, rank, or evaluate job candidates or employees — including via your ATS, HCM platform, or any vendor-supplied AI feature?
Many modern applicant tracking systems include AI scoring by default. Even if you did not configure it, using a platform with this capability may trigger compliance obligations.
NYC LL144 — in force: Any AEDT used for NYC candidates requires an annual independent bias audit + public posting + 10-day candidate notice. Fines begin at $500/day per violation.  ·  EU AI Act Annex III: Employment AI is categorically high-risk — conformity assessment, risk management system, automatic logging, and human oversight are mandatory.  ·  Colorado SB 205: Reasonable care against discrimination, impact assessments, and right of appeal are mandatory.  ·  US EEOC: Employers are liable for discriminatory AI outcomes from vendor tools — vendor compliance does not shield the deployer.
Triage 04 / 05
If a regulator, court, or auditor requested proof of how your AI made a specific decision six months ago — the exact inputs, the rules in force, the timestamp — could your organization produce it?
This is the single most commonly failed requirement across all six regulatory frameworks. Ordinary server logs are not sufficient.
EU AI Act Art. 12: High-risk AI systems must automatically log events throughout their lifecycle — server logs do not satisfy this standard.  ·  NYC LL144: Bias audit reconstruction requires complete, accessible decision records.  ·  Colorado SB 205: Appeals processes require retrievable, complete decision records.  ·  US Federal Rules of Evidence: Electronic records must be authenticated — tamper-evident logs with hardware-attested timestamps significantly strengthen admissibility.  ·  ROSÉ / Timebeat: IEEE-1588 PTP cryptographic execution proofs satisfy the most demanding evidentiary standards across all six jurisdictions.
Triage 05 / 05
How would you characterize your organization's current AI governance program?
EU AI Act Art. 9: A documented risk management system maintained throughout the AI lifecycle is mandatory for high-risk AI — not a one-time exercise.  ·  ISO 42001: Certification requires demonstrated governance maturity across policy, risk management, and continuous improvement.  ·  Colorado SB 205: "Reasonable care" to prevent algorithmic discrimination requires a structured governance program — not ad hoc review.  ·  NIST AI RMF: Organizational governance and accountability structures are the foundation of the Govern function.
Preliminary Risk Tier
ROSÉ Federation — Why This Matters

The most common source of triage flags — across every jurisdiction — is the logging and proof gap. ROSÉ's on-action cryptographic proofs and IEEE-1588 hardware-attested timestamps produce tamper-evident execution records for every consequential AI decision. One infrastructure investment satisfies EU AI Act Art. 12, NYC LL144 audit reconstruction, Colorado appeals processes, and EEOC discrimination defense simultaneously.

Proceed to the full assessment for a category-by-category analysis across all six regulatory frameworks.

This assessment is for informational purposes only and does not constitute legal advice. Consult qualified counsel for jurisdiction-specific compliance guidance.

Deep Dive 01 / 19
What is your organization's annual global revenue?
Revenue thresholds affect fine calculations across EU, Colorado, and other frameworks.
EU AI Act: Maximum fines are the greater of a fixed amount OR a % of global annual turnover — €35M/7% for prohibited practices; €15M/3% for high-risk non-compliance; €7.5M/1.5% for incorrect information. For a $1B+ revenue company, a 7% fine could exceed $70M.  ·  Colorado SB 205: Up to $20,000 per violation.  ·  NYC LL144: $500/day per violation.  ·  Illinois BIPA: $1,000–$5,000 per biometric data violation with class action exposure.
Deep Dive 02 / 19
In which of the following high-risk domains does your organization deploy or influence AI systems?
These map directly to EU AI Act Annex III and Colorado's consequential decision categories.
⚠ Even indirect or vendor-supplied AI in these areas triggers compliance obligations. Classification is categorical — there is no materiality threshold.
EU AI Act Annex III: These categories are categorically high-risk — full obligations apply regardless of system sophistication.  ·  Colorado SB 205: "High-risk consequential decisions" covers employment, housing, education, healthcare, credit, and legal services.  ·  NYC LL144: Employment AI triggers specific annual audit and notice obligations.  ·  Texas TRAIGA: High-risk AI in these domains triggers disclosure and impact assessment obligations.  ·  Brazil PL 2338: Mirrors EU Annex III categories closely — EU readiness substantially satisfies Brazilian requirements.
Select all that apply
Deep Dive 03 / 19
Do you use — or plan to use — any of the following practices that are outright prohibited under the EU AI Act as of February 2, 2025?
These bans are in force now. Violations carry fines of up to €35 million or 7% of global annual turnover.
⚠ These prohibitions are already active violations. There is no grace period and no compliance pathway — these practices must cease immediately for EU operations.
EU AI Act — in force February 2025: Prohibited practices include emotion recognition in employment, biometric inference for sensitive characteristics, social scoring, subliminal manipulation, and real-time biometric identification in public spaces.  ·  US Illinois BIPA: Biometric data collection in employment (including facial recognition in interviews) requires prior written consent — violations carry $1,000–$5,000 per incident.  ·  US Maryland HB1202: Prohibits facial recognition in employment interviews without consent.  ·  California CPRA: Sensitive personal information including biometric data carries heightened protection requirements.
Select all that apply
Deep Dive 04 / 19
For any AI used in employment or hiring, which of the following compliance steps have been completed?
NYC Local Law 144, EU Annex III, and Colorado SB 205 each impose distinct obligations. Select every item your organization has completed.
⚠ Employment AI carries the highest compliance liability across all jurisdictions. Missing items below are active or imminent violations.
NYC LL144 — in force: Annual independent bias audit + public posting on website + 10-day candidate notice are all mandatory before any AEDT may be used for NYC candidates.  ·  EU AI Act Annex III: Full high-risk obligations apply — conformity assessment, risk management, automatic logging, human oversight.  ·  Colorado SB 205: Impact assessments + reasonable care against discrimination + right of appeal mandatory from June 2026.  ·  US EEOC: Employers cannot disclaim liability for discriminatory vendor AI tools — the deployer bears responsibility.
Select all that apply
Deep Dive 05 / 19
Have you verified that none of your HR technology vendors use automated scoring or ranking features that would constitute an Automated Employment Decision Tool under NYC Local Law 144?
Platforms such as Workday, Greenhouse, HireVue, LinkedIn Recruiter, and many others include AI scoring features that may be active by default.
NYC LL144: The deploying organization — not the vendor — bears LL144 compliance responsibility. A vendor's terms of service do not insulate the employer from liability.  ·  EU AI Act Art. 25: Deployers share compliance obligations with providers — you cannot contractually disclaim your own obligations.  ·  US EEOC: Employers are liable for discriminatory AI outcomes from vendor tools they selected and deployed, regardless of what the vendor contract says.  ·  Practice: Request model cards, bias audit results, and AEDT status documentation from every HR technology vendor as a condition of contract renewal.
Deep Dive 06 / 19
When AI is used in decisions that materially affect individuals, do you provide clear notice and offer a mechanism to appeal or seek human review?
Required across NYC, Colorado, EU AI Act, California, and Texas.
EU AI Act + GDPR Art. 22: Individuals have the right to explanation, human review, and to contest automated decisions — this applies to high-risk AI systems and solely automated decisions producing legal effects.  ·  Colorado SB 205: Consumers have the right to an explanation of adverse consequential AI decisions, to correct the data used, and to appeal with human review where technically feasible.  ·  California CPRA: Right to opt out of automated decision-making with significant effects.  ·  Texas TRAIGA: Consumers must be notified when AI is used in consequential decisions and must be able to request human review.
Deep Dive 07 / 19
When your AI systems interact directly with end users or customers, do they disclose that the interaction is AI-generated or AI-mediated?
EU AI Act Article 50 mandates disclosure for all AI systems that interact with natural persons.
EU AI Act Art. 50 — in force: AI systems interacting with natural persons must clearly disclose they are AI. Applies to chatbots, virtual agents, and synthetic content.  ·  California B.O.T. Act (SB 1001): Bots used to influence a purchase or a vote must disclose that they are not human.  ·  Texas TRAIGA: AI disclosure requirements apply to consumer-facing AI interactions.  ·  South Korea AI Basic Act: Disclosure obligations apply to AI systems used in public-facing services and high-impact applications.  ·  FTC: Undisclosed AI impersonation of humans may constitute deceptive trade practice under Section 5.
Deep Dive 08 / 19
Does your organization develop, fine-tune, or operate large-scale generative AI or foundation models for use by others?
California SB 53 and AB 2013 require safety frameworks and training data provenance disclosure. EU AI Act Chapter V targets general-purpose AI model providers.
EU AI Act Chapter V — GPAI: Providers of general-purpose AI models must publish technical documentation and comply with copyright law. Models with systemic risk face additional adversarial testing and incident reporting obligations.  ·  California SB 53: Requires large frontier AI developers to publish safety frameworks and report critical safety incidents.  ·  California AB 2013: Training data transparency — disclosure of datasets used to train generative AI systems made available to Californians.  ·  South Korea AI Basic Act: High-impact generative AI systems require registration and governance framework publication.
Deep Dive 09 / 19
For any high-risk AI systems deployed in EU markets, has your organization completed — or initiated — the required conformity assessment and registered the system in the EU AI database?
Full high-risk AI obligations now extended to December 2027 — but accountability for outcomes is immediate regardless of the enforcement deadline.
EU AI Act — conformity assessment: High-risk AI systems must undergo conformity assessment before being placed on the market. Most Annex III systems, including employment and credit AI, follow the internal-control procedure of Annex VI; third-party notified-body assessment is required chiefly for biometric systems.  ·  EU AI database registration: High-risk AI systems must be registered before deployment in EU markets.  ·  CE marking: Required for all high-risk AI systems — without it, the system may not lawfully operate in EU markets.  ·  Timeline note: The December 2027 extension is conditional on harmonised standards being confirmed — organizations that delay risk being caught without time to remediate.
Deep Dive 10 / 19
How does your organization currently maintain proof of AI decision-making for audit or regulatory purposes?
EU AI Act Articles 12 and 26 require automatic logging of high-risk AI decision events throughout the system's lifecycle.
EU AI Act Art. 12: High-risk AI systems must automatically generate logs. Logs must be retained for minimum periods and be available to national authorities.  ·  NYC LL144: Bias audit reconstruction requires complete, accessible decision records for each candidate evaluated.  ·  Colorado SB 205: Impact assessment and appeals processes require retrievable, complete decision records.  ·  US Federal Rules of Evidence Rule 901: Electronic records must be authenticated — hardware-attested timestamps and cryptographic sealing significantly strengthen admissibility.  ·  ROSÉ / Timebeat: IEEE-1588 PTP hardware-attested timestamps combined with cryptographic on-action proofs satisfy all of these requirements simultaneously.
Deep Dive 11 / 19
Do your AI execution records capture the specific rules, model version, input data, confidence scores, and governing policies in force at the precise moment each decision was made?
A log entry stating "candidate rejected" is not the same as a non-repudiable execution record. Regulators and courts will test whether execution evidence can withstand adversarial challenge.
EU AI Act Art. 13 (Transparency): High-risk AI systems must be sufficiently transparent to enable deployers to interpret outputs — which requires reconstructable execution context, not just outcome records.  ·  UK GDPR Art. 22: Individuals have the right to meaningful information about the logic of automated decisions — "the model rejected you" is not sufficient.  ·  US disparate impact doctrine: Defending an AI discrimination claim requires reconstructing the decision inputs and governing logic at the exact moment of the challenged decision.  ·  Canada AIDA (proposed): Organizations would be required to explain how high-impact AI decisions were made — outcome-only records are insufficient.
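The distinction drawn above — an outcome log versus a reconstructable, tamper-evident execution record — can be sketched with a minimal hash chain. This is an illustrative sketch only: the field names, the SHA-256 chaining scheme, and the `GENESIS` anchor are our assumptions, not the ROSÉ/Timebeat implementation, which additionally binds IEEE-1588 hardware-attested timestamps to each record.

```python
# Illustrative sketch: a hash-chained AI decision record. Any retroactive
# edit to an entry breaks every hash that follows it, making tampering
# detectable. Field names and scheme are hypothetical, not a real product API.
import hashlib
import json
from datetime import datetime, timezone

def seal_record(prev_hash: str, record: dict) -> dict:
    """Seal one execution record, binding it to the previous entry's hash."""
    body = {
        "prev_hash": prev_hash,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **record,  # model version, policy in force, input digest, outcome...
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(entries: list[dict]) -> bool:
    """Recompute every hash in order; False if any entry was altered."""
    prev = "GENESIS"
    for e in entries:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

# Seal two decisions with full execution context, then tamper and re-verify.
chain = [seal_record("GENESIS", {
    "model_version": "screening-v2.3",          # hypothetical names
    "policy_id": "hiring-policy-2026-01",
    "inputs_digest": hashlib.sha256(b"candidate features").hexdigest(),
    "outcome": "advance_to_interview",
    "confidence": 0.87,
})]
chain.append(seal_record(chain[-1]["hash"], {
    "model_version": "screening-v2.3",
    "policy_id": "hiring-policy-2026-01",
    "inputs_digest": hashlib.sha256(b"other candidate").hexdigest(),
    "outcome": "reject",
    "confidence": 0.91,
}))
assert verify_chain(chain)
chain[0]["outcome"] = "reject"   # a retroactive edit...
assert not verify_chain(chain)   # ...is detected on verification
```

Note that an application-level chain like this only proves internal consistency; anchoring the timestamps in attested hardware time is what makes the records independently defensible against an adversarial challenge.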
Deep Dive 12 / 19
Does your organization have a documented AI incident response protocol — covering detection, internal escalation, and regulatory notification — for bias events, errors, or harmful AI outputs?
A generic IT incident process does not satisfy AI-specific requirements under EU Art. 73 or Colorado's 90-day reporting obligation.
EU AI Act Art. 73: Providers must report serious incidents to national market surveillance authorities without undue delay — and deployers must notify providers of any incidents.  ·  Colorado SB 205: Developers must report algorithmic discrimination to the Colorado Attorney General within 90 days of discovery.  ·  UK ICO: AI-related data breaches or rights violations may trigger mandatory breach notification under UK GDPR.  ·  US EEOC: Discovering and failing to act on discriminatory AI outcomes strengthens discrimination claims — a documented response protocol demonstrates good faith.
Deep Dive 13 / 19
For high-risk AI systems, are human operators technically capable of understanding, questioning, and overriding AI decisions in real time?
EU AI Act Article 14 requires genuine oversight — not nominal. A human who cannot meaningfully interpret or challenge a decision does not constitute lawful oversight.
EU AI Act Art. 14: Oversight measures must include the ability to decide not to use the system in a given situation and the ability to override or interrupt its output. Operators must be properly trained.  ·  Colorado SB 205: Reasonable care requires that humans remain meaningfully in the decision loop — not just nominally present.  ·  Singapore MAS FEAT Principle 4: Financial institutions must be able to intervene in AI processes where outputs may cause harm.  ·  Australia proposed guardrails: Human oversight and control is one of eight proposed mandatory guardrails for high-risk AI systems.
Deep Dive 14 / 19
Does your organization have a designated individual or formal team responsible for AI regulatory compliance?
EU AI Act Art. 26: Deployers of high-risk AI must assign oversight responsibility to competent, trained personnel.  ·  UK FCA Senior Managers Regime: AI governance accountability must be traceable to a named senior individual.  ·  ISO 42001 Clause 5.3: Roles, responsibilities, and authorities for AI governance must be formally assigned and communicated throughout the organization.  ·  Colorado SB 205: "Reasonable care" to prevent algorithmic discrimination requires an identifiable accountability structure — a gap here weakens every other compliance defense.
Deep Dive 15 / 19
Does your organization maintain documented data governance practices covering AI training datasets — including data lineage, quality criteria, and bias testing?
EU AI Act Art. 10: Training, validation, and testing datasets must be subject to data governance practices — including relevance, representativeness, freedom from errors, and bias assessment. Continuous documentation is mandatory.  ·  NYC LL144: Bias audits require accessible records of the training data demographics used to build the AEDT.  ·  US EEOC: Training data documentation is increasingly requested in EEOC investigations of AI discrimination complaints.  ·  California AB 2013: Developers must disclose training data characteristics for generative AI made available to Californians.
Deep Dive 16 / 19
When procuring AI systems from third-party vendors, does your organization conduct formal compliance due diligence — including requiring model cards, dataset documentation, and regulatory status?
EU AI Act Article 25 makes clear that deployers share compliance obligations. Your vendor's non-compliance is your exposure.
EU AI Act Art. 25: Providers must give deployers the information necessary to fulfill their own obligations — this must be contractually secured.  ·  EU GDPR Art. 28: Data processing agreements must address AI-specific governance requirements.  ·  US EEOC: Employers cannot contractually disclaim liability for discriminatory vendor AI tools — you remain responsible for what you deploy.  ·  UK FCA: Outsourcing rules require financial institutions to maintain audit rights and governance oversight over AI service providers.
Deep Dive 17 / 19
Does your organization conduct regular — at minimum annual — bias and discrimination testing across protected characteristics for all AI systems used in consequential decisions?
NYC LL144: Annual independent bias audit required — not a one-time exercise, not internal testing, not a vendor's self-certification.  ·  EU AI Act Art. 9: Continuous bias monitoring is required as part of the risk management system throughout the lifecycle.  ·  US EEOC guidance: Regular disparate impact testing is the primary mechanism for demonstrating non-discriminatory AI deployment in employment.  ·  Colorado SB 205: Ongoing monitoring to prevent algorithmic discrimination against protected classes is a due diligence requirement — not satisfied by a one-time pre-deployment test.
Deep Dive 18 / 19
Has your organization formally budgeted for potential AI regulatory fines, remediation costs, or litigation arising from AI-related incidents?
For a mid-market enterprise, a single enforcement action under the EU AI Act could be existential.
EU AI Act maximum fines: €35M or 7% of global annual turnover for prohibited practices. €15M or 3% for high-risk non-compliance. €7.5M or 1.5% for providing incorrect information.  ·  NYC LL144: $500/day per violation — accruing from the day non-compliant use begins.  ·  Colorado SB 205: Up to $20,000 per violation; AG enforcement is active.  ·  Illinois BIPA: $1,000–$5,000 per biometric violation — class actions have produced settlements exceeding $100M.  ·  US employment discrimination: AI bias class actions are an emerging and growing area of litigation risk.
Deep Dive 19 / 19
On a scale of one to five, how would you rate your organization's overall AI governance maturity?
ISO 42001: Certification requires demonstrated maturity across policy, risk management, operations, and continuous improvement — level 3 or above is a practical prerequisite.  ·  EU AI Act: Quality management systems are required for high-risk AI.  ·  NIST AI RMF: Organizational profile and maturity are assessed across Govern, Map, Measure, and Manage functions.  ·  Key insight: The organization that is EU AI Act-ready at a maturity level of 4 or 5 is substantially positioned for every other jurisdiction simultaneously — the Brussels Effect in practice.
Risk Score
Risk Level
Priority Findings & Actions
ROSÉ Federation — Mitigation Spotlight

The most common source of high-risk flags across all six jurisdictions is the logging and proof gap. ROSÉ's on-action cryptographic proofs and IEEE-1588 PTP hardware-attested timestamps produce tamper-evident execution records for every consequential AI decision — directly satisfying EU AI Act Art. 12 automatic logging, NYC LL144 bias-audit reconstruction, Colorado SB 205 appeals documentation, EEOC discrimination defense records, and Texas TRAIGA disclosure requirements. One infrastructure investment. Every jurisdiction. No workflow overhauls.


This assessment is for informational purposes only and does not constitute legal advice. Consult qualified counsel for jurisdiction-specific compliance guidance.

Confidential — For Internal Use
AI Compliance & Risk Profile
Full Assessment Report · ROSÉ Federation · 2026

Recommended Next Step

✦   ✦   ✦