
AI Ethics Compliance

5 automated security scanners


Purpose: Ensures that explanations provided by AI systems are of high quality, that interpretations remain stable over time, and that transparency is maintained. This scanner helps organizations comply with ethical standards for AI usage by analyzing documentation and policies related to AI explainability.

What It Detects:

  • 1. Explanation Quality Indicators:

    • Test for presence of detailed explanation methodologies.
    • Check for clarity in explaining model decisions.
    • Verify the inclusion of confidence scores or uncertainty measures.
    • Detect vague explanations without supporting data.
    • Flag overly technical jargon that lacks user-friendly interpretation.
  • 2. Interpretation Stability Patterns:

    • Test for consistent explanations across similar inputs.
    • Check for changes in explanation logic over time.
    • Verify the stability of model interpretations with minor input variations.
    • Detect significant shifts in how decisions are explained.
    • Flag inconsistencies between different sources of documentation.
  • 3. Transparency Maintenance Indicators:

    • Test for clear communication of AI system limitations.
    • Check for transparency in data sources and preprocessing steps.
    • Verify the inclusion of ethical considerations in explanations.
    • Detect hidden biases or assumptions in model explanations.
    • Flag lack of transparency regarding model updates or retraining.
  • 4. Policy Compliance Patterns:

    • Test for adherence to established explainability policies.
    • Check for compliance with applicable regulations (e.g., GDPR, HIPAA).
    • Verify the presence of internal guidelines on AI explainability.
    • Detect deviations from documented procedures.
    • Flag non-compliance with regulatory requirements.
  • 5. Documentation Quality Indicators:

    • Test for comprehensive security documentation related to AI systems.
    • Check for up-to-date policy pages and trust center information.
    • Verify the inclusion of compliance certifications (e.g., SOC 2, ISO 27001).
    • Detect outdated or incomplete documentation.
    • Flag missing sections in critical documents.
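Many of the checks above reduce to keyword matching over fetched documentation. A minimal sketch of the first indicator group, assuming hypothetical term lists (the scanner's actual dictionaries are not published):

```python
# Hypothetical indicator terms, for illustration only.
QUALITY_TERMS = {
    "methodology": ["explanation methodology", "shap", "lime", "feature importance"],
    "uncertainty": ["confidence score", "uncertainty", "confidence interval"],
    "vagueness_flags": ["for various reasons", "black box", "proprietary logic"],
}

def scan_explanation_quality(text: str) -> dict:
    """Report which indicator terms appear in a documentation snippet."""
    lower = text.lower()
    return {group: [t for t in terms if t in lower]
            for group, terms in QUALITY_TERMS.items()}

doc = "Decisions include a confidence score derived via SHAP feature importance."
hits = scan_explanation_quality(doc)
```

A finding would be raised when the `methodology` or `uncertainty` groups come back empty while `vagueness_flags` matches.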

Inputs Required:

  • domain (string): Primary domain to analyze (e.g., acme.com)
  • company_name (string): Company name for statement searching (e.g., “Acme Corporation”)

Business Impact: Ensuring that AI systems provide clear, consistent, and transparent explanations is crucial for building trust with users and complying with legal requirements. Poor explainability can lead to misinterpretation of model outputs, which may result in incorrect decisions or non-compliance with regulations.

Risk Levels:

  • Critical: The scanner identifies significant shifts in explanation logic that could mislead stakeholders about the reliability of AI system outputs.
  • High: Incomplete or outdated documentation related to AI explainability can lead to compliance issues and potential legal risks.
  • Medium: Unclear explanations, lack of transparency, or non-compliance with established policies may impact user trust and regulatory adherence.
  • Low: Minor inconsistencies in explanation quality might not have a significant impact but are still important for continuous improvement.
  • Info: Informational findings, such as notes on minor documentation updates, that carry no immediate risk but provide useful context.
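These tiers can be encoded as an ordered enum so that findings sort consistently during triage. The finding-type names below are hypothetical examples, not the scanner's actual taxonomy:

```python
from enum import IntEnum

class Risk(IntEnum):
    INFO = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Hypothetical mapping from finding type to the tiers described above.
SEVERITY = {
    "explanation_logic_shift": Risk.CRITICAL,
    "outdated_certification_docs": Risk.HIGH,
    "unclear_explanation": Risk.MEDIUM,
    "minor_inconsistency": Risk.LOW,
    "doc_update_note": Risk.INFO,
}

def triage(findings: list) -> list:
    """Sort finding types most severe first."""
    return sorted(findings, key=lambda f: SEVERITY[f], reverse=True)
```

Using `IntEnum` keeps severities comparable and sortable without string juggling.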

Example Findings:

  • The AI system provides explanations that change significantly with only slight variations in input data, indicating unstable interpretation logic.
  • Documentation lacks recent updates regarding compliance certifications, posing a risk of non-compliance with current regulatory standards.

Purpose: The Ethical Boundary Adherence Scanner is designed to detect shifts in value alignment, violations of purpose limitations, and inappropriate use cases by analyzing company security documentation, public policies, trust center information, and compliance certifications.

What It Detects:

  • Value Alignment Shifts: Identify changes in stated values or mission statements that may indicate a shift away from ethical principles.
  • Purpose Limitation Compliance: Verify adherence to declared purposes and limitations of AI usage.
  • Use Case Appropriateness: Evaluate the appropriateness of specific AI use cases in relation to ethical standards and compliance requirements.
  • Security Policy Indicators: Search for key security policy terms such as “security policy,” “incident response,” “data protection,” and “access control.”
  • Maturity Indicators: Look for compliance certifications like SOC 2, ISO 27001, penetration tests, and vulnerability assessments.
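The policy and maturity indicators above lend themselves to a simple coverage score over a fetched trust-center page. This sketch uses the exact terms listed; the scoring formula itself is an assumption:

```python
POLICY_TERMS = ["security policy", "incident response",
                "data protection", "access control"]
MATURITY_TERMS = ["soc 2", "iso 27001",
                  "penetration test", "vulnerability assessment"]

def score_security_posture(page_text: str) -> dict:
    """Count which policy/maturity indicator terms a page mentions."""
    lower = page_text.lower()
    policy_hits = [t for t in POLICY_TERMS if t in lower]
    maturity_hits = [t for t in MATURITY_TERMS if t in lower]
    total = len(POLICY_TERMS) + len(MATURITY_TERMS)
    return {
        "policy_hits": policy_hits,
        "maturity_hits": maturity_hits,
        "coverage": (len(policy_hits) + len(maturity_hits)) / total,
    }

page = ("Our trust center documents our security policy, incident response "
        "process, SOC 2 report, and annual penetration test.")
result = score_security_posture(page)
```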

Inputs Required:

  • domain (string): Primary domain to analyze (e.g., acme.com)
  • company_name (string): Company name for statement searching (e.g., “Acme Corporation”)

Business Impact: This scanner is crucial for ensuring that companies adhere to ethical standards and comply with regulatory requirements, thereby safeguarding user trust and preventing potential security breaches.

Risk Levels:

  • Critical: Conditions that pose a significant risk to the organization’s operations or reputation, requiring immediate attention.
  • High: Conditions that indicate a high likelihood of negative outcomes, necessitating mitigation efforts.
  • Medium: Conditions that may lead to moderate risks if not addressed promptly.
  • Low: Minor observations that do not pose significant risks but point to opportunities for better compliance and ethical practice.
  • Info: Non-critical findings providing general information without immediate risk or impact.


Example Findings: The scanner might flag instances where a company’s public statements about ethical practices do not align with its actual operations, a critical finding for value alignment shifts. It could also identify unauthorized use of AI capabilities beyond declared purposes as a medium-severity purpose-limitation compliance issue.


Purpose: The Human Oversight Effectiveness Scanner is designed to assess the effectiveness of human oversight in security processes by evaluating the presence and quality of review mechanisms, intervention appropriateness, and control effectiveness. It aims to identify gaps in human involvement that may lead to vulnerabilities or compliance issues.

What It Detects:

  • Review Quality Indicators:
    • Security Policy Presence: Checks for the existence of a comprehensive security policy document that outlines essential security practices.
    • Incident Response Plan: Verifies the presence and completeness of an incident response plan to handle potential security breaches effectively.
    • Data Protection Measures: Ensures that data protection policies are in place to safeguard sensitive information from unauthorized access or disclosure.
    • Access Control Policies: Evaluates the effectiveness of access control mechanisms and policies to ensure only authorized personnel have access to relevant resources.
  • Intervention Appropriateness Indicators:
    • SOC 2 Compliance: Checks for compliance with SOC 2 standards, which indicate adherence to specific security controls related to data handling practices.
    • ISO 27001 Certification: Verifies compliance with ISO 27001, ensuring robust information security management systems are in place.
    • Penetration Testing Reports: Looks for evidence of regular penetration testing to identify and mitigate potential vulnerabilities in the system.
    • Vulnerability Assessments: Ensures that ongoing vulnerability assessments are conducted to maintain a secure environment.
  • Control Effectiveness Indicators:
    • Security Documentation Accessibility: Assesses the accessibility and completeness of company security documentation, including policies, procedures, and guidelines.
    • Policy Review Processes: Evaluates the presence and effectiveness of processes for regularly reviewing and updating security policies.
    • Manual Evaluation Practices: Checks for manual evaluation practices in place to ensure ongoing oversight and compliance with security standards.
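A gap check like the one described can be sketched as matching a required-document checklist against fetched pages. The checklist keys below are illustrative; a real scanner would use richer matching than plain substrings:

```python
# Illustrative required-document checklist (hypothetical names).
REQUIRED_DOCS = {
    "security_policy": ["security policy"],
    "incident_response_plan": ["incident response plan", "incident response"],
    "data_protection_policy": ["data protection"],
    "access_control_policy": ["access control"],
}

def find_oversight_gaps(pages: dict) -> list:
    """Return checklist items with no match anywhere in the fetched pages."""
    corpus = " ".join(pages.values()).lower()
    return [name for name, terms in REQUIRED_DOCS.items()
            if not any(t in corpus for t in terms)]

pages = {
    "https://acme.example/security":
        "Our security policy covers access control and data protection.",
}
gaps = find_oversight_gaps(pages)
```

Each returned gap would then be assigned one of the risk levels below, e.g. a missing incident response plan mapping to High.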

Inputs Required:

  • domain (string): The primary domain name of the organization’s website, which is used to search for relevant security documentation and policy statements.
  • company_name (string): The company name, which helps in identifying specific documents or policies related to the organization.

Business Impact: Identifying gaps in human oversight can significantly impact a company’s cybersecurity posture by reducing the risk of unauthorized access, data breaches, and other compliance issues that may arise from inadequate security practices. Effective human oversight is crucial for maintaining trust with stakeholders, customers, and regulatory bodies.

Risk Levels:

  • Critical: Conditions where there are significant gaps in security policies or critical controls not in place despite documented requirements.
  • High: Conditions where key security documents are missing or incomplete, posing a high risk of potential vulnerabilities being exploited.
  • Medium: Conditions where some security measures are partially implemented or inadequately enforced, requiring prompt attention before risks escalate.
  • Low: Conditions where basic security practices are adequately in place but there may be room for improvement in documentation and oversight processes.
  • Info: Informational findings related to ongoing assessments and compliance with general cybersecurity standards.

Example Findings:

  • The scanner might flag a company that lacks an up-to-date incident response plan, which is crucial for taking swift action during security incidents.
  • Another example: an organization that has not conducted recent penetration testing, leaving vulnerabilities unidentified that such tests could have surfaced.

Purpose: The Regulatory Compliance Maintenance Scanner is designed to detect changing requirements, evolving standards, and new obligations in the domain of AI ethics compliance by analyzing company security documentation, public policy pages, trust center information, and compliance certifications.

What It Detects:

  • Security Policy Indicators:
    • Identifies mentions of “security policy” to ensure comprehensive coverage of security measures.
    • Checks for “incident response” plans indicating preparedness for breaches.
    • Looks for “data protection” strategies ensuring adherence to data privacy laws.
    • Verifies the presence of “access control” mechanisms safeguarding sensitive information.
  • Maturity Indicators:
    • Detects references to SOC 2 compliance, a widely recognized trust standard for service organizations.
    • Searches for ISO 27001 certification, which outlines an information security management system.
    • Identifies mentions of “penetration test” activities to assess system vulnerabilities.
    • Looks for “vulnerability scan” or “vulnerability assessment” procedures indicating proactive security measures.
  • Policy Review:
    • Analyzes company security documentation for adherence to current compliance standards.
    • Reviews public policy pages for updates on regulatory requirements and best practices.
    • Examines trust center information for transparency in security practices.
    • Checks compliance certifications for validity and relevance to AI ethics.
  • Manual Evaluation:
    • Conducts a manual evaluation of the identified documents and policies.
    • Cross-references findings with known standards and regulations.
    • Flags discrepancies or gaps in compliance measures.
    • Provides recommendations for improvement based on detected issues.
  • Data Source Analysis:
    • Scrapes company websites for relevant security documentation.
    • Collects information from public policy pages to ensure alignment with external standards.
    • Gathers trust center data to assess transparency and reliability.
    • Validates compliance certifications against recognized bodies and standards.
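For the data-source analysis step, a scraper typically probes a handful of well-known paths on the target domain before crawling further. The path list below is an assumption about where security and trust pages commonly live, not a documented behavior of this scanner:

```python
from urllib.parse import urljoin

# Common (but not guaranteed) locations for security and trust documentation.
CANDIDATE_PATHS = [
    "/security",
    "/trust",
    "/privacy",
    "/legal/security-policy",
    "/.well-known/security.txt",
]

def candidate_urls(domain: str) -> list:
    """Build the URLs a scraper would try first for a given domain."""
    base = f"https://{domain}"
    return [urljoin(base, path) for path in CANDIDATE_PATHS]

urls = candidate_urls("acme.com")
```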

Inputs Required:

  • domain (string): Primary domain to analyze (e.g., acme.com)
  • company_name (string): Company name for statement searching (e.g., “Acme Corporation”)

Business Impact: This scanner is crucial as it ensures that the company’s security practices and compliance with regulatory standards are up-to-date and robust, which directly impacts the overall security posture of the organization against potential risks and legal liabilities associated with AI ethics violations.

Risk Levels:

  • Critical: Conditions where there is a direct breach of critical regulations or significant vulnerabilities that could lead to severe consequences for the company’s operations and reputation.
  • High: Situations where compliance standards are not met, leading to potential risks in service reliability, data security, and legal liabilities.
  • Medium: Issues that require immediate attention but do not pose an immediate threat, such as minor deviations from regulatory requirements or incomplete documentation.
  • Low: Minor findings that warrant future monitoring or improvement suggestions based on ongoing compliance assessments.
  • Info: General information about the company’s security practices and compliance status, which does not directly affect risk levels but provides baseline insights for stakeholders.

Example Findings:

  1. The company lacks a comprehensive security policy document detailing incident response procedures.
  2. There is no mention of ISO 27001 certification in any public documents, indicating potential gaps in information security management.

Purpose: The Fairness Metric Stability Scanner is designed to identify and assess potential biases in AI models by analyzing company policies, security documentation, and compliance certifications. Its primary purpose is to ensure that organizations maintain ethical standards and fairness in their AI deployments, thereby mitigating the risk of discrimination and unfair treatment.

What It Detects:

  • Bias Emergence Patterns: The scanner identifies emerging biases within AI models by testing for mentions of such biases in company policies and security documentation. It checks for specific terms related to bias (e.g., “algorithmic bias,” “discrimination,” or “unfair treatment”) and verifies the presence of mitigation strategies.
  • Demographic Performance Shifts: The scanner detects changes in performance metrics across different demographic groups by checking for mentions of such shifts in company policies, security documentation, and compliance certifications. It also ensures that demographic data is included in performance reports.
  • Equality Measure Changes: This involves testing for modifications in equality measures and fairness indicators within AI models. The scanner checks for specific terms related to these changes (e.g., “equality measure changes,” “fairness metrics,” or “bias correction”) and verifies the documentation of any such alterations.
  • Policy Review Indicators: The scanner evaluates AI ethics policies and compliance frameworks by searching for mentions of these in company documents. It checks for specific terms related to ethical standards (e.g., “AI ethics policy,” “compliance framework,” or “fairness guidelines”) and verifies the presence of detailed policy documents.
  • Security Documentation Indicators: This involves testing security documentation for references to fairness and bias considerations, including mentions of tools like “bias detection” and “fairness audits.” The scanner ensures that these factors are integrated into overall security protocols.
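The bias-emergence check paired with mitigation verification maps naturally onto a small triage rule. The severity assignments below follow the risk levels described for this scanner, but the rule itself is a sketch, not the scanner's actual logic:

```python
BIAS_TERMS = ["algorithmic bias", "discrimination", "unfair treatment"]
MITIGATION_TERMS = ["bias correction", "fairness audit",
                    "bias detection", "mitigation"]

def assess_fairness_statement(text: str) -> str:
    """Classify a policy statement by bias mention vs. documented mitigation."""
    lower = text.lower()
    mentions_bias = any(t in lower for t in BIAS_TERMS)
    has_mitigation = any(t in lower for t in MITIGATION_TERMS)
    if mentions_bias and not has_mitigation:
        return "high"   # bias acknowledged, no mitigation documented
    if mentions_bias:
        return "info"   # bias acknowledged with a documented strategy
    return "low"        # no bias discussion found; flag for manual review
```

This mirrors Example Finding 1 below: a policy that names “algorithmic bias” without outlining mitigation strategies is escalated to high severity.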

Inputs Required:

  • domain (string): Primary domain to analyze (e.g., acme.com). This is crucial for the scanner to target specific company websites for analysis.
  • company_name (string): Company name for statement searching (e.g., “Acme Corporation”). This helps in identifying relevant documents and statements within the company’s documentation.

Business Impact: Maintaining ethical standards and fairness in AI deployments is critical, as it directly affects user trust, legal compliance, and operational integrity. The scanner provides detailed insights into potential biases and unfair practices, which are essential for building a trustworthy security posture.

Risk Levels:

  • Critical: This severity level applies when there are clear indications of severe bias or discrimination in AI models that could lead to significant legal or ethical repercussions.
  • High: Applies when the scanner identifies emerging biases without mitigation strategies, which may indicate a risk of unfair treatment and potential compliance issues.
  • Medium: Indicates situations where performance metrics show disparities across demographic groups, necessitating an investigation into equality measures and fairness criteria.
  • Low: Findings that do not directly impact security but may indicate the presence or absence of AI ethics and fairness best practices.
  • Info: Includes any additional information or observations that are relevant to understanding the scanner’s outputs without necessarily being critical to risk assessment.

Example Findings:

  1. The company mentions “algorithmic bias” in its AI ethics policy, but no specific mitigation strategies are outlined. This finding is marked as high severity due to the potential for unaddressed biases affecting model fairness.
  2. Performance metrics show a significant disparity between male and female employees in a decision-making tool, indicating a need to review equality measures and consider bias correction techniques.