
Leadership Dynamics

5 automated security scanners


Purpose: The Security vs. Business Tension Scanner is designed to analyze breach disclosure language and identify patterns that may indicate excessive controls, productivity impact disputes, and security friction points. It helps organizations detect blame deflection, passive voice usage, and minimization of impact in public breach disclosures.

What It Detects:

  • Blame Deflection Patterns: Claims of nation-state actors, state-sponsored attacks, highly sophisticated attacks, unprecedented breaches, and zero-day exploits.
  • Passive Voice Usage: Constructions such as “systems were accessed,” “data was compromised,” “information was obtained,” and “it has been determined.”
  • Minimization of Impact: Phrases such as “a limited number of users were affected,” “no evidence was found,” “out of an abundance of caution,” and “potentially affected users.”
  • Third-Party Blame Patterns: Shifting responsibility to vendors or partners, supply chain attack framing, blaming managed service providers, scapegoating contractors or consultants, and citing outsourcing as the primary explanation.
  • Employee Scapegoating: Rogue employee or insider framing, individual termination announcements, lack of acknowledgment of systemic control failures, emphasis on HR action over security gaps, and isolated-incident framing. (A minimal pattern-matching sketch follows this list.)
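
A minimal sketch of how such phrase- and pattern-based category matching might be implemented is shown below. The phrase lists, category labels, and function name are illustrative assumptions, not the scanner’s actual rule set.

```python
import re

# Illustrative phrase patterns per category; the real scanner's rules may differ.
PATTERNS = {
    "blame_deflection": [
        r"nation[- ]?state(?:\s+actor)?",
        r"state[- ]?sponsored",
        r"highly sophisticated",
        r"unprecedented",
        r"zero[- ]?day",
    ],
    "passive_voice": [
        r"\bwas accessed\b",
        r"\bwere compromised\b",
        r"\bwas obtained\b",
        r"\bhas been determined\b",
    ],
    "minimization": [
        r"limited number",
        r"no evidence",
        r"abundance of caution",
        r"potentially affected",
    ],
}


def categorize_disclosure(text: str) -> dict[str, list[str]]:
    """Return, per category, the patterns found in the disclosure text."""
    hits: dict[str, list[str]] = {}
    for category, patterns in PATTERNS.items():
        matched = [p for p in patterns if re.search(p, text, re.IGNORECASE)]
        if matched:
            hits[category] = matched
    return hits


if __name__ == "__main__":
    sample = ("Out of an abundance of caution, we are notifying a limited number "
              "of users whose data was accessed by a highly sophisticated actor.")
    print(categorize_disclosure(sample))
```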

Inputs Required:

  • domain (string): The primary domain to analyze, such as acme.com.
  • company_name (string): The company name for statement searching, e.g., “Acme Corporation”.

Business Impact: This scanner is crucial for organizations aiming to enhance their security posture by identifying potential issues in breach disclosure language that could lead to miscommunication and ineffective risk management strategies. It helps businesses understand the implications of various tactics used in breach disclosures and adjust their internal controls accordingly.

Risk Levels:

  • Critical: Findings indicating significant blame deflection, passive voice usage, or minimization of impact with a direct correlation to high security risks and potential legal liabilities.
  • High: Findings suggesting moderate risk, such as increased complexity in managing third-party relationships or potential public trust issues due to ambiguous language.
  • Medium: Findings that may require further investigation or improvement in communication practices without severe consequences but still indicative of potential challenges.
  • Low: Minor findings that do not significantly impact security posture but could benefit from improved clarity and transparency in communications.
  • Info: Non-critical findings that provide additional context but do not pose immediate risks or significant impacts on business operations.
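
One plausible way to map detected patterns onto these tiers is a simple weighted score; the weights and thresholds below are illustrative assumptions, not a published scoring model.

```python
# Illustrative severity mapping; category weights and thresholds are assumptions.
CATEGORY_WEIGHTS = {
    "blame_deflection": 3,
    "passive_voice": 2,
    "minimization": 2,
    "third_party_blame": 1,
    "employee_scapegoating": 1,
}


def severity(hits: dict[str, list[str]]) -> str:
    """Collapse per-category pattern hits into one of the five risk levels."""
    score = sum(CATEGORY_WEIGHTS.get(cat, 0) * len(found) for cat, found in hits.items())
    if score >= 8:
        return "critical"
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    if score >= 1:
        return "low"
    return "info"
```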

Example Findings:

  1. “The company’s recent breach disclosure uses passive voice extensively, making it difficult to assess the actual impact of the incident.”
  2. “A third-party vendor is framed as responsible for a data breach that occurred under their management, which may shift blame without addressing systemic issues.”


Purpose: The Risk Acceptance Dynamics Scanner is designed to analyze breach disclosure language and detect risk displacement, approval forum shopping, and accountability diffusion. This tool identifies linguistic patterns that suggest organizations are shifting blame to external actors, vendors, or employees instead of acknowledging internal security failures, policy gaps, or leadership negligence.

What It Detects:

  • Blame Deflection Patterns:
    • Phrases such as “nation-state actor” or “state-sponsored,” indicating an attempt to deflect responsibility from internal issues.
    • Sophistication claims such as “highly sophisticated,” “unprecedented level,” or “zero-day exploit” used without technical justification.
  • Third-Party Blame Patterns:
    • Language like “third-party vendor,” “managed service provider,” or “contractor” to shift blame away from internal controls.
    • Emphasis on supply chain attacks rather than internal security measures.
  • Employee Scapegoating:
    • Use of terms like “rogue employee” or “insider threat” without acknowledging broader access control failures.
    • Highlighting individual actions over systemic issues.
  • Passive Voice and Vagueness:
    • Sentences using passive voice such as “was accessed,” “were compromised.”
    • Unclear causation statements like “has been determined” without clear attribution.
  • Technology Failure Emphasis:
    • Overemphasis on specific technologies or vendors rather than configuration issues.
    • Mention of zero-day exploits without CVE details.
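
Two of these checks lend themselves to simple heuristics: flagging zero-day claims that cite no CVE identifier, and flagging vague or passive causation phrases. The sketch below is an assumed implementation, not the tool’s actual logic.

```python
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)
ZERO_DAY_RE = re.compile(r"zero[- ]?day", re.IGNORECASE)
VAGUE_CAUSATION = [
    r"has been determined",
    r"it was determined",
    r"\bwas accessed\b",
    r"\bwere compromised\b",
]


def unsupported_zero_day_claim(text: str) -> bool:
    """True if the text claims a zero-day exploit but cites no CVE identifier."""
    return bool(ZERO_DAY_RE.search(text)) and not CVE_RE.search(text)


def vague_causation_phrases(text: str) -> list[str]:
    """Return vague or passive causation phrases found in the text."""
    return [p for p in VAGUE_CAUSATION if re.search(p, text, re.IGNORECASE)]
```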

Inputs Required:

  • domain (string): Primary domain to analyze (e.g., acme.com)
  • company_name (string): Company name for statement searching (e.g., “Acme Corporation”)

Business Impact: This scanner is crucial as it helps organizations self-assess their security posture by identifying where they might be shifting blame to external parties instead of addressing internal vulnerabilities. Understanding these patterns can lead to more proactive measures in enhancing cybersecurity and risk management practices.

Risk Levels:

  • Critical: Conditions that directly impact critical systems or data loss, requiring immediate attention and resolution.
  • High: Conditions that significantly affect business operations but do not necessarily lead to critical system failures, requiring high priority remediation efforts.
  • Medium: Conditions that may disrupt normal operations but have lower potential impacts compared to high risks, suggesting a need for medium-priority actions.
  • Low: Minor or minimal-impact conditions that can be addressed at the discretion of management based on overall risk tolerance and strategic objectives.
  • Info: Non-critical findings providing general insights into security practices without immediate operational implications.


Example Findings:

  1. “The systems were accessed by unauthorized actors.” - This passive voice statement highlights a potential vulnerability that needs to be addressed through improved access controls.
  2. “No evidence of data exfiltration was found.” - While this is not explicitly minimizing the impact, it does suggest a need for more thorough forensic analysis and possibly stronger preventive measures against data theft.

Purpose: The Security Funding Politics Scanner is designed to analyze organizational language in disclosures, press releases, and other communications to detect budget competition, resource justification, investment prioritization conflicts, and related political maneuvering within the security domain. This tool helps identify whether an organization is strategically framing its security needs to secure funding or justify existing investments.

What It Detects:

  • Budget Competition Indicators: Language indicating internal budget battles (e.g., “limited resources,” “funding constraints”), comparisons with other departments or organizations (“our budget is smaller than XYZ’s”), and claims of underinvestment in security (“we need more funding to address these issues”).
  • Resource Justification Patterns: Statements justifying current resource allocation (e.g., “this investment will protect our assets”), emphasis on the value of security measures (“our security team is essential for our operations”), and claims of resource efficiency (“we are maximizing our security budget with these tools”).
  • Investment Prioritization Conflicts: Conflicting statements about investment priorities (e.g., “we need to invest in both X and Y”), prioritization of certain areas over others (“our top priority is network security, not endpoint protection”), and claims of balanced investment (“we are evenly distributing our budget across all security needs”).
  • Political Maneuvering Language: Language indicating internal maneuvering (e.g., “we need to secure board approval for this investment”), framing of security as a strategic priority (“our security posture is critical to our long-term success”), and claims of alignment with organizational goals (“this investment aligns with our overall business strategy”).
  • External Influence Indicators: Mentions of external influences on budget decisions (e.g., “regulatory requirements are driving our investments”), references to industry standards or benchmarks (“we need to meet the latest security standards”), and claims of compliance-driven investment (“our compliance needs require additional funding”). (A keyword-matching sketch follows this list.)
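
As with the other scanners, these indicators reduce to categorized phrase matching over public statements. The sketch below uses assumed keyword lists; the scanner’s real taxonomy may be broader.

```python
# Illustrative keyword lists per indicator category; assumptions, not the real taxonomy.
FUNDING_INDICATORS = {
    "budget_competition": ["limited resources", "funding constraints", "budget is smaller"],
    "resource_justification": ["protect our assets", "essential for our operations",
                               "maximizing our security budget"],
    "prioritization_conflict": ["top priority is", "invest in both"],
    "political_maneuvering": ["board approval", "strategic priority"],
    "external_influence": ["regulatory requirements", "industry standards", "compliance needs"],
}


def funding_signals(text: str) -> dict[str, list[str]]:
    """Return, per indicator category, the keywords present in the text."""
    lowered = text.lower()
    return {
        category: [kw for kw in keywords if kw in lowered]
        for category, keywords in FUNDING_INDICATORS.items()
        if any(kw in lowered for kw in keywords)
    }
```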

Inputs Required:

  • domain (string): Primary domain to analyze (e.g., acme.com)
  • company_name (string): Company name for statement searching (e.g., “Acme Corporation”)

Business Impact: This scanner is crucial as it helps organizations understand the strategic framing of their security needs, which can directly impact funding allocation and resource prioritization. It ensures that security investments are not only effective but also strategically aligned with organizational goals, thereby enhancing overall security posture and resilience against potential threats.

Risk Levels:

  • Critical: The scanner identifies significant discrepancies in budget allocations or strategic misalignments that could lead to severe underinvestment in critical security measures.
  • High: The scanner detects clear indications of budget competition or resource misallocation that may result in suboptimal security investments, posing a high risk to organizational security.
  • Medium: The scanner flags potential issues requiring further investigation regarding investment priorities or strategic misalignments that could lead to moderate risks in the security domain.
  • Low: Findings suggesting minor discrepancies in language usage that are not indicative of significant resource misallocation or underinvestment.
  • Info: General informational findings about organizational communication practices related to security, which may include best practices for disclosure and strategic alignment.

Example Findings:

  • “The company frequently mentions that its budget is significantly smaller than that of industry peers, suggesting a push for increased funding.”
  • “Statements frame cybersecurity personnel as essential to operations, potentially justifying higher investment in IT security measures.”

Purpose: The Audit Finding Management Scanner is designed to analyze breach disclosure language in order to detect and categorize linguistic patterns that may indicate disputes over finding classification, negotiations regarding incident severity, or shifts in responsibility for resolution. This tool helps organizations identify potential issues related to blame deflection, passive voice usage, minimization of impact, third-party blame patterns, and employee scapegoating.

What It Detects:

  • Blame Deflection Patterns: The scanner identifies claims attributing the breach to a nation-state actor without concrete evidence, matching patterns such as “nation[- ]?state(?:\s+actor)?” and “state[- ]?sponsored.”
  • Passive Voice Usage: It detects passive constructions in which systems are described as having been accessed, data as having been compromised, information as having been obtained, and determinations as having been made, without naming a responsible actor.
  • Minimization of Impact: The scanner looks for indications that the impact of the breach has been downplayed, such as statements about limited numbers affected or a lack of significant evidence of compromise.
  • Third-Party Blame Patterns: It recognizes shifts of responsibility to vendors, partners, managed service providers, contractors, or consultants, and framing of attacks as originating from supply chain entities.
  • Employee Scapegoating: The scanner flags attempts to single out specific individuals (rogue employees or insiders) for the breach, such as emphasizing individual termination announcements while failing to acknowledge broader systemic control failures. (A compiled-pattern sketch follows this list.)
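
The nation-state patterns quoted above are regular expressions; a minimal sketch of how they might be compiled and applied follows. The additional patterns and the surrounding structure are assumptions added for illustration.

```python
import re

# The first two patterns are quoted in the detection list above; the rest are
# illustrative additions in the same style.
DEFLECTION_PATTERNS = [
    r"nation[- ]?state(?:\s+actor)?",
    r"state[- ]?sponsored",
    r"supply[- ]?chain attack",
    r"rogue employee",
    r"insider threat",
]
COMPILED = [re.compile(p, re.IGNORECASE) for p in DEFLECTION_PATTERNS]


def deflection_matches(disclosure: str) -> list[str]:
    """Return the matched substrings that suggest responsibility shifting."""
    return [m.group(0) for rx in COMPILED if (m := rx.search(disclosure))]
```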

Inputs Required:

  • domain (string): This is the primary domain under analysis, such as “acme.com,” which helps in searching for relevant breach disclosure statements on the company’s website.
  • company_name (string): The name of the company, like “Acme Corporation,” used for specific statement searches to contextualize the findings within the organization’s broader communication practices.
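
These two inputs are typically combined into search queries for public statements. The query templates below are assumptions about how such a search might be constructed, not the scanner’s actual queries.

```python
def build_statement_queries(domain: str, company_name: str) -> list[str]:
    """Assemble illustrative search queries for locating breach disclosures."""
    return [
        f'site:{domain} "security incident"',
        f'site:{domain} "data breach"',
        f'"{company_name}" breach disclosure',
        f'"{company_name}" "unauthorized access" statement',
    ]


# Example: build_statement_queries("acme.com", "Acme Corporation")
```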

Business Impact: This scanner is crucial as it helps in early detection of potential misinformation and attempts at blame deflection or minimization that could affect an organization’s public perception and regulatory compliance. It aids in proactive risk management by identifying vulnerabilities in disclosure and communication strategies.

Risk Levels:

  • Critical: Findings indicating clear state-sponsored attacks, highly sophisticated breaches without CVE details, and assertions of unprecedented levels of breach impact.
  • High: Extensive passive voice usage across disclosures, especially concerning access to systems and compromise of data.
  • Medium: Minimization patterns suggesting a downplayed severity or scope of the incident.
  • Low: Third-party blame shifting primarily attributed to vendors or partners without concrete evidence of systemic issues within the organization.
  • Info: Informational findings that do not significantly impact the core security posture but could indicate areas for improvement in communication practices.
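
These tiers map almost directly onto a category-to-severity lookup. The sketch below assumes illustrative category labels rather than the scanner’s actual output schema.

```python
# Severity assignment per detection category, following the tiers described above.
CATEGORY_SEVERITY = {
    "unsupported_nation_state_claim": "critical",
    "sophistication_claim_without_cve": "critical",
    "pervasive_passive_voice": "high",
    "impact_minimization": "medium",
    "third_party_blame_shift": "low",
    "communication_style_note": "info",
}


def classify(category: str) -> str:
    """Look up the severity tier for a detected category, defaulting to info."""
    return CATEGORY_SEVERITY.get(category, "info")
```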


Example Findings:

  1. “The report indicates a clear shift towards blaming external actors without concrete evidence suggesting nation-state involvement.”
  2. “Passive voice usage throughout breach disclosures suggests an attempt to downplay the severity of the compromised data.”

Purpose: This scanner is designed to analyze and detect patterns of security metric politicization within a company. It aims to identify alterations in success criteria, KPI (Key Performance Indicator) manipulation, obfuscated reporting, policy evasion, and similar practices that can produce inflated or misleading performance metrics. Such patterns are particularly concerning because they may indicate a lack of transparency and integrity in the company’s security reporting.

What It Detects:

  • KPI Inflation: Inflated scores reported without actual improvement or evidence of progress, suggesting metrics manipulated to show better performance than exists.
  • Success Criteria Shifts: Alterations in success criteria after a breach has been detected, potentially to maintain the appearance of security effectiveness.
  • Reporting Distortion: Vague descriptions and use of euphemisms in reporting that may obfuscate actual issues or progress, leading to an unclear understanding of the company’s security posture.
  • Policy Evasion: Changes in policies or practices to avoid audits or scrutiny, which might be used to hide non-compliance with certain regulations or standards.
  • KPI Manipulation Post-Breach: Adjustments to metrics and criteria after a breach has occurred, aimed at maintaining the illusion of strong security measures (a date-comparison sketch follows this list).
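
Checks such as post-breach KPI or criteria changes can be approximated by comparing the dates of metric or criteria revisions against known breach dates. The sketch below is an assumed heuristic, not the scanner’s implementation.

```python
from datetime import date


def post_breach_revisions(
    breach_date: date,
    revisions: list[tuple[date, str]],
) -> list[tuple[date, str]]:
    """Return metric or success-criteria revisions made after the breach date;
    these may warrant review for post-breach KPI manipulation."""
    return [(d, desc) for d, desc in revisions if d > breach_date]


# Example:
# post_breach_revisions(date(2024, 3, 1),
#                       [(date(2024, 2, 10), "baseline KPI set"),
#                        (date(2024, 4, 2), "success criteria revised")])
```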

Inputs Required:

  • domain (string): The internet domain name under investigation (e.g., google.com).
  • company_name (string): The official or recognizable name of the company operating that domain (e.g., “Google LLC”).

Business Impact: The detection and reporting of such politicization patterns are crucial as they can significantly impact a company’s reputation, trustworthiness, and regulatory compliance. Misleading metrics could lead to misguided decisions by stakeholders, potentially causing financial losses or legal repercussions. Moreover, it undermines the integrity of security measures that should be transparent and reliable.

Risk Levels:

  • Critical: When multiple instances of KPI inflation are detected alongside evidence of policy evasion and significant reporting distortion, indicating a severe lack of transparency and potential fraudulent activities.
  • High: When there is substantial evidence of KPI manipulation or alterations in success criteria post-breach, suggesting a high level of deception in the company’s security performance metrics.
  • Medium: When isolated instances of misreporting or evasion are detected without strong supporting evidence for other critical risks, indicating potential ethical concerns but lower overall risk if mitigated appropriately.
  • Low: When no significant deviations from standard reporting practices are found and all metrics align with actual progress and compliance efforts.
  • Info: When minor discrepancies in reporting are identified but do not significantly impact the company’s security posture or financial standing, considered more as informational rather than critical.


Example Findings:

  • A company might be found to have inflated its security scores by manipulating data points without actual improvements in security measures.
  • A company might show evidence of altering success criteria post-breach to maintain a false sense of security.