AI Trust, Privacy, and Security

This document provides an overview of Bugcrowd’s AI principles, infrastructure, data handling protocols, and security controls for user-facing features such as AI Triage Assistant and AI Analytics.

Statement of principles

At Bugcrowd, our mission is to empower organizations to stay ahead of threat actors via Crowd+AI: human ingenuity amplified by machine speed and scale. For years, Bugcrowd has used machine learning internally to improve the efficiency of services like tester activation and managed triage. We are increasingly adopting AI models in user-facing products, benefiting our customers and our community of hackers, researchers, pentesters, and red team operators.

As this journey continues, we’ve committed to these key principles for the responsible use of AI within our platform and services:

Never exposing private information

The privacy and security of your personal data are top priorities. We are committed to protecting that information and to never using it irresponsibly. Our use of AI is strictly governed by our existing privacy and data security policies, as reflected by our compliance with ISO 27001, SOC 2, SOC 3, and NIST 800-53.

What this means for customers and researchers:

  • We do not allow third parties to train AI, LLM, or generative AI models on customer or researcher data.
  • Customers maintain full authority over our user-reachable AI features and may disable them at any time via the Global Control of LLMs, as detailed below.

Keeping humans in the loop

We believe that AI should be used to empower and assist our teams. For some fully managed services, human judgment, empathy, and expertise can be essential to providing you with the best possible outcomes.

What this means for customers and researchers:

  • For fully managed services like triage and penetration testing, we will offer customers the choice of human review before outcomes are delivered. Humans will always be able to specify what is in and out of bounds.

Maintaining transparency and accountability

Our AI systems are designed to be transparent and governable.

What this means for customers:

  • For AI features made available to customers, our platform includes controls that allow you to enable or disable them. (See the Global Control of LLMs section below.)

Acting fairly and without discrimination

We are committed to using AI fairly. Our teams work to ensure our AI systems do not perpetuate or create discrimination. We strive to be transparent about how we use AI to serve you.

What this means for customers and hackers:

  • We do not use AI to intentionally target, profile, or treat individuals differently based on protected characteristics such as race, gender, or national origin.

Empowering the community through the responsible use of GenAI tools

GenAI tools are becoming a common part of security research. As a result, we have updated our community Code of Conduct to include guidelines on acceptable and unacceptable uses of these tools in hunting and reporting.

What this means for customers and researchers:

  • Researchers remain fully accountable for the behavior of GenAI tools. Using GenAI does not exempt them from strict compliance with platform rules or specific program scopes.
  • Researchers must manually verify the accuracy and reproducibility of any findings assisted by GenAI. Automated or unverified outputs are not accepted as valid submissions.

Technical architecture and model orchestration

Foundational models

All inference occurs within Bugcrowd’s private, isolated cloud platform. The models are used either as a logic engine for vulnerability analysis, summarization, and triage assistance, or for data processing and generating vector embeddings.
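
To illustrate how an orchestration layer can separate these two roles, here is a minimal sketch that routes a request either to the logic-engine path or to the embedding path. It is illustrative only; names such as route_request, run_logic_engine, and generate_embeddings are hypothetical and do not describe Bugcrowd’s internal interfaces.

  from dataclasses import dataclass
  from typing import Literal

  @dataclass
  class AIRequest:
      # "analysis" covers vulnerability analysis, summarization, and triage
      # assistance; "embedding" covers data processing and vector embeddings.
      task: Literal["analysis", "embedding"]
      payload: str

  def run_logic_engine(text: str) -> str:
      # Placeholder for an internal model call performing analysis,
      # summarization, or triage assistance inside the private network.
      return f"analysis: {text[:40]}"

  def generate_embeddings(text: str) -> list[float]:
      # Placeholder for an internal model call producing vector embeddings.
      return [0.0] * 8

  def route_request(request: AIRequest):
      # Dispatch to the appropriate in-network model path; nothing here
      # leaves the private, isolated cloud platform.
      if request.task == "analysis":
          return run_logic_engine(request.payload)
      return generate_embeddings(request.payload)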

Security controls and data isolation

Cross-organization data isolation

Bugcrowd prevents data leakage through a segregated service model: data services and AI processing services are isolated from each other. The data service provides context to the AI layer only after verifying the user’s permissions and confirming that the Global Control of LLMs is enabled (see the section below).
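
This gating can be pictured with the sketch below: context reaches the AI layer only when the organization’s Global Control of LLMs is enabled and the requesting user is allowed to see the resource. The function names (llm_enabled_for_org, user_can_access, fetch_resource) are placeholders for internal checks, not actual Bugcrowd interfaces.

  def llm_enabled_for_org(org_id: str) -> bool:
      # Placeholder: would read the organization's Global Control of LLMs setting.
      return True

  def user_can_access(user_id: str, resource_id: str) -> bool:
      # Placeholder: would apply the platform's existing user-level permissions.
      return True

  def fetch_resource(org_id: str, resource_id: str) -> dict:
      # Placeholder: the segregated data service returning scoped context.
      return {"org": org_id, "resource": resource_id}

  class ContextAccessDenied(Exception):
      """The data service refused to provide context to the AI layer."""

  def get_context_for_ai(org_id: str, user_id: str, resource_id: str) -> dict:
      # 1. The organization-level Global Control of LLMs must be enabled.
      if not llm_enabled_for_org(org_id):
          raise ContextAccessDenied("GenAI features are disabled for this organization")
      # 2. The AI layer never gets broader access than the requesting user.
      if not user_can_access(user_id, resource_id):
          raise ContextAccessDenied("User lacks permission for this resource")
      # 3. Only then does the separate data service return context.
      return fetch_resource(org_id, resource_id)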

Governance and data policies

Global Control of LLMs

Organization Owners manage GenAI availability at the organization level. These capabilities can be enabled or disabled at any time via the Global Control of LLMs; note that disabling the control deactivates the features that use generative AI to accelerate security workflows.

  • Configuration: For details on enabling/disabling features and a list of specific LLM capabilities, refer to the Global Control of LLMs documentation.
  • Opt-out effectiveness: Disabling these AI features prevents the data service from providing context to the AI orchestration layer. Audits verify that these settings restrict the AI service from accessing organization data.

Humans-in-the-loop (HITL)

Bugcrowd’s user-reachable AI features are limited to read-only data access; they cannot autonomously modify report statuses, trigger payments, or send external communications. All such actions require explicit human approval.
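
One way to picture this pattern: the assistant records a suggestion that has no effect until a person explicitly approves it. The sketch below is illustrative only, with hypothetical names (Suggestion, TriageAssistant, apply_status_change); it is not Bugcrowd’s implementation.

  from dataclasses import dataclass

  @dataclass
  class Suggestion:
      report_id: str
      proposed_status: str      # e.g. "triaged" or "duplicate"
      approved: bool = False

  def apply_status_change(report_id: str, status: str, actor: str) -> None:
      # Placeholder for the platform action that actually changes state;
      # it is only ever reached from an explicit human approval.
      print(f"{actor} set report {report_id} to {status}")

  class TriageAssistant:
      """AI output is advisory: it queues suggestions but never applies them."""

      def __init__(self) -> None:
          self.pending: list[Suggestion] = []

      def suggest(self, report_id: str, proposed_status: str) -> Suggestion:
          # Read-only step: the suggestion is recorded, but nothing on the
          # platform changes and no external communication is sent.
          suggestion = Suggestion(report_id, proposed_status)
          self.pending.append(suggestion)
          return suggestion

      def approve(self, suggestion: Suggestion, reviewer: str) -> None:
          # Only this explicit human action results in a state change.
          suggestion.approved = True
          apply_status_change(suggestion.report_id, suggestion.proposed_status, actor=reviewer)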

Frequently asked questions about user-reachable AI features

Does the AI have direct database access?
No. It interacts with the platform through scoped APIs that enforce existing user-level permissions.

How does Bugcrowd prevent cross-contamination or data leakage?
See the “Security controls and data isolation” section above.

How is the risk of prompt injection managed?
Our security model assumes that a highly skilled adversary will always be able to bypass LLM guardrails. We mitigate such attacks at the architecture level by ensuring that LLM-based solutions at Bugcrowd have:

  • Read-only access to data: Our LLM-based solutions are productivity tools; any output they emit must be reviewed by a user and no change is made without the user taking an explicit action.
  • User-level data access: When a user interacts with an LLM-based solution, the access level of the AI assistant is limited by the user’s access level. In other words, the AI feature can only access data the user can access.
  • No external connectivity: Model inference runs locally within Bugcrowd’s private, isolated cloud network.
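
To picture how these constraints contain an injected prompt, consider a tool registry that exposes only read-only, user-scoped operations to the model, so even a fully successful injection has no write path to reach. The sketch below uses made-up names (READ_ONLY_TOOLS, fetch_report_as, search_reports_as) and is not Bugcrowd’s implementation.

  def fetch_report_as(user: str, report_id: str) -> dict:
      # Placeholder: a read-only lookup executed with the user's own permissions.
      return {"id": report_id, "visible_to": user}

  def search_reports_as(user: str, query: str) -> list[dict]:
      # Placeholder: a read-only search scoped to what the user can already see.
      return [{"query": query, "visible_to": user}]

  # Only read-only, user-scoped operations are registered; no write tool
  # exists for the model to call, so an injected instruction has no
  # destructive action available to it.
  READ_ONLY_TOOLS = {
      "get_report": fetch_report_as,
      "search_reports": search_reports_as,
  }

  def call_tool(user: str, name: str, *args):
      tool = READ_ONLY_TOOLS.get(name)
      if tool is None:
          raise PermissionError(f"Tool {name!r} is not available to the assistant")
      # Every call runs as the requesting user, never with elevated access.
      return tool(user, *args)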