Handling high volume of AI-generated vulnerability reports #1541

@RafaelGSS

Description

We’ve been receiving an increasing number of vulnerability reports that appear to be AI-generated or heavily AI-assisted. Many are low-signal (vague, non-reproducible, incorrect, or missing impact analysis), which creates triage load and slows response for legitimate reports.

This issue proposes discussing changes to our intake/triage process: policy updates, clearer report requirements, and possible lightweight automation to flag “likely AI-generated / low-signal” submissions without rejecting valid reports.

Triage time is being consumed by reports that:

  • lack reproducible steps / PoCs
  • misinterpret expected behaviour as a vulnerability
  • include generic text and claims not supported by evidence
  • ... etc
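The "lightweight automation" mentioned above could start as a simple heuristic score over incoming reports. A minimal sketch, assuming a report arrives as a dict; the field names (`body`, `poc`, `impact`) and the boilerplate phrase list are illustrative assumptions, not part of any HackerOne API:

```python
import re

# Phrases common in generic/template-style report text; purely illustrative.
LOW_SIGNAL_PATTERNS = [
    r"\bas an ai\b",
    r"\bcould potentially\b.*\ballow an attacker\b",
    r"\bit is recommended to\b",
]

def low_signal_score(report: dict) -> int:
    """Count red flags in a report; higher means more likely low-signal.

    This never auto-rejects: the score is only a triage-ordering hint,
    so valid reports are not dropped by the heuristic.
    """
    score = 0
    body = report.get("body", "").lower()
    # Red flag: no PoC and no reproduction steps supplied.
    if not report.get("poc") and "steps to reproduce" not in body:
        score += 1
    # Red flag: generic boilerplate phrasing.
    score += sum(1 for p in LOW_SIGNAL_PATTERNS if re.search(p, body))
    # Red flag: no concrete impact analysis.
    if not report.get("impact"):
        score += 1
    return score

vague = {"body": "This could potentially allow an attacker to do bad things."}
solid = {"body": "Steps to reproduce: ...", "poc": "attached", "impact": "RCE"}
print(low_signal_score(vague))  # → 3
print(low_signal_score(solid))  # → 0
```

The point of scoring rather than filtering is that a human still reviews everything; flagged reports just go to the back of the triage queue instead of being closed automatically.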

My suggestions would be:

  1. Update our HackerOne policy to clarify that AI-assisted reports are not allowed.
  2. Find some mechanism that would help us identify duplicates (H1 already has that, but it's not very clear) and automatically close them.
  3. Should we require at least a non-negative reporter score before a vulnerability can be reported to Node.js?

cc: @nodejs/security-triage @nodejs/tsc @nodejs/security-wg
