We’ve been receiving an increasing number of vulnerability reports that appear to be AI-generated or heavily AI-assisted. Many are low-signal (vague, non-reproducible, incorrect, or missing impact analysis), which creates triage load and slows response for legitimate reports.
This issue is to discuss changes to our intake/triage process: policy updates, clearer report requirements, and possibly some lightweight automation to flag “likely AI-generated / low-signal” submissions without rejecting valid reports.
Triage time is being consumed by reports that:
- lack reproducible steps / PoCs
- misinterpret expected behaviour as a vulnerability
- include generic text and claims not supported by evidence
- ... etc
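As a starting point for the automation idea, here is a minimal sketch of a pre-triage check that flags reports matching the criteria above. The field names (`body`, `steps_to_reproduce`, `impact`) and the phrase list are assumptions for illustration, not an actual HackerOne schema:

```python
# Hypothetical pre-triage check: flag low-signal reports before a human
# looks at them. Field names and phrases are illustrative assumptions,
# not a real H1/Node.js report schema.

LOW_SIGNAL_PHRASES = [
    "as an ai language model",
    "this could potentially lead to",
    "it is recommended to sanitize",
]

def flag_low_signal(report: dict) -> list[str]:
    """Return reasons a report looks low-signal (empty list = looks OK)."""
    reasons = []
    body = report.get("body", "").lower()
    if not report.get("steps_to_reproduce"):
        reasons.append("missing reproduction steps / PoC")
    if not report.get("impact"):
        reasons.append("missing impact analysis")
    for phrase in LOW_SIGNAL_PHRASES:
        if phrase in body:
            reasons.append(f"generic boilerplate phrase: {phrase!r}")
    return reasons
```

A non-empty result would only add a triage label, not auto-close the report, so valid submissions with sparse formatting still reach a human.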
My suggestions would be:
- Update our HackerOne policy to clarify that AI-assisted reports are not allowed.
- Find some mechanism to help us identify duplicates (H1 already has one, but it's not very clear) and automatically close them
- Should we require at least a non-negative user score before someone can report a vulnerability to Node.js?
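On the duplicate-identification point, a simple first pass could compare a new report's title against recent submissions. This is a sketch only; the threshold and data shapes are assumptions, and a real mechanism would also compare report bodies and affected versions:

```python
# Hypothetical duplicate check: score a new report title against existing
# titles with a plain sequence-similarity ratio. Threshold is a guess.
import difflib

def find_likely_duplicates(new_title: str, existing_titles: list[str],
                           threshold: float = 0.8) -> list[str]:
    """Return existing titles whose similarity to new_title meets threshold."""
    matches = []
    for title in existing_titles:
        ratio = difflib.SequenceMatcher(
            None, new_title.lower(), title.lower()).ratio()
        if ratio >= threshold:
            matches.append(title)
    return matches
```

Anything this flags could be routed to a "possible duplicate" queue rather than closed outright, since title similarity alone will produce false positives.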
cc: @nodejs/security-triage @nodejs/tsc @nodejs/security-wg