Investing in the security advisory experience on GitHub 🔐 #189802
Replies: 2 comments
- CI on private forks is by far the most critical for us. It takes too much time and resources to validate fixes without it. Tooling to triage faster would also be really useful, especially response templates and AI suggestions for de-duplication.
We hear you: the signal-to-noise problem is real
Over the past few months, we've heard from maintainers across the ecosystem - in community discussions, in support channels, and directly - that the volume of private vulnerability reports is increasing significantly. Some reports are AI-generated with minimal or no human review. Others describe real behavior but mischaracterize it as a vulnerability, requiring deep investigation to confirm non-impact.
The burden this places on maintainers is significant. Validating even a single poor-quality report can take hours, and when reports arrive in volume, the effect is cumulative: maintainer time is consumed, trust in the reporting channel erodes, and the value of private vulnerability reporting as a coordinated disclosure mechanism is undermined.
We're seeing this reflected in broader ecosystem trends too. Multiple high-profile projects have had to change their vulnerability intake processes, raise submission requirements, or discontinue bounty programs entirely. Industry working groups are formally tracking AI-generated low-quality security reports as a systemic challenge for open source.
What we're working on
We're investing across several areas to improve the experience. Here's what's on our radar at a high level:
Reducing the burden of low-quality reports
We're exploring ways to help maintainers triage faster and deal with fewer junk reports:
AI-assisted triage suggestions to help maintainers quickly assess incoming private vulnerability reports (PVRs) - surfacing plain-language claim assessments, duplicate and similarity checks against the Advisory Database, detection of codebase inconsistencies (e.g., "this report references a function that doesn't exist in this repository"), and scope validation against your SECURITY.md. Any AI-powered suggestions would be clearly labeled as AI-generated, visible only to maintainers, and would serve as a decision-support layer only - a human must always make the final call on every report.
Tooling for faster responses - canned response templates, bulk actions for managing multiple reports, and improved filtering and sorting in the triage view.
Raising the submission bar - we're looking at structured fields, rate limiting, and pre-submission validation to improve report quality before it reaches you.
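To make the two mechanisms above concrete, here is a rough sketch in Python of what pre-submission validation combined with a codebase-inconsistency check could look like. Everything here is hypothetical - the field names, the `validate_report` function, and the backtick-identifier heuristic are illustrative assumptions, not a GitHub API:

```python
import re
from pathlib import Path

# Hypothetical structured fields a submission form might require.
REQUIRED_FIELDS = {"summary", "affected_versions", "reproduction_steps"}

def validate_report(report: dict, repo_root: Path) -> list[str]:
    """Return a list of problems with an incoming report (empty list = passes).

    Checks two things: (1) all required structured fields are present,
    and (2) every code identifier the summary mentions in backticks
    actually appears somewhere in the repository's source.
    """
    problems = [f"missing required field: {f}"
                for f in sorted(REQUIRED_FIELDS - report.keys())]

    # Codebase-inconsistency check: concatenate the repo's Python sources
    # and look for each `identifier` the report claims to be about.
    source = "\n".join(p.read_text(errors="ignore")
                       for p in repo_root.rglob("*.py"))
    for ident in re.findall(r"`(\w+)`", report.get("summary", "")):
        if ident not in source:
            problems.append(
                f"report references `{ident}`, which was not found in the repository")
    return problems
```

A real implementation would need language-aware symbol resolution rather than a substring search, but even a heuristic like this catches the "references a function that doesn't exist" class of report before a maintainer spends time on it.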
Fine-grained permissions for security advisories
Today, there's no way to give someone security advisory access to a repository without making them an admin. The existing Security Manager role is org-wide only, which doesn't work for organizations that need per-repo scoping. We're working toward enabling fine-grained permissions for security advisories - create, read, edit, and close/accept/publish - that can be assigned at the repository level through custom repository roles. This would let you grant precisely the access your security responders need, on exactly the repositories that are relevant, without over-provisioning admin access.
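As a way to picture the permission model described above, the sketch below models per-repository advisory capabilities as composable flags. This is an illustrative Python model under assumed names (`AdvisoryPermission`, `CustomRepoRole`, the example repository), not GitHub's actual roles API:

```python
from dataclasses import dataclass, field
from enum import Flag, auto

class AdvisoryPermission(Flag):
    """Hypothetical per-repository advisory capabilities."""
    READ = auto()
    CREATE = auto()
    EDIT = auto()
    PUBLISH = auto()  # stands in for close/accept/publish

@dataclass
class CustomRepoRole:
    """A role whose grants are scoped per repository, not org-wide."""
    name: str
    grants: dict[str, AdvisoryPermission] = field(default_factory=dict)

    def allow(self, repo: str, perms: AdvisoryPermission) -> None:
        # Union new capabilities into whatever the repo already has.
        self.grants[repo] = self.grants.get(repo, AdvisoryPermission(0)) | perms

    def can(self, repo: str, perm: AdvisoryPermission) -> bool:
        return perm in self.grants.get(repo, AdvisoryPermission(0))

# A security responder who can read and edit advisories on exactly one
# repository, with no admin access and no rights anywhere else:
responder = CustomRepoRole("advisory-responder")
responder.allow("org/payments-service",
                AdvisoryPermission.READ | AdvisoryPermission.EDIT)
```

The point of the flag-based shape is that capabilities compose: a triage-only role gets READ, a publisher gets READ | PUBLISH, and neither implies repository administration.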
Enabling CI on advisory workspace private forks
One of the longest-standing requests we hear is to enable GitHub Actions on the temporary private forks created for security advisories. Right now, maintainers can't run their CI pipeline against a security fix before merging, which means you're either merging untested patches or maintaining a separate private fork workflow outside of our advisory tooling.
We're actively working through the security model required to enable Actions on these workspaces safely. The core challenge is ensuring that embargoed vulnerability details can't leak through webhook payloads, third-party app integrations, or untrusted workflow execution. This is complex work with real security implications, and we're being deliberate about getting it right - the integrity of the embargo model is foundational to the trust that makes coordinated disclosure work.
Our philosophy
A few things we want to be explicit about:
Humans remain in the loop. Any AI-assisted tooling will be informative and clearly labeled, and maintainers will decide what happens to every report. Features should ship with sensible defaults, provide granular controls, and give maintainers the ability to configure their experience to match their project's needs - not impose a one-size-fits-all policy. Repository administrators aren't always the right people to handle a vulnerability report, which is part of why fine-grained permissions matter: they let projects bring in the right humans for security response.
We're not trying to ban AI-assisted security research. AI tools can find real vulnerabilities, and legitimate researchers may use them as part of their workflow. The goal is to raise the quality floor for submissions and give maintainers better tools to separate signal from noise - not to penalize the tool, but to hold the human submitter accountable for the quality of what they submit.
We want your input
We're building this for the people who maintain the open source ecosystem, and your feedback directly shapes our priorities. Some questions we'd love your perspective on:
Triage workflow: What's the most time-consuming part of handling an incoming vulnerability report today? Where do you lose the most time - initial assessment, back-and-forth with reporters, investigating claims, or something else?
Report quality: What information, if required upfront from reporters, would most help you triage faster? (e.g., proof of concept, affected versions, reproduction steps, disclosure of AI tool usage)
Permissions: How are you currently managing who has access to your security advisories? What's broken about the current model?
CI on private forks: How are you working around the lack of CI today? Separate private repos? Merging untested? Something else?
Anything else: What's the single most impactful change we could make to the repository advisory and/or private vulnerability reporting experience?
Thank you for trusting GitHub with your coordinated disclosure workflows. The maintainer community's willingness to share honest, detailed feedback - even when it's critical - is what helps us prioritize the right work. We'll continue to share updates as these efforts progress.