Forward Proxy vs Reverse Proxy: Understanding the Difference
The word "proxy" means acting on behalf of someone else. In networking, a forward proxy acts on behalf of clients — it sits between users and the internet, forwarding their requests to web servers. VPNs and corporate web filters use forward proxies. The server sees the proxy's IP address rather than the client's real IP.
A reverse proxy is the mirror image: it sits between the internet and your backend servers, forwarding incoming requests to the appropriate server. From the client's perspective, they are talking directly to your service. The client sees the reverse proxy's IP and never knows the backend servers exist.
Reverse proxies are ubiquitous in modern web architecture. When you visit a high-traffic website, you are almost certainly interacting with a reverse proxy layer — Nginx, HAProxy, Caddy, AWS Application Load Balancer, or a CDN edge node — before your request reaches any application code. Understanding reverse proxies is foundational to understanding CDNs, load balancers, and API gateways.
Use our HTTP headers checker to see reverse proxy fingerprints in response headers — Nginx identifies itself in the Server header, and many reverse proxies add X-Forwarded-For headers containing the original client IP.
Core Functions of a Reverse Proxy
Reverse proxies serve multiple critical functions simultaneously:
SSL/TLS Termination: The reverse proxy handles HTTPS encryption and decryption. Backend servers receive plain HTTP, eliminating the need to manage TLS certificates on every backend. Centralizing TLS termination simplifies certificate rotation and allows TLS 1.3 to be deployed without upgrading application servers.
Load Balancing: Requests are distributed across multiple backend servers to prevent any single server from being overwhelmed. Common algorithms: round-robin (each server in turn), least-connections (server with fewest active requests), IP hash (same client always goes to same server — useful for session affinity), and weighted (servers with more capacity get proportionally more traffic).
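In Nginx, each of these algorithms is a one-line change in the upstream block. A sketch (server addresses and pool names are placeholders):

```nginx
upstream rr_pool {                  # round-robin is the default
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

upstream lc_pool {
    least_conn;                     # fewest active connections wins
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

upstream sticky_pool {
    ip_hash;                        # same client IP -> same backend
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

upstream weighted_pool {
    server 10.0.0.1:8080 weight=3;  # receives roughly 3x the traffic
    server 10.0.0.2:8080 weight=1;
}
```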
Health Checking: The proxy periodically checks whether backend servers are responding correctly. Failed health checks remove a server from the pool automatically, so traffic only routes to healthy backends.
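Open-source Nginx does passive health checking: it marks a backend as failed based on real request errors rather than dedicated probes. A sketch, with illustrative thresholds:

```nginx
upstream backend_pool {
    # After 3 failed attempts, remove the server from rotation
    # for 30 seconds, then try it again.
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}
```

Active periodic probes require the commercial NGINX Plus `health_check` directive; HAProxy provides them in its open-source core via the `check` option.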
Request/Response Modification: Headers can be added, removed, or rewritten. The reverse proxy adds X-Forwarded-For with the client's real IP so backend logs show actual client addresses instead of the proxy's IP. It can strip internal headers before they reach clients and add security headers.
Compression: The proxy compresses responses with gzip or Brotli, reducing bandwidth usage without requiring application changes.
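Enabling gzip at the proxy is a few directives in the `http` or `server` context; the values below are illustrative starting points, not tuned recommendations:

```nginx
gzip on;
gzip_comp_level 5;            # balance CPU cost vs. compression ratio
gzip_min_length 256;          # skip tiny responses where gzip adds overhead
gzip_types text/plain text/css application/json application/javascript;
gzip_vary on;                 # emit Vary: Accept-Encoding for downstream caches
```

Brotli is not built into stock Nginx; it requires the third-party ngx_brotli module.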
Caching: Static and semi-static responses can be cached at the proxy layer, reducing backend load.
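A minimal proxy-cache sketch, assuming a `backend_pool` upstream; the cache path, zone name, and TTLs are placeholders:

```nginx
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    location /static/ {
        proxy_cache app_cache;
        proxy_cache_valid 200 301 10m;                 # cache successful responses for 10 min
        proxy_cache_use_stale error timeout updating;  # serve stale copies if the backend is down
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS, useful for debugging
        proxy_pass http://backend_pool;
    }
}
```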
Nginx as a Reverse Proxy: Practical Configuration
Nginx is the most widely deployed reverse proxy and web server. A minimal reverse proxy configuration:
```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;

    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

upstream backend_pool {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080 backup;
}
```
This configuration handles TLS termination, security header injection, and load balancing across two primary backend servers using the least-connections algorithm, with a third server marked backup that receives traffic only when both primaries are unavailable.
The X-Forwarded-For header passes the real client IP to backend applications. Without this, all backend traffic appears to originate from the proxy's IP — breaking analytics, rate limiting, and geo-blocking.
Reverse Proxies as a Security Layer
Positioned at the network perimeter, reverse proxies are a natural place to implement security controls:
Hiding backend topology: The proxy masks backend server IP addresses. Attackers cannot directly target application servers, reducing attack surface. Our IP lookup tool will show the proxy's IP, not the application server's.
Rate limiting: Nginx and HAProxy can limit requests per IP per second, throttling brute-force attacks, credential stuffing, and scraping. Exceeding limits returns 429 Too Many Requests.
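In Nginx, rate limiting is built on a shared-memory zone keyed by client IP. A sketch, assuming a `backend_pool` upstream; the zone name and limits are illustrative:

```nginx
# 10 requests/second per client IP, tracked in a 10 MB zone.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location /login {
        limit_req zone=per_ip burst=20 nodelay;  # allow short bursts, reject the rest
        limit_req_status 429;                    # return 429 instead of the default 503
        proxy_pass http://backend_pool;
    }
}
```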
IP allowlisting/blocklisting: Administrative interfaces (admin panels, internal APIs) can be restricted to specific IP ranges at the proxy layer, before requests reach the application.
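An allowlist is a short `allow`/`deny` sequence evaluated in order; the ranges below are placeholders for your own networks:

```nginx
location /admin/ {
    allow 192.168.1.0/24;   # example: office VPN range
    allow 10.0.0.0/8;       # example: internal network
    deny  all;              # everyone else receives 403 Forbidden
    proxy_pass http://backend_pool;
}
```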
Web Application Firewall (WAF) integration: ModSecurity with the OWASP Core Rule Set can be deployed as an Nginx module, inspecting requests for SQL injection, XSS, and other attack patterns.
DDoS mitigation: Connection limits, request size limits, and suspicious pattern detection at the proxy layer can absorb moderate DDoS attacks before they reach backend servers.
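The connection and size limits mentioned above map to a handful of directives; the numbers are illustrative starting points, not tuned values:

```nginx
limit_conn_zone $binary_remote_addr zone=per_ip_conn:10m;

server {
    limit_conn per_ip_conn 20;   # max concurrent connections per client IP
    client_max_body_size 1m;     # reject oversized request bodies outright
    client_header_timeout 10s;   # drop slow-header (Slowloris-style) clients
    client_body_timeout 10s;
}
```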
Use our port checker to verify your reverse proxy is the only entry point to your servers — backend ports (8080, 8443) should not be publicly accessible.
Service Mesh and API Gateway: Extending the Reverse Proxy Pattern
In microservices architectures, the reverse proxy pattern extends into two specialized forms:
API Gateway: A reverse proxy specialized for API traffic. It handles authentication (JWT validation, OAuth token introspection), request routing to different backend services based on path or headers, rate limiting per API key, request/response transformation, and API versioning. Examples: Kong, AWS API Gateway, Azure API Management.
Service Mesh: In container environments (Kubernetes), a service mesh deploys a sidecar proxy (typically Envoy) alongside every service container. East-west traffic (service-to-service within the cluster) is routed through these proxies, providing mTLS encryption, distributed tracing, circuit breaking, and retries for all internal communication without requiring application code changes. Examples: Istio, Linkerd, Consul Connect.
The key architectural insight is that the reverse proxy pattern — interposing a proxy between communicating parties to add capabilities — applies at every layer: at the internet edge (CDN/WAF), at the data center perimeter (load balancer), at the application tier (API gateway), and within the service mesh (sidecar proxy).
Understanding how your requests are proxied and modified helps when debugging — use our headers tool to see which proxy headers are present and trace the request path.

Frequently Asked Questions
What is the difference between a reverse proxy and a load balancer?
Load balancing is a function that reverse proxies often perform, but the two are not synonymous. A reverse proxy can do much more than load balancing: TLS termination, caching, header modification, WAF, compression. A dedicated load balancer (like an AWS NLB) operates at Layer 4 (TCP) and purely distributes connections — it does not inspect or modify HTTP content. The terms are often used interchangeably in practice.
How does X-Forwarded-For affect my application?
When a reverse proxy forwards requests, the client IP seen by the backend is the proxy's IP. The <code>X-Forwarded-For</code> header contains the real client IP. Your application must be configured to trust this header only when requests actually come through your proxy — trusting it blindly allows attackers to spoof any IP by setting the header themselves.
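When Nginx itself sits behind another trusted proxy (a CDN, for example), its realip module is the standard way to honor the header only from known hops. A sketch — the CIDR is a placeholder for your actual proxy range, and the module must be compiled in (it is in most distribution builds):

```nginx
set_real_ip_from 203.0.113.0/24;   # only trust X-Forwarded-For from this range
real_ip_header X-Forwarded-For;
real_ip_recursive on;              # walk past multiple trusted hops to the client IP
```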
Can I see which reverse proxy a website is using?
Yes — check the <code>Server</code> response header using our <a href="/headers">HTTP headers tool</a>. Common values: <code>nginx</code>, <code>Apache</code>, <code>cloudflare</code>, <code>AmazonS3</code>, <code>ECS</code> (Edgecast/Edgio). Many security-conscious operators suppress or spoof this header to avoid disclosing their infrastructure.
What does 'upstream' mean in Nginx configuration?
In Nginx, an 'upstream' block defines a pool of backend servers. 'Upstream' refers to the servers that Nginx proxies requests to — the direction is from the client's perspective: you (downstream) connect to Nginx (the proxy), which connects to your application servers (upstream). The term comes from how network engineers think about traffic flowing toward the origin.
