From Build to Browser
What Caddy Should Know About Your Astro Site
In the name of Allah, the Most Gracious, the Most Merciful
The gap between building and serving
You have a website. It loads, it works, it shows what you meant to show. But do you know whether returning visitors see your pages load instantly or re-download everything on every click? Whether the connection can be downgraded to HTTP? Whether your homepage shows up as one domain or two in Google? For most sites, the answers are: no, yes, and yes. That last layer of the stack is the web server config, and most businesses never set it.
Astro does a lot of work at build time. It compiles pages to static HTML, hashes every asset into the _astro/ directory, and produces files ready for any static server. The CI pipeline adds another layer: pre-compressing everything into .br, .gz, and .zst sidecar files so the server never has to compress on the fly.
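The sidecar step can be sketched as a small script. This is a sketch, not the actual pipeline: the CLI tool names (`brotli`, `gzip`, `zstd`) and the `dist/` output directory are assumptions, and any compressor that isn't installed is simply skipped.

```shell
#!/usr/bin/env bash
# Pre-compress build output into .br/.gz/.zst sidecars next to each file.
# Caddy's `precompressed` directive then serves these directly.
precompress() {
  local dist="${1:-dist}"
  find "$dist" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' -o -name '*.svg' \) |
  while IFS= read -r f; do
    if command -v brotli >/dev/null; then brotli -f "$f"; fi     # writes $f.br, keeps $f
    if command -v gzip   >/dev/null; then gzip -kf -9 "$f"; fi   # writes $f.gz, keeps $f
    if command -v zstd   >/dev/null; then zstd -qf "$f"; fi      # writes $f.zst, keeps $f
  done
}
```

Run it as the last CI step after `astro build`, so every deployable file already has its compressed twins before the server ever sees a request.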
All of that effort can be quietly undone by the last layer in the chain: the web server configuration. Caddy serves the files, but it doesn’t know what it’s serving. It doesn’t know that _astro/ files have content hashes in their names. It doesn’t know that HTML changes on every deploy while CSS doesn’t. It doesn’t know that the site is HTTPS-only with no reason to ever be iframed.
That knowledge gap is what the config is for. And for a while, mine looked like this:
```caddyfile
(jav_static) {
	header /favicon.svg Cache-Control "max-age=3600, must-revalidate"

	file_server {
		precompressed br gzip zstd
	}

	handle_errors {
		rewrite * /404.html
		file_server
	}
}
```

It worked. Files got served, pre-compressed assets got picked up, the 404 page showed when it should. But that’s about it. No security headers, no cache strategy beyond the favicon, no www redirect. The browser was making decisions that should have been mine. And anyone who ran a security scan or checked page speed would see exactly how much was left on the table.
Quick definitions
What is a security header?
A security header is an instruction your web server sends with every response, telling browsers how to handle the page. Not about what the page shows, but about what browsers are allowed to do with it: whether it can appear in an iframe, whether the connection must use HTTPS, whether only certain sources are trusted. The browser enforces them; you set them once in your server config.
What is browser caching?
Browser caching is when a browser saves a local copy of a file (a stylesheet, a font, an image) so it doesn’t have to re-download it on the next visit. The web server controls how long that copy stays valid and whether the browser should check for a newer version before using it.
What is a content hash?
A content hash is a short string embedded in a filename based on what the file contains. arc.FDYzXBdN.js has that string because of exactly what the file contains at that moment. Change the file, the hash changes, and so does the filename. The browser sees it as a new file and fetches the new version, while its cached copy of the old filename stays valid. This is what makes caching those files forever safe.
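The mechanic can be shown in a few lines of shell. This is illustrative only, Astro's actual hashing algorithm and encoding differ, but the property is identical: change the content and the filename changes.

```shell
# Build a hashed filename from a file's content.
# Illustrative sketch, not Astro's real scheme.
hashed_name() {
  local f="$1" digest
  digest=$(sha256sum "$f" | cut -c1-8)   # short content digest
  echo "${f%.*}.${digest}.${f##*.}"      # arc.js -> arc.<digest>.js
}
```

Edit the file and the function returns a different name. The old URL is never reused, which is exactly why a year-long cache on it can never go stale.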
What actually matters for a static site
I went through the Caddy directives I was evaluating: headers, caching, compression, logging, TLS options, server timeouts, Prometheus metrics. Logging, TLS, timeouts, and metrics didn’t need decisions for a static site of this size. Caddy’s defaults apply. The interesting choices sit in headers, caching, and compression. The question wasn’t “what can Caddy do?” but “what does my site actually need?”
A static site with no login, no forms, no user data, and no backend has a very different threat model than a web application. Most security guides are written for the latter. Applying them blindly to a static site means adding complexity that protects against nothing.
So I evaluated each option against a simple test: does this address a real threat, or does it just make a scanner happy?
Three security headers, not six
The internet has plenty of lists telling you to add every security header that exists. What got added, and what didn’t.
Strict-Transport-Security addresses a real threat. Without it, a visitor on public WiFi could have their connection downgraded to HTTP and intercepted. The site is already HTTPS-only, so this header just tells browsers to never try HTTP. One year, including subdomains, with preload to opt into the HSTS preload list that browsers ship hardcoded. The cost is real: every current and future subdomain must serve HTTPS or become unreachable. Removal from the preload list is out of your hands: it happens in browser release cycles, not in your config. Accepted here because this site is HTTPS-only by intent, and Caddy provisions certificates for every subdomain automatically via Let’s Encrypt.
X-Content-Type-Options: nosniff prevents browsers from guessing content types. Without it, a browser might interpret a text file as executable code. One value, no configuration, no trade-offs.
X-Frame-Options: DENY blocks anyone from embedding the site in an iframe. Prevents clickjacking, where an attacker overlays invisible UI on top of your page to steal clicks. No reason for this site to be iframed, so DENY is the right answer.
```caddyfile
header {
	Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
	X-Content-Type-Options "nosniff"
	X-Frame-Options "DENY"
}
```

That’s it. Three headers that address three real attack vectors. The rejected options, and why:
Referrer-Policy already defaults to strict-origin-when-cross-origin in major browsers. Setting it explicitly changes nothing. External sites see your domain as the referrer (good for SEO), but not the full path. This is already the correct behavior without touching the config.
Permissions-Policy disables browser APIs like camera, microphone, geolocation. But a static site doesn’t use these APIs. Blocking something you’re not using doesn’t protect you from anything. If an attacker can inject JavaScript to access the camera, you have far bigger problems than a missing header. And if you embed YouTube videos with fullscreen enabled, as I do, you’d have to carefully carve out exceptions. More complexity for zero security benefit.
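For a sense of scale, here is roughly what that carve-out would look like if the header were added anyway. This is a hypothetical sketch, not a tested policy: the YouTube origin and the exact directive list are assumptions.

```caddyfile
# Hypothetical: lock down unused APIs but allow fullscreen for embedded players.
header Permissions-Policy "camera=(), microphone=(), geolocation=(), fullscreen=(self \"https://www.youtube.com\")"
```

Every future embed would need its own exception, which is the maintenance cost the paragraph above is rejecting.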
Cross-Origin-Opener-Policy prevents other sites from getting a reference to your browser window. This matters for sites with authentication flows where a popup could steal tokens. A static site has no auth, no tokens, no popups.
Content-Security-Policy is a runtime allowlist for scripts, styles, fonts, and images. It contains cross-site scripting, blocks data exfiltration, and catches compromised third-party code, regardless of how that code got into the page. For sites that render untrusted data (forms, user content, external APIs) it mitigates an active surface; for sites without those input paths, it guards against less likely but higher-impact events: a compromised analytics script, a tampered build, a poisoned CDN, a future change that quietly adds an input path. The cost is real: every resource inventoried, silent breakage on mistakes. Astro 6 added an integration that covers what it emits; iframes, external fonts, and third-party runtime services stay manual. CSP will inshallah follow when the audit fits the schedule, or sooner if the threat model changes.
The pattern: each skipped header either duplicates a browser default, blocks something that isn’t a threat, or requires work that hasn’t been done yet. None of them were rejected because security doesn’t matter. They were rejected because they don’t do what people think they do, at least not for a static site.
Cache strategy: let the build output guide you
This is where knowing your framework’s output matters most.
Astro puts every processed asset into the _astro/ directory with a content hash in the filename: JS, CSS, fonts, SVGs, images. arc.FDYzXBdN.js. If the content changes, the hash changes, and the filename changes. The old URL is never reused.
This means the browser can cache these files forever. Not “a long time.” Literally forever. The hash guarantees that if the content changes, the URL changes, so the browser will always request the new version. There’s no risk of serving stale content.
Without this, every repeat visitor re-downloads your CSS, JS, and fonts on every page load. On a mobile connection, that’s the difference between a site that feels instant and one that feels slow. The content hasn’t changed since their last visit, but the browser doesn’t know that.
```caddyfile
header /_astro/* Cache-Control "public, max-age=31536000, immutable"
```

`max-age=31536000` is one year. `immutable` tells the browser not to even bother revalidating. Don’t send a conditional request, don’t check if it changed, just use the cached copy. This is the common policy for hashed assets.
HTML files are different. They have no hash in the filename. /en/blog/some-post/index.html stays the same URL across deploys. When I publish a new version, the HTML changes but the URL doesn’t. So the browser needs to check with the server before using a cached copy.
```caddyfile
header ?Cache-Control "no-cache"
```

`no-cache` doesn’t mean “don’t cache.” It means “cache it, but ask the server first.” If the file hasn’t changed, Caddy returns a 304 Not Modified, a tiny response, practically free. If it has changed, the browser gets the new version. The `?` prefix means this only applies if no other Cache-Control was already set, so it doesn’t override the `_astro/*` or favicon rules.
Favicons sit in between. No hash, but they rarely change:
```caddyfile
header /favicon.svg Cache-Control "max-age=3600, must-revalidate"
header /favicon.ico Cache-Control "max-age=3600, must-revalidate"
```

Three rules, and each one follows directly from what Astro’s build produces. Hashed files get cached forever. Unhashed files get revalidated. That’s the entire strategy.
The small things
Without a www redirect, www.javedab.com and javedab.com are separate sites in Google’s eyes. Link equity splits between two domains. A 301 permanent redirect consolidates everything to one canonical URL and preserves the full path.
```caddyfile
www.javedab.com {
	redir https://javedab.com{uri} permanent
}
```

The CI pipeline pre-compresses all assets, and Caddy serves the sidecar files directly. But if a file somehow doesn’t get pre-compressed, a compression fallback should compress it on the fly rather than serving it uncompressed. The `encode` directive is a safety net. It should rarely fire, and if it does, it means something in the build pipeline needs attention.
```caddyfile
encode zstd gzip
```

ACME email is easy to overlook. Caddy manages TLS certificates automatically via Let’s Encrypt. If renewal fails, the site goes down with a TLS error. Visitors see a browser warning, and some won’t come back. Without an email on the ACME account, that failure is silent. You find out when a customer tells you your site looks broken. One line in the global config turns it into a warning you receive before that happens.
```caddyfile
{
	email javed@javedab.com
}
```

What the config looks like now
```caddyfile
(jav_static) {
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "DENY"
	}

	header /_astro/* Cache-Control "public, max-age=31536000, immutable"
	header /favicon.svg Cache-Control "max-age=3600, must-revalidate"
	header /favicon.ico Cache-Control "max-age=3600, must-revalidate"
	header ?Cache-Control "no-cache"

	encode zstd gzip

	file_server {
		precompressed br gzip zstd
	}

	handle_errors {
		rewrite * /404.html
		file_server
	}
}
```

Every line has a reason. Nothing is there because a blog post said to add it. The security headers address real threats. The cache rules match the build output. The compression fallback is a safety net, not the primary path.
A curl call confirms the policies are live at time of writing:
```shell
$ curl -sI https://javedab.com/ | grep -iE '^(cache-control|strict-transport|x-frame)'
cache-control: no-cache
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-frame-options: DENY
```

The point
Your web server config should reflect what your build pipeline produces. Generic defaults work, but they leave decisions to the browser that should be yours. Caddy’s defaults cover the hard parts: HTTP/3 is on, TLS is automatic, pre-compressed sidecar files are supported natively. But caching, security headers, and redirects are things only you can configure, because only you know what your framework outputs and what your site needs.
Reading the docs, evaluating each option, implementing the ones that matter, verifying with curl. The config went from five functional lines to something that reflects this specific site, not a generic template.
None of this is visible to a casual visitor. The site looked the same before and after. But it’s visible to anyone who checks. A potential partner who runs your domain through securityheaders.com. A client who opens DevTools and sees immutable on your assets. The kind of person who notices whether the infrastructure behind the page is as intentional as the page itself.
That’s what owning your infrastructure looks like. Not complicated. Just intentional.
Does this cache strategy work with frameworks other than Astro?
Yes, if your framework outputs content-hashed assets. Next.js, SvelteKit, Nuxt, and most Vite-based tools do this by default. Look for a `_next/static/`, `_app/immutable/`, or similar directory with hashed filenames in the build output. The Caddy rules are identical; only the path pattern changes. HTML files are never hashed in any framework, so the `no-cache` rule applies universally.
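For example, a Next.js build served by Caddy would use the same policy with only the path changed. `/_next/static/` is Next.js's hashed-asset directory; verify the pattern against your own build output before copying it.

```caddyfile
# Same strategy, Next.js paths: hashed assets forever, everything else revalidated.
header /_next/static/* Cache-Control "public, max-age=31536000, immutable"
header ?Cache-Control "no-cache"
```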
How do I verify that my headers and cache rules are actually live?
The quickest check is curl: `curl -sI https://yourdomain.com/ | grep -iE '^(cache-control|strict-transport|x-frame)'`. Each header you configured should appear in the output with the value you set.
For a broader picture, paste your domain into [securityheaders.com](https://securityheaders.com). It grades each header and flags what's missing. Browser DevTools (Network tab, any asset request) show the Cache-Control value Caddy returned for that specific file, which is useful for confirming the `_astro/*` and `no-cache` rules are hitting correctly.
What are the risks of enabling HSTS preload?
The main risk is irreversibility on short timescales. Once you submit to the preload list and browsers ship the update, every subdomain that doesn't serve valid HTTPS becomes unreachable: not just redirected, but blocked at the browser level before a connection is made. Removing yourself from the list requires a submission and then waiting for browser release cycles, which can take months.
Accept it only if every current and future subdomain will serve HTTPS. If you have subdomains you don't fully control, or internal tools that only run on HTTP, leave out `includeSubDomains` and `preload` and use the header without those flags.
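In Caddyfile terms, the conservative variant drops the two risky flags and keeps the HTTPS-only guarantee for this hostname alone:

```caddyfile
# HSTS for this hostname only: no subdomain lock-in, no preload commitment.
header Strict-Transport-Security "max-age=31536000"
```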