* fileserver: Support glob expansion in file matcher
* Fix tests
* Fix bugs and tests
* Attempt Windows fix, sigh
* debug Windows, WIP
* Continue debugging Windows
* Another attempt at Windows
* Plz Windows
* Cmon...
* Clean up, hope I didn't break anything
After reading a question about the `handle_response` feature of `reverse_proxy`, I realized that we didn't have a way to serve an arbitrary file with a status code other than 200. This is an issue when you want to serve a custom error page from a route that is not an error route, like the aforementioned `handle_response`, where you may want to retain the status code returned by the proxy but write the response body from a file.
The feature is simple: if a status code is configured (either a numeric status code or a placeholder string), that status is written out before the file is served. Because we write the status code first, the stdlib won't write its own; only the first status code written wins.
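As a rough sketch of that mechanic in plain Go (illustrative only, not the actual file_server code; the path and status are made up):

```go
package main

import "net/http"

// Sketch only: write the desired status first, then serve the file.
// net/http honors the first WriteHeader call and ignores later ones
// (logging a "superfluous WriteHeader" warning), so the file's contents
// go out under the configured status code.
func serveErrorPage(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusBadGateway) // e.g. a status preserved from the proxy
	http.ServeFile(w, r, "/var/www/errors/502.html")
}

func main() {
	http.ListenAndServe(":8080", http.HandlerFunc(serveErrorPage))
}
```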
* encode: implement prefer setting
* encode: minimum_length configurable via caddyfile
* encode: configurable content-types to encode
* file_server: support precompressed files
* encode: use ResponseMatcher for conditional encoding of content
* linting error & documentation of encode.PrecompressedOrder
* encode: allow just one response matcher
also change the namespace of the encoders back; I accidentally changed it to precompressed >.>
default matchers include a * to match any charset that may be appended
* rounding out the PR
* added integration tests for new caddyfile directives
* improved various doc strings (punctuation and typos)
* added json tag for file_server precompress order and encode matcher
* file_server: add vary header, remove accept-ranges when serving precompressed files
* encode: move Suffix implementation to precompressed modules
* fileserver: Improve and clarify file hiding logic
* Oops, forgot to run integration tests
* Make this one integration test OS-agnostic
* See if this appeases the Windows gods
* D'oh
Previously, all matchers in a route would be evaluated before any
handlers were executed, and a composite route of the matching routes
would be created. This made rewrites especially tricky, since the only
way to defer later matchers' evaluation was to wrap them in a subroute,
or to invoke a "rehandle" which often caused bugs.
Instead, this new sequential design evaluates each route's matchers and then
its handlers in lock-step: matchers, handlers; matchers, handlers; and so on.
If the first matching route consists of a rewrite, then the second route
will be evaluated against the rewritten request, rather than the original
one, and so on.
This should do away with any need for rehandling.
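Here's a rough sketch of the idea (purely illustrative, not Caddy's actual routing code, and with simplified types): each route's matchers are checked against the request as it stands at that point, so a rewrite made by an earlier route is visible to later matchers.

```go
package example

import "net/http"

type Route struct {
	Matchers []func(*http.Request) bool
	Handlers []func(http.ResponseWriter, *http.Request) error
}

// serve walks the routes in order, evaluating each route's matchers and
// then its handlers in lock-step. Handlers may mutate r (e.g. rewrite the
// URL), and the next route's matchers see the mutated request.
func serve(routes []Route, w http.ResponseWriter, r *http.Request) error {
	for _, route := range routes {
		matched := true
		for _, match := range route.Matchers {
			if !match(r) {
				matched = false
				break
			}
		}
		if !matched {
			continue
		}
		for _, h := range route.Handlers {
			if err := h(w, r); err != nil {
				return err
			}
		}
	}
	return nil
}
```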
I've also taken this opportunity to avoid adding new values to the
request context in the handler chain, as this creates a copy of the
Request struct, which can lead to bugs, as it has in the past
(see PR #1542, PR #1481, and maybe issue #2463). We now add all the
expected context values in the top-level handler at the server, then
any new values can be added to the variable table via the VarsCtxKey
context key, or just the GetVar/SetVar functions. In particular, we are
using this facility to convey dial information in the reverse proxy.
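Here is a hedged sketch of how a middleware can use the vars table instead of a new context key (the handler, key, and value are made up; SetVar/GetVar are the helpers mentioned above, as they appear in the current caddyhttp package):

```go
package example

import (
	"net/http"

	"github.com/caddyserver/caddy/v2/modules/caddyhttp"
)

type annotate struct{}

// ServeHTTP stores a per-request value in the shared vars table rather than
// adding a new context key (which would require copying the *http.Request).
func (annotate) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error {
	caddyhttp.SetVar(r.Context(), "dial_info", "10.0.0.5:8080") // illustrative key/value

	// This handler, or any later handler in the chain, can read it back.
	if v, ok := caddyhttp.GetVar(r.Context(), "dial_info").(string); ok {
		w.Header().Set("X-Upstream", v)
	}
	return next.ServeHTTP(w, r)
}
```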
Had to be careful in one place as the middleware compilation logic has
changed, and moved a bit. We no longer compile a middleware chain per-
request; instead, we can compile it at provision-time, and defer only the
evaluation of matchers to request-time, which should slightly improve
performance. In doing this, however, we take advantage of multiple function
closures, and we also changed from HandlerFunc (a function pointer) to
Handler (an interface)... this led to a situation where, if we aren't
careful, one request routed a certain way could permanently change the
"next" handler for all/most other requests! We avoid this by making
a copy of the interface value (which is a lightweight pointer copy) and
using exclusively that within our wrapped handlers. This way, the
original stack frame is preserved in a "read-only" fashion. The comments
in the code describe this phenomenon.
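A simplified sketch of the pattern (not the actual compilation code) that shows why the copy matters:

```go
package example

import (
	"net/http"

	"github.com/caddyserver/caddy/v2/modules/caddyhttp"
)

// compile builds the middleware chain once (e.g. at provision time) by
// wrapping from the last handler to the first. Note nextCopy: each closure
// gets its own copy of the interface value (a cheap, pointer-sized copy),
// so the value it points to stays effectively read-only. Capturing the
// shared `next` variable directly would let later assignments change which
// handler every previously built wrapper calls.
func compile(handlers []caddyhttp.MiddlewareHandler) caddyhttp.Handler {
	var next caddyhttp.Handler = caddyhttp.HandlerFunc(func(http.ResponseWriter, *http.Request) error {
		return nil // end of the chain
	})
	for i := len(handlers) - 1; i >= 0; i-- {
		handler := handlers[i]
		nextCopy := next // copy the interface value for this wrapper only
		next = caddyhttp.HandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
			return handler.ServeHTTP(w, r, nextCopy)
		})
	}
	return next
}
```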
This may very well be a breaking change for some configurations, however
I do not expect it to impact many people. I will make it clear in the
release notes that this change has occurred.
This commit goes a long way toward making automated documentation of
Caddy config and Caddy modules possible. It's a broad, sweeping change,
but mostly internal. It allows us to automatically generate docs for all
Caddy modules (including future third-party ones) and make them viewable
on a web page; it also doubles as godoc comments.
As such, this commit makes significant progress in migrating the docs
from our temporary wiki page toward our new website which is still under
construction.
With this change, all host modules will use ctx.LoadModule() and pass in
both the struct pointer and the field name as a string. This allows the
reflect package to read the struct tag from that field so that it can
get the necessary information like the module namespace and the inline
key.
This has the nice side-effect of unifying the code and documentation. It
also simplifies module loading, and handles several variations on field
types for raw module fields (i.e. variations on json.RawMessage, such as
arrays and maps).
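A sketch of the pattern (the namespace, field names, and interface are made up; the struct tag syntax reflects how it looks in Caddy today):

```go
package example

import (
	"encoding/json"
	"net/http"

	"github.com/caddyserver/caddy/v2"
)

// Gizmo is a hypothetical guest-module interface.
type Gizmo interface {
	Do(*http.Request) error
}

// Handler is a hypothetical host module. The caddy struct tag tells the
// loader which namespace to load from and which inline key names the
// module in the JSON config.
type Handler struct {
	GizmoRaw json.RawMessage `json:"gizmo,omitempty" caddy:"namespace=http.handlers.example.gizmos inline_key=kind"`

	gizmo Gizmo
}

// Provision loads the guest module by passing the struct pointer and the
// field name; the reflect-based loader reads the struct tag from that field.
func (h *Handler) Provision(ctx caddy.Context) error {
	mod, err := ctx.LoadModule(h, "GizmoRaw")
	if err != nil {
		return err
	}
	h.gizmo = mod.(Gizmo) // sketch: a real module would check the assertion
	return nil
}
```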
I also renamed ModuleInfo.Name -> ModuleInfo.ID, to make it clear that
the ID is the "full name" which includes both the module namespace and
the name. This clarity is helpful when describing module hierarchy.
As of this change, Caddy modules are no longer an experimental design.
I think the architecture is good enough to go forward.
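For example, a hypothetical module's info might look like this (the module itself is made up; only the ID format matters here):

```go
package example

import "github.com/caddyserver/caddy/v2"

// Hello is a hypothetical module used only to illustrate the ID format.
type Hello struct{}

func init() {
	caddy.RegisterModule(Hello{})
}

// CaddyModule returns module information. The ID is the "full name":
// "http.handlers" is the namespace and "hello" is the module's name.
func (Hello) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "http.handlers.hello",
		New: func() caddy.Module { return new(Hello) },
	}
}
```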
- Rename http.var.* -> http.vars.* to be more consistent
- Prefixing a path matcher with * now invokes simple suffix matching
- Handlers and matchers that need a root path default to {http.vars.root}
- Clean replacer output on the file matcher's file selection suffix
Use piles from which to draw config values.
Module values can return their name, so now we can do two-way mapping
from value to name and name to value; whereas before we could only map
name to value. This was problematic with the Caddyfile adapter since
it receives values and needs to know the name to put in the config.
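A minimal sketch of the two-way mapping idea (the interface and method name are hypothetical stand-ins, not Caddy's exact API):

```go
package example

// Module is a hypothetical stand-in for a module value that can report
// its own name.
type Module interface {
	ModuleName() string
}

type gzipEncoder struct{}

func (gzipEncoder) ModuleName() string { return "gzip" }

// name -> value: what the registry could already do.
var registry = map[string]func() Module{
	"gzip": func() Module { return gzipEncoder{} },
}

// value -> name: what this change enables. The Caddyfile adapter holds a
// value it was given and needs the name to write into the JSON config.
func nameFor(m Module) string { return m.ModuleName() }
```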
This comes along with several other changes, such as renaming caddyhttp.ServerRoute
to caddyhttp.Route, exporting some types that were not previously exported,
and tweaking the caddytls TLS values to be more consistent.
Notably, we also now disable automatic cert management for names which
already have a cert (manually) loaded into the cache. These names no
longer need to be specified in the "skip_certificates" field of the
automatic HTTPS config, because they will be skipped automatically.