The vendor/ folder was created with the help of @FiloSottile's gvt and
vendorcheck.
Any dependencies of Caddy plugins outside this repo are not vendored.
We do not remove unused vendored packages, because vendorcheck -u only
checks against the current build configuration; it would miss packages
that are imported only by files toggled on by build tags for other systems.
CI tests have been updated to ignore the vendor/ folder. When Go 1.9 is
released, a few of the go commands should be revised to again use ./...
as it will ignore the vendor folder by default.
* httpserver/all: Clean up and standardize request URL handling
The HTTP server now always sets a context value on the request which is
a copy of the request's URL struct. Middlewares should not modify this
value, but it is safe to read it out of the context and make changes to
a locally-scoped copy. Thus, the value in the context always stores the
original request URL information as it was received. Any rewrites that
happen are made to the request's URL field directly.
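For illustration, here is a minimal sketch of a middleware reading that context value. The key name (OriginalURLCtxKey), the stored value type (a url.URL copy), and the import path are assumptions based on this description, not confirmed API:

```go
package mymiddleware

import (
	"net/http"
	"net/url"

	"github.com/mholt/caddy/caddyhttp/httpserver"
)

// Handler reads the preserved original URL but never mutates it; any rewrite
// it wants to do is applied to r.URL (or to a local copy) instead.
type Handler struct {
	Next httpserver.Handler
}

func (h Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) (int, error) {
	// The context value is a copy of the URL exactly as it was received.
	if orig, ok := r.Context().Value(httpserver.OriginalURLCtxKey).(url.URL); ok {
		local := orig // locally-scoped copy; safe to modify without affecting the stored value
		local.RawQuery = ""
		_ = local
	}
	// Rewrites, if any, go to r.URL directly.
	return h.Next.ServeHTTP(w, r)
}
```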
The HTTP server no longer cleans/sanitizes the request URL. That behavior
made too many strong assumptions and ended up making a lot of middleware
more complicated, including upstream proxying (and fastcgi). To alleviate
this complexity, we no longer change the request URL. Middlewares are
responsible for accessing the disk safely, either by using http.Dir or,
if they are not actually opening files, by using httpserver.SafePath().
I'm hoping this will address issues with #1624, #1584, #1582, and others.
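For reference, a minimal sketch of the two safe-access patterns mentioned above; the httpserver.SafePath signature shown here (site root plus requested path returning a sanitized filesystem path) is an assumption and may not match the actual helper exactly:

```go
package mymiddleware

import (
	"net/http"

	"github.com/mholt/caddy/caddyhttp/httpserver"
)

// openSafely opens a file under root without trusting the request path;
// http.Dir cleans the name so ".." elements cannot escape the root.
func openSafely(root, reqPath string) (http.File, error) {
	return http.Dir(root).Open(reqPath)
}

// resolveSafely is for middleware that only needs a sanitized on-disk path
// and does not actually open the file.
func resolveSafely(root, reqPath string) string {
	return httpserver.SafePath(root, reqPath)
}
```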
* staticfiles: Fix test on Windows
@abiosoft: I still can't figure out exactly what this is for. 😅
* Use (potentially) changed URL for browse redirects, as before
* Use filepath.ToSlash, clean up a couple proxy test cases
* Oops, fix variable name
Original feature request in forum:
https://forum.caddyserver.com/t/caddy-with-specific-hosts-but-on-demand-tls/1704?u=matt
Before, Caddy obtained certificates for every name it could at startup.
And it would only obtain certificates during the handshake for sites
defined with a hostname that didn't qualify at startup (like
"*.example.com" or ":443"). This made sense for most situations, and
helped ensure that certificates were obtained as early and reliably as
possible.
With this change, if OnDemand is enabled, Caddy will NOT obtain certificates
at startup for hostnames it knows about (even if they qualify); those
certificates will instead be obtained during the TLS handshake.
But I think this change generalizes well, because a user who specifies
max_certs is deliberately turning on On-Demand TLS, fully aware of
the consequences. It seems dubious to ignore that config when the user
deliberately put it there. We'll see how this goes.
* Fixed issue where {path} was actually {uri}
* Test added for path rewrite
* add in uri_escaped
* added rewrite_uri and test
* fix broken test. Just checks for existence of rewrite header
* gitignore
* Use context to store uri value
* ignore .vscode
* tidy up: remove comments and invalidated tests
* Remove commented out code.
* added comment as requested by lint
* fixed spelling mistake
* clarified code with variable name
* added context for uri and test
* added TODO comment to move consts
* add support for listener middleware
* add proxyprotocol directive
* make caddy.Listener interface required
* Remove tcpKeepAliveListener wrapper from Serve()
This is now done in the Listen() function, along with other potential middleware.
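To illustrate the idea (not the exact plugin/registration API, which is not shown here), a listener middleware is just a function that wraps the server's net.Listener before Serve() runs; the names below are illustrative:

```go
package example

import "net"

// wrappedListener decorates Accept so per-connection setup (e.g. reading a
// PROXY protocol header, setting TCP keep-alive) happens in one place.
type wrappedListener struct {
	net.Listener
}

func (l wrappedListener) Accept() (net.Conn, error) {
	conn, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	if tc, ok := conn.(*net.TCPConn); ok {
		// The keep-alive setup that used to live in Serve() belongs here now.
		tc.SetKeepAlive(true)
	}
	return conn, nil
}

// WrapListener is the shape of a listener middleware: take a listener, return a listener.
func WrapListener(ln net.Listener) net.Listener {
	return wrappedListener{ln}
}
```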
This commit removes _almost_ all instances of hard-coded "80" and "443"
port strings, and now allows the user to define what the HTTP and HTTPS
ports are with the -http-port and -https-port flags.
(One instance of "80" is still hard-coded in tls.go because it cannot
import httpserver to get access to the HTTP port variable. I don't
suspect this will be a problem in practice, but one workaround would be
to define an exported variable in the caddytls package and let the
httpserver package set it as well as its own HTTPPort variable.)
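A sketch of that possible workaround (not implemented in this commit; names illustrative): caddytls would export the variable and httpserver would assign it during setup, avoiding the import cycle.

```go
// In package caddytls (sketch only):
package caddytls

// HTTPPort is the port on which to serve the HTTP-01 challenge. It defaults
// to "80"; the httpserver package would overwrite it at startup with its own
// configured HTTP port (e.g. caddytls.HTTPPort = HTTPPort in its setup code),
// since caddytls cannot import httpserver without creating a cycle.
var HTTPPort = "80"
```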
The port numbers required by the ACME challenges HTTP-01 and TLS-SNI-01
are hard-coded into the spec as ports 80 and 443 for good reasons,
but the big question is whether they necessarily need to be the HTTP
and HTTPS ports. Although the answer is probably no, they chose those
ports for convenience and widest compatibility/deployability. So this
commit also assumes that the "HTTP port" is necessarily the same port
on which to serve the HTTP-01 challenge, and the "HTTPS port" is
necessarily the same one on which to serve the TLS-SNI-01 challenge. In
other words, changing the HTTP and HTTPS ports also changes the ports
the challenges will be served on.
If you change the HTTP and HTTPS ports, you are responsible for
configuring your system to forward ports 80 and 443 properly.
Closes #918 and closes #1293. Also related: #468.
* Use RequestURI when redirecting to canonical path.
Caddy may trim a request's URL path when it starts with the path that's
associated with the virtual host. This change uses the path from the request's
RequestURI when performing a redirect.
Fix issue #1327.
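A minimal sketch of the approach, using only the standard library (the handler wiring, helper name, and status code are illustrative): the redirect target is derived from the unmodified RequestURI rather than from the possibly-trimmed r.URL.Path.

```go
package example

import (
	"net/http"
	"net/url"
	"strings"
)

// redirectCanonical issues a redirect based on the path the client actually
// sent (RequestURI), not on r.URL.Path, which may have had the virtual
// host's base path trimmed from it.
func redirectCanonical(w http.ResponseWriter, r *http.Request, withSlash bool) {
	u, err := url.ParseRequestURI(r.RequestURI)
	if err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	target := u.Path
	if withSlash && !strings.HasSuffix(target, "/") {
		target += "/" // canonical directory form keeps the trailing slash
	}
	if u.RawQuery != "" {
		target += "?" + u.RawQuery
	}
	http.Redirect(w, r, target, http.StatusMovedPermanently)
}
```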
* Rename redirurl to redirURL.
* Redirect to the full URL.
The scheme and host from the virtual host's site configuration is used
in order to redirect to the full URL.
* Add comment and remove redundant check.
* Store the original URL path in request context.
By storing the original URL path as a value in the request context,
middlewares can access both it and the sanitized path. The default
FileServer handler will use the original URL on redirects.
* Replace contextKey type with CtxKey.
In addition to moving the CtxKey definition to the caddy package, this
change updates the CtxKey references in the httpserver, fastcgi, and
basicauth packages.
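A minimal sketch of what the shared key type looks like; the key string used in the usage note below ("original_url_path") is an assumption for illustration only.

```go
// In package caddy (sketch):
package caddy

// CtxKey is a string type for context.Context value keys shared across
// Caddy packages (httpserver, fastcgi, basicauth, ...), so keys defined
// with it cannot collide with keys from other libraries.
type CtxKey string
```

A consuming package would then read the stored path with something like r.Context().Value(caddy.CtxKey("original_url_path")).(string), under the same naming assumption.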
* httpserver: Fix reference to CtxKey
Timeouts are important for mitigating slowloris, yes. But after a number
of complaints and seeing that default timeouts are a sore point of
confusion, we're disabling them now. However, the code that sets
default timeouts remains intact; the defaults are just the zero value.
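Concretely, a zero value on the underlying net/http server means that limit simply is not enforced, so leaving the defaults at zero disables them without removing the plumbing. A small sketch with the standard library (the constructor is illustrative, not Caddy's actual setup code):

```go
package example

import "net/http"

// newServer mirrors the "zero value" defaults: with these fields left at 0,
// net/http enforces no read, write, or idle deadlines unless the site owner
// explicitly configures timeouts.
func newServer(addr string, handler http.Handler) *http.Server {
	return &http.Server{
		Addr:         addr,
		Handler:      handler,
		ReadTimeout:  0, // disabled
		WriteTimeout: 0, // disabled
		IdleTimeout:  0, // disabled (falls back to ReadTimeout, which is also zero)
	}
}
```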
While Caddy aims to be secure by default, Caddy also aims to serve a
worldwide audience. Even my own internet here in Utah is poor at times,
with bad WiFi signal, causing some connections to take over 10s to
be established. Many use the Internet while commuting on slower
connection speeds. Latency across country borders is another concern.
As such, disabling default timeouts will serve a greater population of
users than enabling them, as slowloris is easy to mitigate and does
not seem to be reported often (I've only seen it once). It's also very
difficult sometimes to distinguish slowloris from genuine slow networks.
That decision is best left to the site owner for now.