Previously we returned a 429 error code with a 'Retry-After' header if a
RateLimiter::LimitExceeded was raised and left unhandled, but the header was
missing if the request was limited in the 'RequestTracker' middleware.
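A minimal sketch of the intended behaviour, assuming a generic Rack
middleware; the class and helper names here are illustrative, not Discourse's
actual RequestTracker:

```ruby
# Illustrative sketch only: the middleware's own rejection path must set
# Retry-After itself, not rely on the exception-driven path elsewhere.
class RateLimitedMiddleware
  def initialize(app, limiter)
    @app = app
    @limiter = limiter # hypothetical limiter object
  end

  def call(env)
    if @limiter.limited?(env)
      # Previously this branch returned a bare 429 with no Retry-After.
      headers = {
        "Content-Type" => "text/plain",
        "Retry-After" => @limiter.seconds_to_wait(env).to_s,
      }
      return [429, headers, ["Slow down, too many requests.\n"]]
    end
    @app.call(env)
  end
end
```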
`/srv/status` routes should not be cached at all. We also want to decouple
the route from Redis, which `AnonymousCache` relies on: `/srv/status` should
continue to return a success response even if Redis is down.
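Conceptually the route now short-circuits before the cache, along these lines
(hypothetical middleware; names are made up):

```ruby
# Hypothetical sketch: answer /srv/status before the anon cache (and therefore
# Redis) is ever touched, so the health check succeeds even when Redis is down.
class SrvStatusShortCircuit
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["PATH_INFO"] == "/srv/status"
      # No caching, no Redis: a plain, always-fresh success response.
      return [200,
              { "Content-Type" => "text/plain", "Cache-Control" => "no-cache, no-store" },
              ["ok"]]
    end
    @app.call(env)
  end
end
```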
Prior to this change our anonymous rate limits acted as a throttle.
With the new implementation we also count rate-limited requests towards
the limit.
This means that if an anonymous user is hammering the server, they will not
be able to get any requests through until their traffic subsides.
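A toy sketch of the new accounting (the real RateLimiter is Redis-backed;
this in-memory version only illustrates the counting change):

```ruby
# Hypothetical sketch: every attempt is recorded, even rejected ones, so a
# client that keeps hammering never recovers until its traffic actually
# subsides. The old behaviour only recorded allowed requests.
class PenalizingRateLimiter
  def initialize(max:, window_seconds:)
    @max = max
    @window = window_seconds
    @hits = []
  end

  def allow?
    now = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    @hits.reject! { |t| t < now - @window }
    @hits << now          # counted whether or not we reject below
    @hits.size <= @max
  end
end

limiter = PenalizingRateLimiter.new(max: 3, window_seconds: 60)
5.times.map { limiter.allow? } # => [true, true, true, false, false]
```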
This reverts commit e3de45359f.
We need to improve our strategy by adding a cache breaker with this change ... some assets on CDNs and clients may have incorrect CORS headers, which can cause breakage.
When the server is overloaded and lots of requests start queuing, the server
will attempt to shed load by returning 429 errors on background requests.
The client can flag a request as background by setting the header
`Discourse-Background` to `true`.
Out of the box we shed load when the queue time goes above 0.5 seconds.
The only request we shed at the moment is the request to load up new posts
when someone posts to a topic.
We can extend this as we go with a more general pattern on the client.
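A sketch of the server-side check (the header name and 0.5s default are from
this change; the class name and the Retry-After value are invented):

```ruby
# Illustrative sketch of the shedding check.
class ShedBackgroundRequests
  QUEUE_TIME_THRESHOLD = 0.5 # seconds

  def initialize(app)
    @app = app
  end

  def call(env)
    if env["HTTP_DISCOURSE_BACKGROUND"] == "true" &&
       queue_time(env).to_f > QUEUE_TIME_THRESHOLD
      return [429, { "Retry-After" => "5" }, ["Server under load, retry later"]]
    end
    @app.call(env)
  end

  private

  # Queue time derived from NGINX's X-Request-Start, e.g. "t=1700000000.123".
  def queue_time(env)
    start = env["HTTP_X_REQUEST_START"]
    start && Process.clock_gettime(Process::CLOCK_REALTIME) - start.delete_prefix("t=").to_f
  end
end
```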
Prior to this change, rate limiting would "break" the post stream, which
would make suggested topics vanish and force users to scroll the page to see
more posts in the topic.
The server needs this protection for cases where tons of clients have
navigated to a topic and a new post is made. This can lead to a self-inflicted
denial of service if enough clients are viewing the topic.
Due to the internal security design of Discourse it is hard for a large
number of clients to share a channel where we would pass the full post body
via the message bus.
It also renames (and deprecates) triggerNewPostInStream to
triggerNewPostsInStream. This allows us to load a batch of new posts cleanly,
so the controller can keep track of a backlog.
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
- Define the CSP based on the requested domain / scheme (respecting force_https)
- Update EnforceHostname middleware to allow secondary domains, add specs
- Add URL scheme to anon cache key so that CSP headers are cached correctly (a sketch follows this list)
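A rough sketch of the cache-key change (hypothetical key builder; the real
AnonymousCache key includes more components):

```ruby
require "rack"

# Hypothetical sketch: fold the URL scheme into the anon cache key so a page
# cached with an http:// CSP is never served to an https:// request.
def anon_cache_key(env)
  request = Rack::Request.new(env)
  [
    request.scheme,   # the newly added component
    request.host,
    request.fullpath,
  ].join("|")
end
```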
Many security scanners ship invalid mime types; this ensures we return a
very cheap response to those clients and do not log anything.
The previous attempt still re-dispatched the request to get a proper error
page, but in this specific case we want no error page at all.
Non-UTF-8 user_agent requests were bypassing logging because PG always
wants UTF-8 strings.
This adds a conversion step to ensure we are always dealing with UTF-8.
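The conversion is essentially Ruby's built-in encode/scrub with replacement;
the helper name below is made up:

```ruby
# Convert arbitrary user agent bytes to valid UTF-8 before they reach PG.
# encode handles foreign encodings; scrub replaces invalid byte sequences
# that encode skips when source and target encodings already match.
def utf8_user_agent(raw)
  raw.to_s.encode("UTF-8", invalid: :replace, undef: :replace).scrub
end

utf8_user_agent("Mozilla/5.0 \xFF".b) # => "Mozilla/5.0 �"
```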
Previously our custom exception handler was unable to handle situations
where an invalid mime type was sent, resulting in a warning log.
This ensures we pretend a request is HTML for the purpose of rendering the
error page if a scanner ships an invalid mime type to the app.
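Conceptually, the guard is along these lines (hypothetical helper; the exact
exception class raised for a bad mime type depends on the Rails version,
hence the broad rescue):

```ruby
# Hypothetical sketch: if the scanner-supplied mime type cannot be parsed,
# force the request format to HTML so the error page can render instead of
# the handler itself raising and logging a warning.
def safe_error_format(request)
  request.format
rescue StandardError
  request.format = :html
  request.format
end
```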
This commit introduces 2 features:
1. DISCOURSE_COMPRESS_ANON_CACHE (true|false, default false): this allows
you to optionally compress the anon cache body entries in Redis, which can
be useful for high-load sites where Redis lives on a separate server from
the webs.
2. DISCOURSE_ANON_CACHE_STORE_THRESHOLD (default 2): only store entries in
Redis if we observe them more than N times (sketched after this list). This
avoids situations where a crawler can walk a big pile of topics and store
them all in Redis, never to be used. Our default anon cache time for topics
is only 60 seconds; the anon cache is in place to avoid the "slashdot"
effect where a single topic is hit by hundreds of people in one minute.
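A sketch of the threshold gate from feature 2 (the counter here is in-memory
and per-process for illustration; Discourse's real observation tracking
differs):

```ruby
# Hypothetical sketch of DISCOURSE_ANON_CACHE_STORE_THRESHOLD: count
# observations per key and only persist to Redis once a key has been seen
# more than the threshold, so a crawler's one-off page walks never reach
# Redis. The 60s TTL matches the default topic anon cache time.
class ThresholdedAnonCacheStore
  def initialize(redis, threshold: ENV.fetch("DISCOURSE_ANON_CACHE_STORE_THRESHOLD", "2").to_i)
    @redis = redis
    @threshold = threshold
    @seen = Hash.new(0)
  end

  def maybe_store(key, body, ttl: 60)
    @seen[key] += 1
    @redis.setex(key, ttl, body) if @seen[key] > @threshold
  end
end
```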
This allows custom plugins, such as the prometheus exporter, to log how many
requests are stored in the anon cache vs served from the anon cache.
This metric allows us to fine-tune cache behavior.
This reverts commit 39c31a3d76.
Sorry about this; we have decided against supporting 0-RTT directly in
core. This can be supported with hacks similar to this commit in a plugin.
That said, we recommend against using a 0-RTT proxy for the Discourse app
due to the inherent risk of replay attacks.
This displays more useful messages for the most common issues we see:
- CSRF (when the user switches browsers)
- Invalid IAT (when the server clock is wrong)
- OAuth::Unauthorized for OAuth1 providers, when the credentials are incorrect
This commit also stops earlier for disabled authenticators. Now we stop at the request phase, rather than the callback phase.
The message_bus performs a fair amount of work prior to hijacking requests.
This change ensures that if the server is flooded, message_bus will inform
the client to back off for 30 seconds + random(120 secs).
This back-off is ultra cheap and happens very early in the middleware.
It corrects a situation where a flood to message bus could cause the app
to become unresponsive.
The message_bus gem update is here to ensure the gem properly respects the
Retry-After header and status 429.
Under normal conditions this code should never trigger; to disable it, raise
the value of DISCOURSE_REJECT_MESSAGE_BUS_QUEUE_SECONDS. The default is to
tell message bus to go away if we have been queueing for 100ms or longer.
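A sketch of the early rejection, with the env variable and back-off values
from this change and everything else hypothetical:

```ruby
# Illustrative sketch of the cheap, early rejection for message bus polls.
class MessageBusBackoff
  REJECT_QUEUE_SECONDS =
    ENV.fetch("DISCOURSE_REJECT_MESSAGE_BUS_QUEUE_SECONDS", "0.1").to_f

  def initialize(app)
    @app = app
  end

  def call(env)
    if env["PATH_INFO"].start_with?("/message-bus") && queued_too_long?(env)
      # The updated message_bus gem honours 429 + Retry-After and backs off.
      return [429, { "Retry-After" => (30 + rand(120)).to_s }, []]
    end
    @app.call(env)
  end

  private

  # Same X-Request-Start derivation as the load-shedding sketch earlier.
  def queued_too_long?(env)
    start = env["HTTP_X_REQUEST_START"]
    return false unless start
    (Process.clock_gettime(Process::CLOCK_REALTIME) - start.delete_prefix("t=").to_f) >
      REJECT_QUEUE_SECONDS
  end
end
```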
This adds support for DISCOURSE_ENABLE_PERFORMANCE_HTTP_HEADERS;
when set to `true` this will turn on performance-related headers:
```text
X-Redis-Calls: 10 # number of redis calls
X-Redis-Time: 1.02 # redis time in seconds
X-Sql-Commands: 102 # number of SQL commands
X-Sql-Time: 1.02 # duration in SQL in seconds
X-Queue-Time: 1.01 # time the request sat in queue (depends on NGINX)
```
To get the queue time, NGINX must provide HTTP_X_REQUEST_START.
We do not recommend enabling this without thinking it through: it exposes
information about what your page is doing. Usually you would only enable it
if you intend to strip off the headers further down the stream in a proxy.
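Emission of the headers might look roughly like this (hypothetical helper;
assumes instrumentation has already collected per-request counters):

```ruby
# Hypothetical sketch: attach the timing headers only when the setting is on,
# using whatever counters were collected for this request.
def add_performance_headers(headers, stats)
  return unless ENV["DISCOURSE_ENABLE_PERFORMANCE_HTTP_HEADERS"] == "true"

  headers["X-Redis-Calls"]  = stats[:redis_calls].to_s
  headers["X-Redis-Time"]   = stats[:redis_time].round(2).to_s
  headers["X-Sql-Commands"] = stats[:sql_calls].to_s
  headers["X-Sql-Time"]     = stats[:sql_time].round(2).to_s
  # Only present when NGINX supplies X-Request-Start.
  headers["X-Queue-Time"]   = stats[:queue_time].round(2).to_s if stats[:queue_time]
end
```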
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces memory usage of the app.
The test suite passes now, but there may be some stragglers left, so we will
run a few sites on a branch prior to merging.
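If this is Ruby's frozen string literal pragma, as the description suggests,
the mechanism is:

```ruby
# frozen_string_literal: true

# With the pragma above, every string literal in the file is frozen:
# accidental in-place mutation raises instead of silently corrupting shared
# state, and identical literals can be deduplicated, reducing allocations.
GREETING = "hello"
GREETING << " world"
# => FrozenError: can't modify frozen String: "hello"
```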
Since 5bfe051e, Discourse user agents are marked as non-crawlers (to avoid accidental blacklisting). This makes sure pageviews for these agents are tracked as crawler hits.
By default, this does nothing. Two environment variables are available:
- `DISCOURSE_LOG_SIDEKIQ`
Set to `"1"` to enable logging. This will log all completed jobs to `log/rails/sidekiq.log`, along with various db/redis/network statistics. This is useful to track down poorly performing jobs.
- `DISCOURSE_LOG_SIDEKIQ_INTERVAL`
(seconds) Check running jobs periodically, and log their current duration. They will appear in the logs with `status:pending`. This is useful to track down jobs which run for a long time and then crash Sidekiq before completing.
This avoids require_dependency for method_profiler and the anon cache.
It means that if there is any change to these files, the reloader will not
pick it up.
Previously the reloader was picking up the anon cache twice, causing it to
double-load on boot and emit warnings.
Long term, my plan is to give up on require_dependency and instead use:
https://github.com/Shopify/autoload_reloader
Previously the 'reconnect' process was a bit magic: if you were already logged into Discourse and followed the auth flow, your account would be reconnected and you would be 'logged in again'.
Now we explicitly check for a reconnect=true parameter when the flow is started, store it in the session, and then only follow the reconnect logic if that variable is present. Setting this parameter also skips the 'logged in again' step, which means reconnect now works with 2FA enabled.
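The shape of the new flow, simulated with a plain Hash session (hypothetical
method names; the real code lives in the omniauth callbacks):

```ruby
# Hypothetical sketch: capture the explicit reconnect intent when auth starts,
# consume it when the callback returns.
def begin_auth(session, params)
  # request phase: capture the explicit intent, nothing implicit
  session[:auth_reconnect] = (params[:reconnect] == "true")
end

def complete_auth(session, logged_in:)
  if session.delete(:auth_reconnect) && logged_in
    :reconnect_account # relink only; skip the "logged in again" step (2FA-safe)
  else
    :log_in
  end
end

session = {}
begin_auth(session, { reconnect: "true" })
complete_auth(session, logged_in: true) # => :reconnect_account
```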