This value is included when generating static asset URLs. Updating the value allows site operators to invalidate all asset URLs, to recover from configuration issues whose results may have been cached by CDNs/browsers.
When sending mail via the group SMTP functionality, we
occasionally run into both read and open timeouts,
which can cause issues
like the email only being sent to _some_ of the recipients,
or the send failing altogether.
The default of 5s for these timeouts is too low, so bump them up to
the defaults of the `net-smtp` gem.
Followup to fe05fdae24
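For reference, a minimal sketch of the two knobs in question on `Net::SMTP` (the host and values here are illustrative; when nothing is set, the gem's own defaults apply instead of the old 5s):
```
require "net/smtp"

# Illustrative only: the open and read timeouts on a Net::SMTP connection.
smtp = Net::SMTP.new("smtp.example.com", 587)
smtp.open_timeout = 30 # seconds allowed to establish the connection
smtp.read_timeout = 60 # seconds allowed to wait for each server response
```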
For consistency with other S3 settings, give the global setting
the same name as the site setting, and use SiteSetting.Upload
too so it reads from the correct place.
This commit adds an `enable_s3_transfer_acceleration` site setting,
which is hidden to begin with. We are adding this because
using https://aws.amazon.com/s3/transfer-acceleration/ can
drastically speed up uploads, sometimes by as much as 70% in certain
regions depending on the target bucket region. This is important for
us because we have direct S3 multipart uploads enabled everywhere
on our hosting.
To start, we only want this on the uploads bucket, not the backup one.
Also, this will accelerate both uploads **and** downloads, depending
on whether a presigned URL is used for downloading; at this time that is
only the case when secure uploads is enabled. Enabling
S3 acceleration on downloads more generally would be a
more in-depth change, since we currently store S3 Upload record URLs
like this:
```
url: "//test.s3.dualstack.us-east-2.amazonaws.com/original/2X/6/123456.png"
```
For acceleration, `s3.dualstack` would need to be changed to `s3-accelerate.dualstack`
here.
Note that for this to have any effect, Transfer Acceleration must be enabled
on the S3 bucket used for uploads per https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration-examples.html.
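As a rough sketch of what using the accelerated endpoint looks like with the aws-sdk-s3 gem (the region, bucket and object key are placeholders, and the dualstack detail is glossed over; this is not the actual Discourse wiring):
```
require "aws-sdk-s3"

# Sketch only: route requests through the transfer-acceleration endpoint.
client = Aws::S3::Client.new(
  region: "us-east-2",
  use_accelerate_endpoint: true # <bucket>.s3-accelerate.amazonaws.com
)

# Presigned URLs built from this client also point at the accelerated endpoint,
# which is how downloads are sped up when secure uploads is enabled.
bucket = Aws::S3::Resource.new(client: client).bucket("uploads-bucket")
object = bucket.object("original/2X/6/123456.png")
puts object.presigned_url(:get, expires_in: 300)
```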
Why this change?
This ensures that malicious requests cannot end up causing the logs to
quickly fill up. The default chosen is sufficient for most legitimate
requests to the Discourse application.
When truncation happens, parsing of logs in supported formats like
lograge may break down.
The current default timeout is hardcoded to 2 seconds, which is proving
too low for certain cases and results in sporadic timeouts due to slow DNS queries.
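A minimal sketch of the idea using Ruby's stdlib resolver (the resolver Discourse actually uses and the value chosen are assumptions here):
```
require "resolv"

# Illustrative only: make the DNS query timeout configurable rather than a
# hardcoded 2 seconds.
resolver = Resolv::DNS.new
resolver.timeouts = 5 # seconds; slow DNS servers need more headroom than 2s
addresses = resolver.getaddresses("example.com")
```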
If configured, this will be used for static JS assets which are stored on S3. This can be useful if you want to use different CDN providers/configurations for uploads and JS assets.
When uploads are stored on S3, by default Discourse will fetch the avatars and proxy them through to the requesting client. This is simple, but it can lead to significant inbound/outbound network load in the hosting environment.
This commit adds an optional `redirect_avatar_requests` GlobalSetting. When enabled, requests for user avatars will be redirected to the S3 asset instead of being proxied. This adds an extra round-trip for clients, but it should significantly reduce server load. To mitigate that extra round-trip for clients, a CDN with 'follow redirect' capability could be used.
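A standalone sketch of the decision this setting controls (the method and variable names are illustrative, not the actual Discourse controller code):
```
# Sketch only: redirect to S3 when the setting is enabled, otherwise proxy.
def avatar_response(redirect_enabled:, s3_url:)
  if redirect_enabled
    # one extra round-trip for the client, but no proxying work for the server;
    # a CDN that follows redirects can absorb that extra hop
    { status: 302, location: s3_url }
  else
    # default behaviour: fetch the S3 object and stream it back to the client
    { status: 200, body: "<avatar bytes proxied from #{s3_url}>" }
  end
end

avatar_response(redirect_enabled: true, s3_url: "https://uploads-bucket.s3.amazonaws.com/avatar.png")
```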
* Revert "Revert "FEATURE: Preload resources via link header (#18475)" (#18511)"
This reverts commit 95a57f7e0c.
* put behind feature flag
* env -> global setting
* declare global setting
* forgot one spot
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or above the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
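A simplified sketch of how the limiter chooses its bucket under this feature (parameter names are illustrative, not the actual middleware code):
```
# Sketch only: trusted, logged-in, non-API traffic is keyed by user id;
# everything else keeps the per-IP bucket.
def rate_limit_key(ip:, user_id:, trust_level:, threshold: 1, api_request: false)
  if api_request || user_id.nil? || trust_level < threshold
    "ip:#{ip}"        # anonymous, low-trust and API traffic share a per-IP bucket
  else
    "user:#{user_id}" # each trusted user gets an independent bucket
  end
end

rate_limit_key(ip: "203.0.113.7", user_id: 42, trust_level: 4) # => "user:42"
```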
Before this commit, Discourse's auth cookie (`_t`) was simply a 32-character
string that Discourse used to look up the current user in the database; the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limit logic runs, to avoid
introducing a bottleneck on busy sites.
Besides the 32-character auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt and sign the cookie to
prevent tampering.
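As an illustration of the encrypt-and-sign step, here is a minimal sketch using ActiveSupport::MessageEncryptor; the payload shape follows the description above, but the key handling and exact format are assumptions, not Discourse's actual implementation:
```
require "active_support"
require "active_support/message_encryptor"
require "securerandom"
require "json"

# Sketch only: encrypt and sign the cookie payload so it can be read back
# without a database query and cannot be tampered with by the client.
secret    = SecureRandom.bytes(ActiveSupport::MessageEncryptor.key_len)
encryptor = ActiveSupport::MessageEncryptor.new(secret)

payload = {
  token: SecureRandom.hex(16), # the 32-character auth token
  user_id: 42,
  trust_level: 4,
  issued_at: Time.now.to_i
}.to_json

cookie_value = encryptor.encrypt_and_sign(payload)
JSON.parse(encryptor.decrypt_and_verify(cookie_value)) # => original payload
```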
Internal ticket number: t54739.
It makes much more sense for these to be GlobalSettings, since, in
multisite clusters, only the default site's settings would be respected.
Co-authored-by: David Taylor <david@taylorhq.com>
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change:
- comments
- test descriptions
- other low risk areas
The cluster name can be configured by setting the `DISCOURSE_CLUSTER_NAME` environment variable. If set, you can then call /srv/status with a `?cluster=` parameter. If the cluster does not match, an error will be returned. This is useful if you need a load balancer to be able to verify the identity, as well as the presence, of an application container.
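For example, a health check along these lines (the hostname and cluster name are placeholders) only passes when the running container belongs to the expected cluster:
```
require "net/http"

# Sketch only: a load-balancer style probe against /srv/status with the cluster name.
uri = URI("https://discourse.example.com/srv/status?cluster=production-east")
response = Net::HTTP.get_response(uri)
healthy = response.is_a?(Net::HTTPSuccess) # false if the cluster name does not match
```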
Background: RFC 8314 3.3 asks that:
clients and servers SHOULD implement both STARTTLS on
port 587 and Implicit TLS on port 465
Discourse currently cannot be configured this way.
With this patch, it's possible to set
`DISCOURSE_SMTP_FORCE_TLS=true` to use implicit TLS on port 465.
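The two modes map roughly onto the following `Net::SMTP` calls (the host is a placeholder; this sketches the distinction, not Discourse's mail setup):
```
require "net/smtp"

# STARTTLS on port 587: the connection starts in plain text and is upgraded.
starttls_smtp = Net::SMTP.new("mail.example.com", 587)
starttls_smtp.enable_starttls

# Implicit TLS on port 465: the connection is TLS from the first byte.
# DISCOURSE_SMTP_FORCE_TLS=true selects this mode.
implicit_smtp = Net::SMTP.new("mail.example.com", 465)
implicit_smtp.enable_tls
```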
When the server gets overloaded and lots of requests start queuing, the server
will attempt to shed load by returning 429 errors on background requests.
The client can flag a request as background by setting the
`Discourse-Background` header to `true`.
Out-of-the-box we shed load when the queue time goes above 0.5 seconds.
The only request we shed at the moment is the request to load up a new post
when someone posts to a topic.
We can extend this as we go with a more general pattern on the client.
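A rough sketch of the shedding rule (the header name and 0.5s threshold come from the description above; how the queue time is measured, e.g. from a proxy-supplied timing header, is an assumption):
```
# Sketch only: background requests are rejected with 429 once queueing exceeds 0.5s.
def shed_background_request?(queue_time_seconds, headers)
  background = headers["Discourse-Background"] == "true"
  background && queue_time_seconds > 0.5
end

shed_background_request?(0.7, { "Discourse-Background" => "true" }) # => true
```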
Prior to this change, rate limiting would "break" the post stream, which
would make suggested topics vanish and force users to scroll the page
to see more posts in the topic.
The server needs this protection for cases where tons of clients have navigated
to a topic and a new post is made. This can lead to a self-inflicted denial
of service if enough clients are viewing the topic.
Due to the internal security design of Discourse, it is hard for a large
number of clients to share a channel where we would pass the full post body
via the message bus.
It also renames (and deprecates) `triggerNewPostInStream` to `triggerNewPostsInStream`.
This allows us to load a batch of new posts cleanly, so the controller can
keep track of a backlog.
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
To avoid blocking the Sidekiq queue, a limit of 10,000 digests per 30 minutes
is introduced.
This acts as a safety measure that makes sure we don't keep pouring oil on
a fire.
On multisite installations it is recommended to set the number much lower so individual
sites do not dominate the backlog. A reasonable default for multisites may be 100-500.
This can be controlled with the environment variable
`DISCOURSE_MAX_DIGESTS_ENQUEUED_PER_30_MINS_PER_SITE`.
Demon::EmailSync is used in conjunction with SiteSetting.enable_imap to sync N IMAP mailboxes with specific groups. It is a process started in unicorn.conf, and it spawns N threads (one for each multisite connection) and for each database spawns another N threads (one for each configured group).
We want this off by default so the process is not started when it does not need to be (e.g. development, test, certain hosting tiers).
In some restricted setups, all JS payloads need tight control.
This setting bans admins from making changes to JS on the site and
requires all themes to be whitelisted before they can be used.
There are edge cases we still need to work through in this mode,
hence it is still experimental and not supported in production.
Use an example like this to enable:
`DISCOURSE_WHITELISTED_THEME_REPOS="https://repo.com/repo.git,https://repo.com/repo2.git"`
By default this feature is not enabled and no changes are made.
One exception: the default theme id was missing a security check;
this was added for correctness.
This is so that, on a multisite cluster, when we handle a CDN request,
the hostname that is requested corresponds to one of the sites -
specifically the default site.
Some providers don't implement support for `Expect: 100-continue`,
which results in a mismatch in the object signature.
With this setting, users can disable the header and use such providers.
MaxMind now requires an account with a license key to download files.
Discourse admins can register for such an account at:
https://www.maxmind.com/en/geolite2/signup
License key generation is available in the profile section.
Once registered, you can set the license key using `DISCOURSE_MAXMIND_LICENSE_KEY`.
This amends it so we unconditionally skip MaxMind DB downloads if no license key exists.
We have tested block-mode rate limiting, including on admin accounts, for
close to 12 months now on meta.discourse.org.
This has resulted in no degradation of services even to admin accounts that
request a lot of info from the site.
The default of 200 requests a minute and 50 per 10 seconds is very generous.
It simply protects against very aggressive clients.
This behaviour can be disabled or tweaked using
`DISCOURSE_MAX_REQS_PER_IP_MODE` and related settings.
The only big downside here is in cases where a very large number of users tend
to all come from a single IP address. This can be the case on sites accessing
Discourse from an internal network all sharing the same IP via NAT, or on a
misconfigured Discourse instance that is unable to resolve users' IP addresses
due to proxy misconfiguration.
This commit introduces 2 features:
1. DISCOURSE_COMPRESS_ANON_CACHE (true|false, default false): this allows
you to optionally compress the anon cache body entries in Redis, which can be
useful for high-load sites where Redis lives on a separate server from
the web servers.
2. DISCOURSE_ANON_CACHE_STORE_THRESHOLD (default 2): only store entries in
Redis if we observe them more than N times. This avoids situations where
a crawler can walk a big pile of topics and store them all in Redis, never
to be read again. Our default anon cache time for topics is only 60 seconds; the anon
cache is in place to avoid the "slashdot" effect, where a single topic is
hit by hundreds of people in one minute.
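A minimal sketch of the two knobs (the Redis key and body handling here are illustrative, not the actual anonymous cache code):
```
require "zlib"

body = "<html>rendered anonymous page</html>"

# DISCOURSE_COMPRESS_ANON_CACHE: store a deflated body instead of the raw HTML.
compressed = Zlib::Deflate.deflate(body)
restored   = Zlib::Inflate.inflate(compressed)

# DISCOURSE_ANON_CACHE_STORE_THRESHOLD: only cache once a key has been seen more
# than N times, so a crawler walking thousands of topics does not fill Redis
# with entries that are never read again.
seen = Hash.new(0)
key  = "anon-cache:/t/some-topic/123"
seen[key] += 1
store_in_cache = seen[key] > 2
```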
Under extreme load on large databases, certain regular jobs can take quite
a while to run. We need to ensure we never starve Sidekiq of the chance to run
the mini scheduler, because without it we are unable to queue jobs such as
heartbeats.
This adds a 1-minute rate limit per IP to all JS error reporting. Previously
we would only use the global rate limit.
This also introduces DISCOURSE_ENABLE_JS_ERROR_REPORTING; if it is set to
false, no JS error reporting will be allowed on the site.
The message_bus performs a fair amount of work prior to hijacking requests.
This change ensures that if there is a situation where the server is flooded,
message_bus will inform the client to back off for 30 seconds + random(120 secs).
This back-off is ultra cheap and happens very early in the middleware.
It corrects a situation where a flood to the message bus could cause the app
to become unresponsive.
The MessageBus update is here to ensure the message_bus gem properly respects
the Retry-After header and status 429.
Under normal conditions this code should never trigger; to effectively disable it,
raise the value of DISCOURSE_REJECT_MESSAGE_BUS_QUEUE_SECONDS. The default is to tell
message bus to go away if we have been queueing for 100ms or longer.
The global setting disable_search_queue_threshold
(DISCOURSE_DISABLE_SEARCH_QUEUE_THRESHOLD), which defaults to 1 second, was
added.
This protection ensures that when the application is unable to keep up with
requests, it will simply turn off search until it is no longer backed up.
To disable this protection set this to 0.