The test was flaky and failing with the following errors:
```
Failure/Error:
klass
.connection
.select_raw(relation.arel) do |result, _|
result.type_map = DB.type_map
result.nfields == 1 ? result.column_values(0) : result.values
end
NoMethodError:
undefined method `select_raw' for nil
./lib/freedom_patches/fast_pluck.rb:60:in `pluck'
./vendor/bundle/ruby/3.3.0/gems/activerecord-7.2.2.1/lib/active_record/relation/calculations.rb:354:in `pick'
./app/models/web_crawler_request.rb:27:in `request_id'
./app/models/web_crawler_request.rb:31:in `rescue in request_id'
./app/models/web_crawler_request.rb:26:in `request_id'
./app/models/web_crawler_request.rb:19:in `write_cache!'
./app/models/concerns/cached_counting.rb:135:in `block (3 levels) in flush_to_db'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
./app/models/concerns/cached_counting.rb:134:in `block (2 levels) in flush_to_db'
./app/models/concerns/cached_counting.rb:124:in `each'
./app/models/concerns/cached_counting.rb:124:in `block in flush_to_db'
./lib/distributed_mutex.rb:53:in `block in synchronize'
./lib/distributed_mutex.rb:49:in `synchronize'
./lib/distributed_mutex.rb:49:in `synchronize'
./lib/distributed_mutex.rb:34:in `synchronize'
./app/models/concerns/cached_counting.rb:120:in `flush_to_db'
./app/models/concerns/cached_counting.rb:187:in `perform_increment!'
./app/models/web_crawler_request.rb:15:in `increment!'
./lib/middleware/request_tracker.rb:74:in `log_request'
./lib/middleware/request_tracker.rb:409:in `block in log_later'
./lib/scheduler/defer.rb:125:in `block in do_work'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
./vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/connection_management.rb:21:in `with_connection'
./lib/scheduler/defer.rb:119:in `do_work'
./lib/scheduler/defer.rb:105:in `block (2 levels) in start_thread'
```
This was due to running the defer thread in an async manner, which is
not representative of the production environment. It also revealed a
spot in our code base where writes happen during a GET request, which
can cause requests to fail if ActiveRecord is in readonly mode.
Followup 2f2da72747
This commit moves topic view tracking out of the Topic request path,
where every request (including those from web crawlers) inflated
view counts, and into our request tracker middleware.
In this new location, topic views are only tracked when one of the
following headers is sent (see the sketch after this list):
* HTTP_DISCOURSE_TRACK_VIEW - This is sent on every page navigation when
clicking around the ember app. We count these as browser page views
because we know they come from the AJAX call in our app. The topic ID
is extracted from HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID.
* HTTP_DISCOURSE_DEFERRED_TRACK_VIEW - Sent when MessageBus initializes
after first loading the page, to count the initial page load view. The
topic ID is extracted from HTTP_DISCOURSE_DEFERRED_TRACK_VIEW_TOPIC_ID.
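A minimal sketch of how the middleware might branch on these headers
(constant and method names here are illustrative, not the exact
implementation):
```ruby
# Illustrative constants/method, not Discourse's exact code.
TRACK_VIEW = "HTTP_DISCOURSE_TRACK_VIEW"
TRACK_VIEW_TOPIC_ID = "HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID"
DEFERRED_TRACK_VIEW = "HTTP_DISCOURSE_DEFERRED_TRACK_VIEW"
DEFERRED_TRACK_VIEW_TOPIC_ID = "HTTP_DISCOURSE_DEFERRED_TRACK_VIEW_TOPIC_ID"

def tracked_topic_id(env)
  if env[TRACK_VIEW]
    env[TRACK_VIEW_TOPIC_ID]&.to_i          # page navigation inside the ember app
  elsif env[DEFERRED_TRACK_VIEW]
    env[DEFERRED_TRACK_VIEW_TOPIC_ID]&.to_i # deferred count of the initial page load
  end
end
```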
This will bring topic views more in line with the change we
made to page views in the referenced commit and result in
more realistic topic view counts.
This commit moves the logic for crawler rate limits out of the application controller and into the request tracker middleware. The reason for this move is to apply rate limits to all crawler requests instead of just the requests that make it to the application controller. Some requests are served early from the middleware stack without reaching the Rails app for performance reasons (e.g. `AnonymousCache`) which results in crawlers getting 200 responses even though they've reached their limits and should be getting 429 responses.
Internal topic: t/128810.
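A rough sketch of the idea, assuming illustrative class and helper
names; the real implementation consults the site's crawler rate limit
settings:
```ruby
# Illustrative middleware: reject crawlers over their limit before the rest
# of the stack (including caches like AnonymousCache) can answer with a 200.
class CrawlerRateLimiter
  def initialize(app)
    @app = app
  end

  def call(env)
    if crawler?(env) && over_limit?(env)
      return [429, { "Retry-After" => "60" }, ["Slow down, please."]]
    end
    @app.call(env)
  end

  private

  def crawler?(env)
    env["HTTP_USER_AGENT"].to_s.match?(/bot|crawl|spider/i)
  end

  def over_limit?(env)
    false # placeholder: consult the configured crawler rate limits here
  end
end
```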
The data `RequestTracker.get_data` works with can be nil for various
reasons, including rate limiting and middleware bugs. Handling that case
should resolve the warning we're seeing in the logs:
```
RequestTracker.get_data failed : NoMethodError : undefined method `[]' for nil:NilClass
```
Our 'page_view_crawler' / 'page_view_anon' metrics are based purely on the User Agent sent by clients. This means that 'badly behaved' bots which are imitating real user agents are counted towards 'anon' page views.
This commit introduces a new method of tracking visitors. When an initial HTML request is made, we assume it is a 'non-browser' request (i.e. a bot). Then, once the JS application has booted, we notify the server to count it as a 'browser' request. This reliance on a JavaScript-capable browser matches up more closely to dedicated analytics systems like Google Analytics.
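A sketch of the classification under these assumptions (helper names
are illustrative, and the header reuses the tracking header described
above):
```ruby
# Illustrative classification: only requests carrying the tracking header
# sent by the booted JS application count as 'browser' page views.
def page_view_type(env)
  if env["HTTP_DISCOURSE_TRACK_VIEW"]
    :browser     # the JS app booted and phoned home
  elsif env["HTTP_USER_AGENT"].to_s.match?(/bot|crawl|spider/i)
    :crawler     # self-identified bot
  else
    :non_browser # initial HTML request; assumed to be a bot
  end
end
```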
Existing data collection and graphs are unchanged. Data collected via the new technique is available in a new 'experimental' report.
This will make it easier to analyze rate limiting in reverse-proxy logs. To make this possible without a database lookup, we add the username to the encrypted `_t` cookie data.
Why this change?
This is a follow-up to e8f7b62752.
Tracking of GC stats didn't really belong in the `MethodProfiler` class,
so we want to extract that concern into its own class.
As part of this PR, the `track_gc_stat_per_request` site setting has
also been renamed to `instrument_gc_stat_per_request`.
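A minimal sketch of such a class, assuming a hypothetical name and
using GC.stat deltas around a block of work:
```ruby
# Hypothetical class name; captures GC.stat deltas around a block of work.
class GCStatInstrumenter
  def self.instrument
    before = GC.stat
    yield
    after = GC.stat
    {
      time: after[:time].to_i - before[:time].to_i, # ms; needs GC.measure_total_time (Ruby 3.1+)
      major_count: after[:major_gc_count] - before[:major_gc_count],
      minor_count: after[:minor_gc_count] - before[:minor_gc_count],
    }
  end
end

GCStatInstrumenter.instrument { 10_000.times { Object.new } }
```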
Adds stats for API and user API requests, similar to those for regular
page views. This comes with a new report to visualize API requests per
day, like the consolidated page views report.
This will allow consumers (e.g. the discourse-prometheus plugin) to separate topic-timings and message-bus requests. It also fixes the is_background boolean for subfolder sites.
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
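A sketch of how the rate limit bucket could be chosen under this scheme
(names are assumptions, not the exact implementation):
```ruby
# Illustrative bucket selection; the real logic lives in the request tracker.
SKIP_PER_IP_TRUST_LEVEL = 1 # mirrors skip_per_ip_rate_limit_trust_level

def rate_limit_key(ip:, user_id: nil, trust_level: nil)
  if user_id && trust_level && trust_level >= SKIP_PER_IP_TRUST_LEVEL
    "rate-limit-user:#{user_id}" # each qualifying user gets their own bucket
  else
    "rate-limit-ip:#{ip}"        # anonymous/low-trust traffic shares the IP bucket
  end
end
```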
Before this commit, Discourse's auth cookie (`_t`) was simply a
32-character string that Discourse used to look up the current user in
the database; the cookie contained no additional information about the
user. In this commit we change the cookie's content so we can identify
the user from the cookie without making a database query before the
rate limit logic runs, avoiding a bottleneck on busy sites.
Besides the 32-character auth token, the cookie now includes the user
id, trust level and the cookie's generation date, and we encrypt/sign
the cookie to prevent tampering.
Internal ticket number: t54739.
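A hedged sketch of what such a self-describing, encrypted cookie could
look like, using ActiveSupport's MessageEncryptor (key handling is
simplified and field names are illustrative):
```ruby
require "active_support"
require "active_support/key_generator"
require "active_support/message_encryptor"
require "securerandom"
require "json"

# Simplified key derivation and illustrative field names.
key = ActiveSupport::KeyGenerator.new("server secret").generate_key("_t cookie", 32)
encryptor = ActiveSupport::MessageEncryptor.new(key)

payload = {
  token: SecureRandom.hex(16), # the original 32-character auth token
  user_id: 42,
  trust_level: 2,
  issued_at: Time.now.to_i,
}.to_json

cookie_value = encryptor.encrypt_and_sign(payload)
# The rate limiter can now read the user id without a database query:
JSON.parse(encryptor.decrypt_and_verify(cookie_value))["user_id"] # => 42
```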
Allow admins to configure exceptions to our Rails rate limiter.
Configuration happens via environment variables, and works with both
IPs and CIDR blocks.
Example:
```
env:
DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS: >-
14.15.16.32/27
216.148.1.2
```
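A sketch of how such exceptions can be matched with Ruby's IPAddr,
assuming the env var has been split into entries:
```ruby
require "ipaddr"

# Both plain IPs and CIDR blocks parse to IPAddr ranges.
exceptions = "14.15.16.32/27 216.148.1.2".split.map { |s| IPAddr.new(s) }

def rate_limit_exempt?(exceptions, request_ip)
  ip = IPAddr.new(request_ip)
  exceptions.any? { |block| block.include?(ip) }
end

rate_limit_exempt?(exceptions, "14.15.16.40") # => true (inside the /27 block)
rate_limit_exempt?(exceptions, "10.0.0.1")    # => false
```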
For sites with login_required set to true, counting anonymous pageviews
is confusing. Requests to /login and other pages would make it look like
anonymous users have access to the site's content.
Fixes `Rack::Lint::LintError: a header value must be a String, but the value of 'Retry-After' is a Integer`. (see: 14a236b4f0/lib/rack/lint.rb (L676))
I found it when I got flooded by those warnings a while back in a test-related accident 😉 (ember CLI tests were hitting a local rails server at a fast rate)
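The essence of the fix, as a sketch:
```ruby
# Rack requires header values to be Strings, so the wait time is stringified.
retry_after = 10
headers = { "Retry-After" => retry_after.to_s } # "10", not the Integer 10
```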
The regular expression used to detect private IP addresses did not
always detect them successfully. Changed to use Ruby's built-in
IPAddr.new(ip_address).private? method instead, which does the same
thing but covers all cases.
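For example:
```ruby
require "ipaddr"

IPAddr.new("192.168.1.10").private? # => true
IPAddr.new("172.20.0.5").private?   # => true (a case a naive regex can miss)
IPAddr.new("8.8.8.8").private?      # => false
```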
Discourse returned a 429 error code with a 'Retry-After' header if a
RateLimiter::LimitExceeded was raised and unhandled, but the header was
missing if the request was limited in the 'RequestTracker' middleware.
Prior to this change, our anonymous rate limits acted as a throttle.
The new implementation also counts rate limited requests towards the
limit. This means that if an anonymous user is hammering the server,
they will not be able to get any requests through until the traffic
subsides.
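A toy sketch of the difference (in-memory for illustration; the real
limiters are Redis-backed):
```ruby
# Unlike a throttle, this counts rejected attempts toward the limit, so a
# client that keeps hammering never gets back under it until traffic stops.
class HardLimiter
  def initialize(max:, period:)
    @max, @period, @hits = max, period, []
  end

  def allowed?
    now = Time.now.to_f
    @hits.reject! { |t| t < now - @period }
    @hits << now # count the attempt even if it is about to be rejected
    @hits.size <= @max
  end
end
```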
Non-UTF-8 user_agent requests were bypassing logging because PG always
wants UTF-8 strings.
This adds a conversion to ensure we are always dealing with UTF-8.
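A sketch of the kind of conversion involved (exact method may differ):
```ruby
# Replace invalid byte sequences so PG always receives valid UTF-8.
def utf8_user_agent(user_agent)
  user_agent.to_s.force_encoding("UTF-8").scrub("")
end

utf8_user_agent("Mozilla/5.0 \xFF".b) # => "Mozilla/5.0 "
```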
This allows custom plugins such as the prometheus exporter to log how
many requests are stored in the anon cache vs. served from it.
This metric allows us to fine-tune cache behavior.
The message_bus performs a fair amount of work prior to hijacking
requests. This change ensures that if the server is flooded, message_bus
will inform clients to back off for 30 seconds + random(120 secs).
This back-off is ultra cheap and happens very early in the middleware.
It corrects a situation where a flood to message bus could cause the app
to become unresponsive.
The MessageBus update is included to ensure the message_bus gem properly
respects the Retry-After header and status 429.
Under normal conditions this code should never trigger; to disable it,
raise the value of DISCOURSE_REJECT_MESSAGE_BUS_QUEUE_SECONDS. The
default tells message bus to go away if we have been queueing for 100ms
or longer.
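A sketch of the early back-off under these assumptions (names and
plumbing are illustrative):
```ruby
# Illustrative plumbing; the threshold mirrors
# DISCOURSE_REJECT_MESSAGE_BUS_QUEUE_SECONDS (default 0.1s).
REJECT_QUEUE_SECONDS = 0.1

def maybe_reject_message_bus(queue_seconds)
  return nil if queue_seconds < REJECT_QUEUE_SECONDS
  backoff = 30 + rand(120) # back off 30 seconds + random(120 secs)
  [429, { "Retry-After" => backoff.to_s }, [""]]
end
```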
This adds support for DISCOURSE_ENABLE_PERFORMANCE_HTTP_HEADERS.
When set to `true`, this turns on performance-related headers:
```text
X-Redis-Calls: 10 # number of redis calls
X-Redis-Time: 1.02 # redis time in seconds
X-Sql-Commands: 102 # number of SQL commands
X-Sql-Time: 1.02 # duration in SQL in seconds
X-Queue-Time: 1.01 # time the request sat in queue (depends on NGINX)
```
To get queue time, NGINX must provide the HTTP_X_REQUEST_START header.
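A sketch of deriving queue time from that header, assuming the common
`t=<seconds.fraction>` format (the exact format depends on the NGINX
config):
```ruby
def queue_time(env)
  started = env["HTTP_X_REQUEST_START"].to_s.delete_prefix("t=").to_f
  started > 0 ? (Time.now.to_f - started).round(3) : nil
end
```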
We do not recommend enabling this without careful thought; it exposes
information about what your page is doing. Usually you would only enable
this if you intend to strip the headers further down the stream in a
proxy.
By default, this does nothing. Two environment variables are available:
- `DISCOURSE_LOG_SIDEKIQ`
Set to `"1"` to enable logging. This will log all completed jobs to `log/rails/sidekiq.log`, along with various db/redis/network statistics. This is useful to track down poorly performing jobs.
- `DISCOURSE_LOG_SIDEKIQ_INTERVAL`
(seconds) Check running jobs periodically, and log their current duration. They will appear in the logs with `status:pending`. This is useful to track down jobs which run for a long time and then crash sidekiq before completing.
This avoids `require_dependency` for method_profiler and the anon cache.
It means that if there is any change to these files, the reloader will not pick it up.
Previously the reloader was picking up the anon cache twice, causing it to double-load on boot, which caused warnings.
Long term, my plan is to give up on `require_dependency` and instead use:
https://github.com/Shopify/autoload_reloader