It seems like the overhead of GPU acceleration is not worth it and is
slowing down our system tests. Locally, the following command completes
in `2 minutes 8.6 seconds` with GPU disabled, compared to `2 minutes 45.4 seconds` with GPU enabled.
## What is the problem?
MessageBus uses long polling by default, which keeps a connection
open for 25 seconds. The problem is that Capybara does not know about these
connections being kept open by MessageBus, and hence does not know how
to stop them at the end of each test. As a result, the long-polling MessageBus connections are kept open by the browser and we hit Chrome's limit of 6 concurrent requests per host; any new request made by the browser is marked as "pending" until a slot is freed up. Since we keep a MessageBus long-polling connection open for 25 seconds, our Capybara finders end up hitting Capybara's wait timeout, causing the tests to fail.
## What is the fix?
Since we can't rely on Capybara to close the existing MessageBus
connections, we manually execute a script to stop all MessageBus
connections after each system test.
```
for i in {1..10}; do
  echo "Running iteration $i"
  PARALLEL_TEST_PROCESSORS=8 CAPYBARA_DEFAULT_MAX_WAIT_TIME=10 bin/turbo_rspec --seed=34908 --profile --verbose --format documentation spec/system
  if [ $? -ne 0 ]; then
    echo "Error encountered on iteration $i"
    exit 1
  fi
done
echo "All 10 iterations completed successfully"
```
Without the fix, the script fails consistently within the first few iterations. Running in non-headless mode with the "network" tab open will reveal the requests that are stuck as "pending".
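A minimal sketch of the kind of per-test cleanup described above, assuming the message_bus JS client's `MessageBus.stop()` function; the actual Discourse hook may be wired differently:
```
RSpec.configure do |config|
  config.after(:each, type: :system) do
    # Ask the client-side MessageBus to stop long polling so the browser
    # frees up its per-host connection slots before the next test.
    page.execute_script("if (window.MessageBus) { window.MessageBus.stop(); }")
  end
end
```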
Rescuing them still makes timing-out tests fail, but doesn't break the `after` spec cleanup (which could otherwise trigger more errors). A custom error class is used so that no other timeout-catching code swallows the error.
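A rough sketch of that rescue pattern, with illustrative names (this is not the actual Discourse code): re-raise the timeout as a custom error class so generic timeout-catching code elsewhere does not pick it up.
```
require "timeout"

class SystemTestTimeoutError < StandardError; end

def with_system_test_timeout(seconds)
  Timeout.timeout(seconds) { yield }
rescue Timeout::Error
  # Code rescuing Timeout::Error elsewhere will not catch this class,
  # and `after` spec cleanup still runs normally.
  raise SystemTestTimeoutError, "system test timed out after #{seconds}s"
end
```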
Also:
* remove an unnecessary `.select { |x| x.size > 0 }`
* fix a typo in a test title
Why is this change required?
By default, `RSpec` comes with a `--profile=[COUNT]` option as well, but
enabling it means that the entire test suite needs to be executed. This
does not work so well for `turbo_rspec`, which splits our test files into
"buckets" to be executed in multiple processes. Therefore, this commit adds
a similar `--profile=[COUNT]` option to `turbo_rspec` that only profiles the
tests being executed. Examples:
`LOAD_PLUGINS=1 bin/turbo_rspec --profile plugins/*/spec/system`
or
`LOAD_PLUGINS=1 bin/turbo_rspec --profile=20 plugins/*/spec/system`
What is this change?
This change is an attempt to avoid flakiness caused by animations being
enabled in our tests. An example of flakiness caused by animations is the
`find(selector).click` pattern: by the time `find(selector)` returns the
node, its position may have changed if the element is still moving, so the
subsequent `click` ends up targeting the old position.
Either way, there is no need for us to make system tests even more
complicated by enabling animations.
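For reference, a minimal sketch of one way to disable animations in system specs, assuming Capybara's built-in `disable_animation` option; the actual Discourse change may take a different route:
```
# In the spec setup (e.g. rails_helper.rb): have Capybara inject CSS/JS that
# turns off transitions and animations on pages served to system specs.
require "capybara/rspec"

Capybara.disable_animation = true
```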
Usage:
```
CHROME_DEV_TOOLS=bottom bundle exec rspec /path/to/system/spec
```
This commit also regroups common Chrome options under `apply_base_chrome_options`, and removes the incorrect mobile window size. The `browser_log` param is now also passed to the mobile Chrome options.
The new headless mode shares the same implementation as the Chrome browser
instead of being a separate implementation of its own.
See https://developer.chrome.com/articles/new-headless/ for more
details.
Co-authored-by: Rafael dos Santos Silva <xfalcox@gmail.com>
This commit also includes two changes to the rails helper that make tests more consistent across devices. With this change, the failure was reproducible locally and not only on CI:
```
options.add_argument("--force-device-scale-factor=1")
```
The fix itself is quite simple and attempts to find safe click coordinates; the previous solution could fail depending on the size of the sidebar.
* Color for turbo_rspec in CI (`progress` and `documentation` formats)
* Show "DONE" only when `documentation` formatter is used
* Fix formatting
* Collapse RSpec commands
* Add line wrapping to the `progress` formatter (to mitigate GH Actions issue)
This pull request is a full overhaul of the chat-composer and contains various improvements to the thread panel. They have been grouped in the same PR because lots of improvements/fixes to the thread panel needed an improved composer. This is meant as a first step.
### New features included in this PR
- A resizable side panel
- A clear dropzone area for uploads
- A simplified design for image uploads, this is only a first step towards more redesign of this area in the future
### Notable fixes in this PR
- Correct placeholder in thread panel
- Allows editing the last message of a thread with the up arrow
- Correctly focuses the composer when replying to a message
- The reply indicator is added instantly in the channel when starting a thread
- Prevents a large variety of bugs where the composer could break and prevent sending a message, or would clear your input while it still had content
### Technical notes
To achieve this PR, three important changes have been made:
- `<ChatComposer>` has been fully rewritten and is now a glimmer component
- The chat composer now takes a `ChatMessage` as input which can directly be used in other operations; this simplifies a lot of logic as we are always working with a `ChatMessage`
- `TextareaInteractor` has been created to wrap the existing `TextareaTextManipulation` mixin, it will make future migrations easier and allow us to have a less polluted `<ChatComposer>`
Note ".chat-live-pane" has been renamed ".chat-channel"
Design for upload dropzone is from @chapoi
In a similar spirit to e195e6f614,
this moves the Bookmarkable registration to DiscoursePluginRegistry
so plugins which are not enabled do not register additional
bookmarkable classes.
This commit introduces the skeleton of the chat thread UI. The
structure of the components looks like this. It's done this way
so the side panel can be used for other things as well if we wish,
not just for threads:
```
.main-chat-outlet
<ChatLivePane />
<ChatSidePanel>
<-- rendered with {{outlet}} -->
<ChatThread />
</ChatSidePanel>
```
Later on the `ChatThreadList` will be rendered here as well.
Now, when you go to a channel you can open a thread by clicking
either the Open Thread message action button or the reply indicator.
This will take you to a route like `chat/c/:slug/:channelId/t/:threadId`.
This works on mobile as well.
This commit includes basic serializers and routes for threads,
as well as a new `ChatThreadsManager` service in JS that caches
threads for a channel the same way the channel threads manager does.
The chat messages inside the thread are intentionally left out
until a later PR.
**NOTE: These changes are gated behind the site setting enable_experimental_chat_threaded_discussions
and the threading_enabled boolean on a ChatChannel**
* DEV: Rename channel path to just c
Also swap the channel id and channel slug params to be consistent with core.
* linting
* channel_path
* Drop slugify helper and channel route without slug
* Request slug and route models through the channel model if possible
* DEV: Pass messageId as a dynamic segment instead of a query param
* Ensure change is backwards-compatible
* drop query param from oneboxes
* Correctly extract channelId from routes
* Better route organization using siblings for regular and near-message
* Ensures sessions are unique even when using parallelism
* prevents didReceiveAttrs from clearing input mid-test
* we disable animations in capybara so sometimes the message was barely showing
* adds wait
* ensures finished loading
* is it causing more harm than good?
* this check is slowing things for no reason
* actually target the button
* more resilient select chat message
* apply similar fix to bookmark
* fix
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
In dev/prod, these are absorbed by unicorn. Most commonly, they occur when a client interrupts a message-bus long-polling request.
Also reverts the EPIPE workaround introduced in 011c9b9973
Having this set to ALL pollutes the JS system spec
logs with a bunch of unnecessary noise like this:
> "PresenceChannel '/chat-user/core/1' dropped message (received 315, expecting 246), resyncing..."
Or:
> "DEPRECATION: The \u003Cdiscourse@component:plugin-connector::ember1112>#save computed property was just overridden. This removes the computed property and replaces it with a plain value, and has been deprecated.
Now, we will only log errors. To configure this, set
the `SELENIUM_BROWSER_LOG_LEVEL` env var.
Our working theory is that system tests on GitHub run on much less
powerful hardware compared to our work machines.
Hopefully, increasing the wait time now will help reduce some flakes
that we're seeing on GitHub.
It should fix flaky tests we have due to `using_session`. This commit also fixes tests which were failing constantly with threadsafe mode enabled.
A test has also been skipped as the cause of the issue couldn't be found so far.
More info: https://github.com/teamcapybara/capybara#threadsafe-mode
Previously, browser logs would be printed to STDOUT halfway through the test run. This commit changes the behaviour so that the logs are included in the failure summary along with other rspec failure information.
Currently the `turbo:spec` task will fail when encountering system
tests as Capybara tries to use the same port for each process.
This simple change uses the same strategy as for databases, by just
incrementing the port number by `TEST_ENV_NUMBER` for each process.
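A minimal sketch of that strategy; the base port number here is illustrative, not the exact value used:
```
# Give each parallel test process its own Capybara server port, mirroring
# how the database name is derived from TEST_ENV_NUMBER.
Capybara.server_port = 3434 + ENV["TEST_ENV_NUMBER"].to_i
```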
We are all in on system specs, so this commit moves all the chat quoting acceptance tests (some of which have been skipped for a while) into system specs.
Follow up to a review in #18937, this commit changes the HashtagAutocompleteService to no longer use class variables to register hashtag data sources or types in context priority order. This is to address multisite concerns, where one site could e.g. have chat disabled and another might not. The filtered plugin registers I added will not be included if the plugin is disabled.
This ensures that all system tests start from a clean state and
do not leak state between requests. Note that we have to simply flush the
Redis db here because it is not practical to manually clean up Redis keys
in system tests.
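A rough sketch of the cleanup described above, assuming Discourse's `Discourse.redis` wrapper; the real hook may differ:
```
RSpec.configure do |config|
  config.after(:each, type: :system) do
    # Flushing the whole test Redis DB is simpler than tracking down every
    # key a system test may have touched.
    Discourse.redis.flushdb
  end
end
```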
This will make your test run in an emulated iPhone 12 Pro view. It means you can now use `click(delay: 0.5)` to emulate a long press, and that `mobile_view=1` will be set automatically.
Usage:
```
it "works", mobile: true do
visit("/")
end
```
Note: `window-size=390,950` is different from the native iPhone 12 Pro size, but due to the minimum browser size and the automated-browser alert at the top of the view, this was the best size I could find.
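A sketch of how such a mobile setup can be wired with selenium-webdriver's Chrome emulation options; the device metrics below are illustrative, not necessarily what Discourse uses:
```
options = Selenium::WebDriver::Chrome::Options.new
options.add_argument("--window-size=390,950")
# Emulate a touch device roughly matching an iPhone 12 Pro viewport.
# Key names may need adjusting depending on your selenium-webdriver version.
options.add_emulation(
  device_metrics: { width: 390, height: 950, pixelRatio: 3, touch: true }
)
```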
This commit fleshes out and adds functionality for the new `#hashtag` search and
lookup system, still hidden behind the `enable_experimental_hashtag_autocomplete`
feature flag.
**Serverside**
We have two plugin API registration methods that are used to define data sources
(`register_hashtag_data_source`) and hashtag result type priorities depending on
the context (`register_hashtag_type_in_context`). Reading the comments in plugin.rb
should make it clear what these are doing. Reading the `HashtagAutocompleteService`
in full will likely help a lot as well.
Each data source is responsible for providing its own **lookup** and **search**
method that returns hashtag results based on the arguments provided. For example,
the category hashtag data source has to take into account parent categories and
how they relate, and each data source has to define its own icon to use for the
hashtag, and so on.
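A hypothetical data source sketch based on the description above; the exact method names and signatures expected by `HashtagAutocompleteService` may differ:
```
class ChannelHashtagDataSource
  def self.icon
    "comment" # each data source defines its own icon for the hashtag
  end

  def self.lookup(guardian, slugs)
    # Return hashtag results for exact slugs, respecting the guardian's
    # permissions (e.g. secure chat channels the user cannot see).
  end

  def self.search(guardian, term, limit)
    # Return up to `limit` hashtag results matching `term`.
  end
end
```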
The `Site` serializer has two new attributes that source data from `HashtagAutocompleteService`.
There is `hashtag_icons` that is just a simple array of all the different icons that
can be used for allowlisting in our markdown pipeline, and there is `hashtag_context_configurations`
that is used to store the type priority orders for each registered context.
When sending emails, we cannot render the SVG icons for hashtags, so
we need to change the HTML hashtags to the normal `#hashtag` text.
**Markdown**
The `hashtag-autocomplete.js` file is where I have added the new `hashtag-autocomplete`
markdown rule, and like all of our rules this is used to cook the raw text on both the clientside
and on the serverside using MiniRacer. Only on the server side do we actually reach out to
the database with the `hashtagLookup` function; on the clientside we just render a plainer
version of the hashtag HTML. Only in the composer preview do we do further lookups based
on this.
This rule is the first one (that I can find) that uses the `currentUser` based on a passed
in `user_id` for guardian checks in markdown rendering code. This is the `last_editor_id`
for both the post and chat message. In some cases we need to cook without a user present,
so the `Discourse.system_user` is used in this case.
**Chat Channels**
This also contains the changes required for chat so that chat channels can be used
as a data source for hashtag searches and lookups. This data source will only be
used when `enable_experimental_hashtag_autocomplete` is `true`, so we don't have
to worry about channel results suddenly turning up.
------
**Known Rough Edges**
- Onebox excerpts will not render the icon svg/use tags, I plan to address that in a follow up PR
- Selecting a hashtag + pressing the Quote button will result in weird behaviour, I plan to address that in a follow up PR
- Mixed hashtag contexts for hashtags without a type suffix will not work correctly, e.g. #ux which is both a category and a channel slug will resolve to a category when used inside a post or within a [chat] transcript in that post. Users can get around this manually by adding the correct suffix, for example ::channel. We may get to this at some point in future
- Icons will not show for the hashtags in emails since SVG support is so terrible in email (this is not likely to be resolved, but still noting for posterity)
- Additional refinements and review fixes will follow
This commit adds a new `/hashtag/search` endpoint and both
relevant JS and ruby plugin APIs to handle plugins adding their
own data sources and priority orders for types of things to search
when `#` is pressed.
A `context` param is added to `setupHashtagAutocomplete` which
a corresponding chat PR https://github.com/discourse/discourse-chat/pull/1302
will now use.
The UI calls `registerHashtagSearchParam` for each context that will
require a `#` search (e.g. the topic composer), for each type of record that
the context needs to search for, as well as a priority order for that type. Core
uses this call to add the `category` and `tag` data sources to the topic composer.
The `register_hashtag_data_source` ruby plugin API call is for plugins to
add a new data source for the hashtag searching endpoint, e.g. discourse-chat
may add a `channel` data source.
This functionality is hidden behind the `enable_experimental_hashtag_autocomplete`
flag, except for the change to `setupHashtagAutocomplete` since only core and
discourse-chat are using that function. Note this PR does **not** include required
changes for hashtag lookup or new styling.
This commit introduces rails system tests run with chromedriver, selenium,
and headless chrome to our testing toolbox.
We use the `webdrivers` gem and `selenium-webdriver` which is what
the latest Rails uses so the tests run locally and in CI out of the box.
You can use `SELENIUM_VERBOSE_DRIVER_LOGS=1` to show extra
verbose logs of what selenium is doing to communicate with the system
tests.
By default JS logs are verbose, so errors from JS are shown when
running system tests; you can disable this with
`SELENIUM_DISABLE_VERBOSE_JS_LOGS=1`.
You can use `SELENIUM_HEADLESS=0` to run the system
tests inside a chrome browser instead of headless, which can be useful to debug things
and see what the spec sees. See note above about `bin/ember-cli` to avoid
surprises.
I have modified `bin/turbo_rspec` to exclude `spec/system` by default;
support for parallel system specs is a little shaky right now and we don't
want them slowing down the turbo run by default either.
### PageObjects and System Tests
To make querying and inspecting parts of the page easier
and more reusable between system tests, we are using the
concept of [PageObjects](https://www.selenium.dev/documentation/test_practices/encouraged/page_object_models/) in
our system tests. A "Page" here generally corresponds to
an overarching Ember route, e.g. "Topic" for `/t/324345/some-topic`,
and this contains logic for querying components within the topic
such as "Posts".
I have also split "Modals" into their own entity. Further down the
line we may want to explore creating independent "Component"
contexts.
The Capybara DSL should be included in each PageObject class;
a reference for this can be found at https://rubydoc.info/github/teamcapybara/capybara/master#the-dsl
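A minimal PageObject sketch in that spirit; the class name and selectors are illustrative, not the actual Discourse page objects:
```
require "capybara/dsl"

module PageObjects
  module Pages
    class Topic
      include Capybara::DSL

      # Navigate to the topic's ember route.
      def visit_topic(topic)
        visit("/t/#{topic.slug}/#{topic.id}")
        self
      end

      # Query a component within the topic, e.g. a specific post.
      def has_post_number?(number)
        has_css?("#post_#{number}")
      end
    end
  end
end
```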
For system tests, since they are so slow, we want to focus on
the "happy path" and not do every different possible context
and branch check using them. They are meant to be overarching
tests that check a number of things are correct using the full stack
from JS and ember to rails to ruby and then the database.
### CI Setup
Whenever a system spec fails, a screenshot
is taken and a build artifact is produced _after the entire CI run is complete_,
which can be downloaded from the Actions UI in the repo.
Most importantly, a step to build the Ember app using Ember CLI
is needed, otherwise the JS assets cannot be found by capybara:
```
- name: Build Ember CLI
run: bin/ember-cli --build
```
A new `--build` argument has been added to `bin/ember-cli` for this
case, which is not needed locally if you already have the discourse
rails server running via `bin/ember-cli -u` since the whole server is built and
set up by default.
Co-authored-by: David Taylor <david@taylorhq.com>
The cache was causing state to leak between tests since the `WatchedWord` record in the DB would have been rolled back but `WordWatcher` still had the word in the cache.
This pull request follows on from https://github.com/discourse/discourse/pull/16308. This one does the following:
* Changes `BookmarkQuery` to allow for querying more than just Post and Topic bookmarkables
* Introduces a `Bookmark.register_bookmarkable` method which requires a model, serializer, fields and preload includes for searching (a registration sketch follows this list). These registered `Bookmarkable` types are then used when validating new bookmarks, and also when determining which serializer to use for the bookmark list. The `Post` and `Topic` bookmarkables are registered by default.
* Adds new specific types for Post and Topic bookmark serializers along with preloading of associations in `UserBookmarkList`
* Changes to the user bookmark list template to allow for more generic bookmarkable types alongside the Post and Topic ones which need to display in a particular way
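A hypothetical registration call shaped after the description in the list above; the real keyword arguments accepted by `Bookmark.register_bookmarkable` may differ:
```
Bookmark.register_bookmarkable(
  model: Post,
  serializer: PostBookmarkSerializer,
  search_fields: ["posts.raw"],             # illustrative "fields for searching"
  preload_associations: [:topic, :user]     # illustrative preload includes
)
```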
All of these changes are gated behind the `use_polymorphic_bookmarks` site setting, apart from the .hbs changes where I have updated the original `UserBookmarkSerializer` with some stub methods.
Following this PR will be several plugin PRs (for assign, chat, encrypt) that will register their own bookmarkable types or otherwise alter the bookmark serializers in their own way, also gated behind `use_polymorphic_bookmarks`.
This commit also removes `BookmarkQuery.preloaded_custom_fields` and the functionality surrounding it. It was added in 0cd502a558 but only used by one plugin (discourse-assign) where it has since been removed, and is now used by no plugins. We don't need it anymore.
Previously, cached counting made Redis calls in the main thread and performed
the flush in the main thread.
This could lead to pathological states under extremely heavy load.
This refactor reduces load and cleans up the interface.
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove an obsolete dev puma `tmp/restart` file logic
* File.exists? is deprecated and removed in Ruby 3.2 in favor of
File.exist?
* Dir.exists? is deprecated and removed in Ruby 3.2 in favor of
Dir.exist?
OmniAuth test mode is disabled by default, so that we can integration-test the omniauth strategies. Sometimes, we manually enable test mode for specific specs. This commit ensures that test_mode is always disabled again after each spec.
We can fake redis transactions so that `fab!` works for redis and PG
data, but it's too slow to be used indiscriminately. Instead, you can
opt into it with the `use_redis_snapshotting` helper.
Insofar as snapshotting allows us to `fab!` more things, it provides a
speedup.
Some tests don't pass when this is elevated. They should be fixed,
since, at some point, we may create enough uploads during tests that
they fail naturally.
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
Before this commit, Discourse's auth cookie (`_t`) was simply a 32 characters
string that Discourse used to lookup the current user from the database and the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limits logic and avoid
introducing a bottleneck on busy sites.
Besides the 32 characters auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
Internal ticket number: t54739.
Same issue as 28b00dc6fc: the
Mocha::ExpectationError inherits from Exception instead
of StandardError, so RspecErrorTracker does not show the
actual failed expectation in request specs; the status of
the response is just 500 with no further detail.
This commit adds the RailsMultisite middleware in test mode when Rails.configuration.multisite is true. This allows for much more realistic integration testing. The `multisite_spec.rb` file is rewritten to avoid needing to simulate a middleware stack.
* DEV: Output webmock errors in request specs
In request specs, if you had not properly mocked an external
HTTP call, you would end up with a 500 error with no further
information instead of your expected response code, with an
rspec output like this:
```
Failures:
1) UploadsController#generate_presigned_put when the store is external generates a presigned URL and creates an external upload stub
Failure/Error: expect(response.status).to eq(200)
expected: 200
got: 500
(compared using ==)
# ./spec/requests/uploads_controller_spec.rb:727:in `block (4 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
```
This is not helpful at all when you want to find what you actually
failed to mock, which is shown straight away in non-request specs.
This commit introduces a rescue_from block in the application
controller to log this error, so we have a much nicer output that
helps the developer find the issue:
```
Failures:
1) UploadsController#generate_presigned_put when the store is external generates a presigned URL and creates an external upload stub
Failure/Error: expect(response.status).to eq(200)
expected: 200
got: 500
(compared using ==)
# ./spec/requests/uploads_controller_spec.rb:727:in `block (4 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# WebMock::NetConnectNotAllowedError:
# Real HTTP connections are disabled. Unregistered request: GET https://s3-upload-bucket.s3.us-west-1.amazonaws.com/?cors with headers {'Accept'=>'*/*', 'Accept-Encoding'=>'', 'Authorization'=>'AWS4-HMAC-SHA256 Credential=some key/20211101/us-west-1/s3/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=test', 'Host'=>'s3-upload-bucket.s3.us-west-1.amazonaws.com', 'User-Agent'=>'aws-sdk-ruby3/3.121.2 ruby/2.7.1 x86_64-linux aws-sdk-s3/1.96.1', 'X-Amz-Content-Sha256'=>'test', 'X-Amz-Date'=>'20211101T035113Z'}
#
# You can stub this request with the following snippet:
#
# stub_request(:get, "https://s3-upload-bucket.s3.us-west-1.amazonaws.com/?cors").
# with(
# headers: {
# 'Accept'=>'*/*',
# 'Accept-Encoding'=>'',
# 'Authorization'=>'AWS4-HMAC-SHA256 Credential=some key/20211101/us-west-1/s3/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=test',
# 'Host'=>'s3-upload-bucket.s3.us-west-1.amazonaws.com',
# 'User-Agent'=>'aws-sdk-ruby3/3.121.2 ruby/2.7.1 x86_64-linux aws-sdk-s3/1.96.1',
# 'X-Amz-Content-Sha256'=>'test',
# 'X-Amz-Date'=>'20211101T035113Z'
# }).
# to_return(status: 200, body: "", headers: {})
#
# registered request stubs:
#
# stub_request(:head, "https://s3-upload-bucket.s3.us-west-1.amazonaws.com/")
#
# ============================================================
```
* DEV: Require webmock in application controller if rails.env.test
* DEV: Rescue from StandardError and NetConnectNotAllowedError
We weren't calling clear_all! for the rate limiter, which
was the first problem; the second problem was that it
is very odd to do state cleanup before tests instead of after,
so the disabling and clear_all! calls were moved to after.
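A minimal sketch of the resulting cleanup order, assuming the `RateLimiter.disable` and `RateLimiter.clear_all!` helpers referenced above; the exact Discourse hook may differ:
```
RSpec.configure do |config|
  config.after(:each) do
    # Clean up rate limiter state after the test rather than before the next one.
    RateLimiter.disable
    RateLimiter.clear_all!
  end
end
```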
* Move onebox gem in core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change
- comments
- test descriptions
- other low risk areas
It makes SimpleCov work with turbo_rspec and uses the default Rails
configuration (with some changes) to group files by their type
(models, controllers, etc).
Previously it matched the behavior of standard ActiveRecord after_commit callbacks. They do not work well within `joinable: false` nested transactions. Now `DB.after_commit` callbacks will only be run when the outermost transaction has been committed.
Tests always run inside transactions, so this also introduces some logic to run callbacks once the test-wrapping transaction is reached.
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb and removed unused stubs and let definitions. This makes it easier to write new S3-related specs without copying and pasting setup steps from other specs.
We removed pry-nav a while back because it is not up to date with pry but it is super useful. Luckily pry-byebug is here to save us all from Satan's power.
To get this to work you need to add the following to your $HOME/.pryrc file.
```
if defined?(PryByebug)
Pry.commands.alias_command 'c', 'continue'
Pry.commands.alias_command 's', 'step'
Pry.commands.alias_command 'n', 'next'
Pry.commands.alias_command 'f', 'finish'
end
Pry::Commands.command /^$/, "repeat last command" do
pry_instance.run_command Pry.history.to_a.last
end
```
The require-ing of pry, pry-rails, and pry-byebug in specs is controlled by the IMPROVED_SPEC_DEBUGGING flag (disabled by default).
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
* FIX: randomize file name when created from fixtures
When a temporary file is created from fixtures it should have a unique name.
This is to prevent collisions when specs run in parallel (see the sketch below).
* FIX: use /tmp/pid folder to keep fixture files
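A sketch of the combined approach from the two fixes above; the helper name and paths are illustrative:
```
require "fileutils"
require "securerandom"

def copy_fixture_to_tmp(fixture_path)
  # Keep per-process fixture copies under /tmp/<pid> and give each copy a
  # unique name so parallel spec processes never collide.
  dir = File.join("/tmp", Process.pid.to_s)
  FileUtils.mkdir_p(dir)
  target = File.join(dir, "#{SecureRandom.hex(8)}-#{File.basename(fixture_path)}")
  FileUtils.cp(fixture_path, target)
  target
end
```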
Use a helper method to simplify creating a new register. Previously this would require creating lots of different methods manually and adding every register to the clear/reset functions.
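A self-contained sketch of the pattern described above, with illustrative names (not the actual DiscoursePluginRegistry code): one helper declares the register and records it so a single reset call clears them all.
```
require "set"

class ExampleRegistry
  def self.registers
    @registers ||= []
  end

  # Declares an accessor for the register and records it so reset! knows
  # about it, instead of hand-writing a method plus reset wiring each time.
  def self.define_register(name, type)
    registers << name
    define_singleton_method(name) do
      @data ||= {}
      @data[name] ||= type.new
    end
  end

  def self.reset!
    @data = {}
  end

  define_register :javascripts, Set
  define_register :stylesheets, Set
end
```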
The latest version of Rails contains compatibility fixes for Ruby 2.7 and some
minor security fixes we would like to have.
It also broke some of the multisite tests.
Rails tries to use the same connection for reading from a replica as writing
to the leader during tests, because, with everything happening in a
transaction, changes to the DB wouldn't otherwise be reflected in the
replica connection.
The difference now is that Rails tries to do this for connections opened
after the test has started which affected rails multisite connections.
The upshot of this is that, as things stand, you are likely to
experience problems if you try to connect to a different multisite DB in
a test when the `current_db` is not 'default'.
This should catch a few scenarios which can waste a lot of time in development
- Forgetting to run migrations
- Missing plugin migrations
- Missing post_deploy migrations
Zeitwerk simplifies working with dependencies in dev and makes it easier to reload class chains.
We no longer need to use Rails "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
* Updated test-prof
* Made rails_helper.rb use new test-prof APIs
Instead of the previous temporary hacks.
* Added environment option to disable prefabrication
It was removed mistakenly
Before: 6:05
After: 5:42
Featuring topics for `list/categories` is a very expensive operation that
happened each time we created a topic. This introduces a test-only bypass.
* Introduced fab!, a helper that creates database state for a group of tests
It's almost identical to let_it_be, except:
1. It creates a new object for each test by default,
2. You can disable it using PREFABRICATION=0
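A usage sketch, assuming the Discourse spec environment (Fabricate and a Topic model):
```
RSpec.describe "prefabrication" do
  # Like let_it_be, but a new object is created for each test by default,
  # and prefabrication can be disabled with PREFABRICATION=0.
  fab!(:topic) { Fabricate(:topic) }

  it "finds the fabricated topic" do
    expect(Topic.find(topic.id)).to eq(topic)
  end
end
```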
This change automatically resizes icons for various purposes. Admins can now upload `logo` and `logo_small`, and everything else will be auto-generated. Specific icons can still be uploaded separately if required.
## Core
- Adds a SiteIconManager module which manages automatic resizing and fallback
- Icons are looked up in the OptimizedImage table at runtime, and then cached in Redis. If the resized version is missing for some reason, then most icons will fall back to the original files. Some icons (e.g. PWA Manifest) will return `nil` (because an incorrectly sized icon is worse than a missing icon).
- `SiteSetting.site_large_icon_url` will return the optimized version, including any fallback. `SiteSetting.large_icon` continues to return the upload object. This means that (almost) no changes are required in core/plugins to support this new system.
- Icons are resized whenever a relevant site setting is changed, and during post-deploy migrations
## Wizard
- Allows `requiresRefresh` wizard steps to reload data via AJAX instead of a full page reload
- Add placeholders to the **icons** step of the wizard, which automatically update from the "Square Logo"
- Various copy updates to support the changes
- Remove the "upload-time" resizing for `large_icon`. This is no longer required.
## Site Settings UX
- Move logo/icon settings under a new "Branding" tab
- Various copy changes to support the changes
- Adds placeholder support to the `image-uploader` component
- Automatically reloads site settings after saving. This allows setting placeholders to change based on changes to other settings
- Upload site settings will be assigned a placeholder if SiteIconManager `responds_to?` an icon of the same name
## Dashboard Warnings
- Remove PWA icon and PWA title warnings. Both are now handled automatically.
## Bonus
- Updated the sketch logos to use @awesomerobot's new high-res designs
This change both speeds up specs (fewer strings to allocate) and helps catch
cases where methods in Discourse are mutating inputs.
Overall we will be migrating everything to use `# frozen_string_literal: true`;
it will take a while, but this is the first and safest move in this direction.
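For reference, the magic comment goes at the top of each Ruby file:
```
# frozen_string_literal: true

# String literals in this file are now frozen, so accidental in-place
# mutation raises a FrozenError instead of silently changing shared state.
GREETING = "hello"
GREETING.frozen? # => true
```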
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
Previously we would bypass touching `Topic.updated_at` for whispers and post
recovery / deletions.
This meant that certain types of caching could not be done where we rely on
this information for cache accuracy.
For example, if we know we have zero unread topics as of yesterday and a whisper
is made, we need to bump this date so the cache remains accurate.
This is only half of a larger change but provides the groundwork.
Confirmed none of our serializers leak out Topic.updated_at, so this is a safe
spot for this info.
At the moment edits still do not change this but it is not relevant for the
unread cache.
This commit also cleans up some specs to use the new `eq_time` matcher for
millisecond-fidelity comparison of times.
Previously, `freeze_time` would fudge this, which is not that clean.
It is not a setting, and only relevant in specs. The new API is:
```
Jobs.run_later! # jobs will be thrown on the queue
Jobs.run_immediately! # jobs will run right away, avoid the queue
```
SiteSettingExtension triggers MessageBus, which re-establishes a
DB connection in `SiteSettingExtension#process_message`. That happens
concurrently, and a test that requires a connection to the DB will
fail while the reconnection is happening.
This commit introduces a new helper to enable transactional fixtures
when testing multisite. Without it, this would show up as tests that passed the
first time then failed the second time due to stale data being left over.
This moves us away from the delayed drops pattern which
was problematic on two counts. First, it uses a hardcoded "delay for"
duration which may be too short for certain deployment strategies.
Second, delayed drop doesn't ensure that it only runs after
the latest application code has been deployed. If the migration runs
and the application code fails to deploy, running the migration after
"delay for" has been met will cause the application to blow up.
The new strategy allows post deployment migrations to be skipped if the
env `SKIP_POST_DEPLOYMENT_MIGRATIONS` is provided.
```
SKIP_POST_DEPLOYMENT_MIGRATIONS=1 rake db:migrate
-> deploy app servers
SKIP_POST_DEPLOYMENT_MIGRATIONS=0 rake db:migrate
```
To aid with the generation of a post deployment migration, a generator
has been added. Simply run `rails generate post_migration`.
This refactors it so the "Defaults provider" is only responsible for "defaults".
Locale handling and management of locale settings is moved back into
SiteSettingExtension.
This eliminates complex state management using DistributedCache and makes
it way easier to test SiteSettingExtension.
This allows our request specs to report exceptions so we can debug.
This may have a few false positives but generally should be quiet.
TODO: only wire the magic in for request specs; currently it happens for all.
This means that when a service is load balanced and you reach rate limits,
there was a case where the counting was way off.
Also removes the stub from clock_gettime because we need to be super careful with
it, so we should probably just stub it by hand when needed.
This refactors the handling of S3 so it can be specified via GlobalSetting.
This means that in a multisite environment you can configure S3 uploads
without the actual sites knowing the S3 credentials.
It is a critical setting for situations where assets are mirrored to S3.
This change-set allows setting different defaults for different locales.
It also:
- Adds extensive testing around site setting validation
- raises deprecation error if site setting has the default property based on env
- relocated site settings for dev and tests in the initializer
- deprecated client_setting in the site setting's loading process
- ensures it raises when an enum site setting is being set
- default_locale is promoted to `required` category.
- fixes incorrect default setting and validation
- fixes ensure type check for site settings
- creates a benchmark for site setting
- sets reasonable defaults for Chinese
Rails yanked out observers many years ago; instead, the functionality
was moved to a gem that is very lightly maintained.
For example, if we want to upgrade to Rails 5 there is no published gem.
Internally, the usage of observers had quite a few problems.
The series of refactors renamed a bunch of classes to give us more clarity
and removed some magic.
The opensearch.xml results in a "site search engine" being added to
Chrome, while the sitelinks search tag results in "Search this website"
being added to Google Search.
Since rspec-rails 3, the default installation creates two helper files:
* `spec_helper.rb`
* `rails_helper.rb`
`spec_helper.rb` is intended as a way of running specs that do not
require Rails, whereas `rails_helper.rb` loads Rails (as Discourse's
current `spec_helper.rb` does).
For more information:
https://www.relishapp.com/rspec/rspec-rails/docs/upgrade#default-helper-files
In this commit, I've simply replaced all instances of `spec_helper` with
`rails_helper`, and renamed the original `spec_helper.rb`.
This brings the Discourse project closer to the standard usage of RSpec
in a Rails app.
At present, every spec relies on loading Rails, but there are likely
many that don't need to. In a future pull request, I hope to introduce a
separate, minimal `spec_helper.rb` which can be used in tests which
don't rely on Rails.