In a similar spirit to e195e6f614, this moves the Bookmarkable registration to DiscoursePluginRegistry so that plugins which are not enabled do not register additional bookmarkable classes.
This commit introduces the skeleton of the chat thread UI. The
structure of the components looks like this. It's done this way
so the side panel can be used for other things as well if we wish,
not just for threads:
```
.main-chat-outlet
<ChatLivePane />
<ChatSidePanel>
<!-- rendered with {{outlet}} -->
<ChatThread />
</ChatSidePanel>
```
Later on the `ChatThreadList` will be rendered here as well.
Now, when you go to a channel you can open a thread by clicking
either the Open Thread message action button or the reply indicator.
This will take you to a route like `chat/c/:slug/:channelId/t/:threadId`.
This works on mobile as well.
This commit includes basic serializers and routes for threads,
as well as a new `ChatThreadsManager` service in JS that caches
threads for a channel the same way the chat channels manager caches channels.
The chat messages inside the thread are intentionally left out
until a later PR.
**NOTE: These changes are gated behind the `enable_experimental_chat_threaded_discussions` site setting
and the `threading_enabled` boolean on a `ChatChannel`.**
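For orientation, a minimal sketch of the kind of guard this gating implies; the helper name is hypothetical, while the setting and flag come straight from the note above:

```
# Hypothetical helper illustrating the gate described above.
def threading_available?(channel)
  SiteSetting.enable_experimental_chat_threaded_discussions && channel.threading_enabled
end
```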
* DEV: Rename channel path to just c
Also swap the channel id and channel slug params to be consistent with core.
* linting
* channel_path
* Drop slugify helper and channel route without slug
* Request slug and route models through the channel model if possible
* DEV: Pass messageId as a dynamic segment instead of a query param
* Ensure change is backwards-compatible
* drop query param from oneboxes
* Correctly extract channelId from routes
* Better route organization using siblings for regular and near-message
* Ensures sessions are unique even when using parallelism
* prevents didReceiveAttrs from clearing input mid-test
* we disable animations in Capybara, so sometimes the message was barely showing
* adds wait
* ensures finished loading
* is it causing more harm than good?
* this check is slowing things for no reason
* actually target the button
* more resilient select chat message
* apply similar fix to bookmark
* fix
---------
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
In dev/prod, these are absorbed by unicorn. Most commonly, they occur when a client interrupts a message-bus long-polling request.
Also reverts the EPIPE workaround introduced in 011c9b9973
Having this set to ALL pollutes the JS system spec
logs with a bunch of unnecessary noise like this:
> "PresenceChannel '/chat-user/core/1' dropped message (received 315, expecting 246), resyncing..."
Or:
> "DEPRECATION: The \u003Cdiscourse@component:plugin-connector::ember1112>#save computed property was just overridden. This removes the computed property and replaces it with a plain value, and has been deprecated.
Now, we will only log errors. To configure this set
the `SELENIUM_BROWSER_LOG_LEVEL` env var.
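A rough sketch of how such a log level typically gets wired into the Chrome driver; the exact plumbing in our driver setup is an assumption:

```
# Map the env var onto Chrome's logging preferences (sketch only).
browser_log_level = ENV.fetch("SELENIUM_BROWSER_LOG_LEVEL", "SEVERE")

options = Selenium::WebDriver::Chrome::Options.new
options.add_option("goog:loggingPrefs", { browser: browser_log_level })
```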
Our working theory is that system tests on GitHub run on much less
powerful hardware than our work machines.
Hopefully, increasing the wait time will help reduce some of the flakes
that we're seeing on GitHub.
It should fix flaky specs we have due to using_session. This commit also fixes tests which were failing constantly with threadsafe enabled.
A test has also been skipped as the issue couldn't be tracked down so far.
More info: https://github.com/teamcapybara/capybara#threadsafe-mode
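For reference, opting into threadsafe mode is a one-line Capybara setting (it has to be set before any session is created); where exactly it lives in our spec setup is an assumption:

```
# Enable Capybara's threadsafe mode before any sessions exist.
Capybara.threadsafe = true
```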
Previously, browser logs would be printed to STDOUT halfway through the test run. This commit changes the behaviour so that the logs are included in the failure summary along with other rspec failure information.
Currently the `turbo:spec` task will fail when encountering system
tests as Capybara tries to use the same port for each process.
This simple change uses the same strategy as for databases, by just
incrementing the port number by `TEST_ENV_NUMBER` for each process.
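Roughly, the change amounts to something like the following (the base port number here is illustrative):

```
# Give each turbo_rspec process its own Capybara server port,
# mirroring the per-process database naming.
Capybara.server_port = 31_337 + ENV["TEST_ENV_NUMBER"].to_i
```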
We are all in on system specs, so this commit moves all the chat quoting acceptance tests (some of which have been skipped for a while) into system specs.
Follow up to a review in #18937, this commit changes the HashtagAutocompleteService to no longer use class variables to register hashtag data sources or types in context priority order. This is to address multisite concerns, where one site could e.g. have chat disabled and another might not. The filtered plugin registers I added will not be included if the plugin is disabled.
This ensures that all system tests start from a clean state and
do not leak state between tests. Note that we have to simply flush the
Redis db here because it is not practical to manually clean up Redis keys
in system tests.
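A sketch of the idea, assuming an RSpec hook scoped to system specs; the exact hook and Redis handle in our rails_helper may differ:

```
RSpec.configure do |config|
  config.after(:each, type: :system) do
    # Not practical to delete individual keys, so flush the test Redis db.
    Discourse.redis.flushdb
  end
end
```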
This makes your test run in an emulated iPhone 12 Pro view. It means you can now use `click(delay: 0.5)` to emulate a long press, and `mobile_view=1` will be set automatically.
Usage:
```
it "works", mobile: true do
visit("/")
end
```
Note: `window-size=390,950` differs from the native iPhone 12 Pro size, but due to the minimum browser size and the automated-browser alert at the top of the view, this was the best size I could find.
This commit fleshes out and adds functionality for the new `#hashtag` search and
lookup system, still hidden behind the `enable_experimental_hashtag_autocomplete`
feature flag.
**Serverside**
We have two plugin API registration methods that are used to define data sources
(`register_hashtag_data_source`) and hashtag result type priorities depending on
the context (`register_hashtag_type_in_context`). Reading the comments in plugin.rb
should make it clear what these are doing. Reading the `HashtagAutocompleteService`
in full will likely help a lot as well.
Each data source is responsible for providing its own **lookup** and **search**
methods that return hashtag results based on the arguments provided. For example,
the category hashtag data source has to take into account parent categories and
how they relate, each data source has to define its own icon to use for the
hashtag, and so on.
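To make the shape concrete, here is a hypothetical data source; the class name and exact method signatures are assumptions, the real contract lives in `HashtagAutocompleteService`:

```
# Hypothetical data source sketch for chat channels.
class ChannelHashtagDataSource
  def self.icon
    "comment" # every source declares the icon used for its hashtags
  end

  def self.lookup(guardian, slugs)
    # resolve exact slugs into hashtag items the user is allowed to see
  end

  def self.search(guardian, term, limit)
    # return up to `limit` hashtag items matching the partial term
  end
end
```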
The `Site` serializer has two new attributes that source data from `HashtagAutocompleteService`:
`hashtag_icons`, which is a simple array of all the different icons that
can be used for allowlisting in our markdown pipeline, and `hashtag_context_configurations`,
which stores the type priority orders for each registered context.
When sending emails, we cannot render the SVG icons for hashtags, so
we need to change the HTML hashtags to the normal `#hashtag` text.
**Markdown**
The `hashtag-autocomplete.js` file is where I have added the new `hashtag-autocomplete`
markdown rule, and like all of our rules this is used to cook the raw text on both the client side
and the server side using MiniRacer. Only on the server side do we actually reach out to
the database with the `hashtagLookup` function; on the client side we just render a plainer
version of the hashtag HTML. Only in the composer preview do we do further lookups based
on this.
This rule is the first one (that I can find) that uses the `currentUser` based on a passed-in
`user_id` for guardian checks in markdown rendering code. This is the `last_editor_id`
for both the post and chat message. In some cases we need to cook without a user present,
so `Discourse.system_user` is used in that case.
**Chat Channels**
This also contains the changes required for chat so that chat channels can be used
as a data source for hashtag searches and lookups. This data source will only be
used when `enable_experimental_hashtag_autocomplete` is `true`, so we don't have
to worry about channel results suddenly turning up.
------
**Known Rough Edges**
- Onebox excerpts will not render the icon svg/use tags, I plan to address that in a follow up PR
- Selecting a hashtag + pressing the Quote button will result in weird behaviour, I plan to address that in a follow up PR
- Mixed hashtag contexts for hashtags without a type suffix will not work correctly, e.g. #ux which is both a category and a channel slug will resolve to a category when used inside a post or within a [chat] transcript in that post. Users can get around this manually by adding the correct suffix, for example ::channel. We may get to this at some point in future
- Icons will not show for the hashtags in emails since SVG support is so terrible in email (this is not likely to be resolved, but still noting for posterity)
- Additional refinements and review fixes will follow
This commit adds a new `/hashtag/search` endpoint and both
relevant JS and ruby plugin APIs to handle plugins adding their
own data sources and priority orders for types of things to search
when `#` is pressed.
A `context` param is added to `setupHashtagAutocomplete`, which
a corresponding chat PR (https://github.com/discourse/discourse-chat/pull/1302)
will now use.
The UI calls `registerHashtagSearchParam` for each context that will
require a `#` search (e.g. the topic composer), for each type of record that
the context needs to search for, as well as a priority order for that type. Core
uses this call to add the `category` and `tag` data sources to the topic composer.
The `register_hashtag_data_source` ruby plugin API call is for plugins to
add a new data source for the hashtag searching endpoint, e.g. discourse-chat
may add a `channel` data source.
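In plugin.rb that might look something like the following; the argument shape is a guess, check the plugin API comments for the real signature:

```
# Hypothetical registration of a chat channel data source.
register_hashtag_data_source(:channel, ChannelHashtagDataSource)
```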
This functionality is hidden behind the `enable_experimental_hashtag_autocomplete`
flag, except for the change to `setupHashtagAutocomplete` since only core and
discourse-chat are using that function. Note this PR does **not** include required
changes for hashtag lookup or new styling.
This commit introduces rails system tests run with chromedriver, selenium,
and headless chrome to our testing toolbox.
We use the `webdrivers` gem and `selenium-webdriver` which is what
the latest Rails uses so the tests run locally and in CI out of the box.
You can use `SELENIUM_VERBOSE_DRIVER_LOGS=1` to show extra
verbose logs of what selenium is doing to communicate with the system
tests.
By default JS logs are verbose, so errors from JS are shown when
running system tests; you can disable this with
`SELENIUM_DISABLE_VERBOSE_JS_LOGS=1`.
You can use `SELENIUM_HEADLESS=0` to run the system
tests inside a chrome browser instead of headless, which can be useful to debug things
and see what the spec sees. See note above about `bin/ember-cli` to avoid
surprises.
I have modified `bin/turbo_rspec` to exclude `spec/system` by default,
support for parallel system specs is a little shaky right now and we don't
want them slowing down the turbo by default either.
### PageObjects and System Tests
To make querying and inspecting parts of the page easier
and more reusable between system tests, we are using the
concept of [PageObjects](https://www.selenium.dev/documentation/test_practices/encouraged/page_object_models/) in
our system tests. A "Page" here generally corresponds to
an overarching ember route, e.g. "Topic" for `/t/324345/some-topic`,
and it contains logic for querying components within the topic
such as "Posts".
I have also split "Modals" into their own entity. Further down the
line we may want to explore creating independent "Component"
contexts.
Capybara DSL should be included in each PageObject class; a
reference for this can be found at https://rubydoc.info/github/teamcapybara/capybara/master#the-dsl
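A minimal PageObject sketch; the class name, namespace and selectors below are made up for illustration:

```
module PageObjects
  module Pages
    class Topic
      include Capybara::DSL

      def visit_topic(topic)
        visit("/t/#{topic.slug}/#{topic.id}")
        self
      end

      def has_post_number?(number)
        has_css?("#post_#{number}")
      end
    end
  end
end
```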
For system tests, since they are so slow, we want to focus on
the "happy path" and not do every different possible context
and branch check using them. They are meant to be overarching
tests that check a number of things are correct using the full stack
from JS and ember to rails to ruby and then the database.
### CI Setup
Whenever a system spec fails, a screenshot
is taken and a build artifact is produced _after the entire CI run is complete_,
which can be downloaded from the Actions UI in the repo.
Most importantly, a step to build the Ember app using Ember CLI
is needed; otherwise the JS assets cannot be found by Capybara:
```
- name: Build Ember CLI
run: bin/ember-cli --build
```
A new `--build` argument has been added to `bin/ember-cli` for this
case, which is not needed locally if you already have the discourse
rails server running via `bin/ember-cli -u` since the whole server is built and
set up by default.
Co-authored-by: David Taylor <david@taylorhq.com>
The cache was causing state to leak between tests since the `WatchedWord` record in the DB would have been rolled back but `WordWatcher` still had the word in the cache.
This pull request follows on from https://github.com/discourse/discourse/pull/16308. This one does the following:
* Changes `BookmarkQuery` to allow for querying more than just Post and Topic bookmarkables
* Introduces a `Bookmark.register_bookmarkable` method which requires a model, serializer, fields and preload includes for searching. These registered `Bookmarkable` types are then used when validating new bookmarks, and also when determining which serializer to use for the bookmark list. The `Post` and `Topic` bookmarkables are registered by default.
* Adds new specific types for Post and Topic bookmark serializers along with preloading of associations in `UserBookmarkList`
* Changes to the user bookmark list template to allow for more generic bookmarkable types alongside the Post and Topic ones which need to display in a particular way
All of these changes are gated behind the `use_polymorphic_bookmarks` site setting, apart from the .hbs changes where I have updated the original `UserBookmarkSerializer` with some stub methods.
Following this PR will be several plugin PRs (for assign, chat, encrypt) that will register their own bookmarkable types or otherwise alter the bookmark serializers in their own way, also gated behind `use_polymorphic_bookmarks`.
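For illustration, such a plugin registration might look roughly like this; the keyword names are assumptions based on the requirements listed above (model, serializer, search fields, preload includes):

```
# Hypothetical registration from a plugin such as chat.
Bookmark.register_bookmarkable(
  model: ChatMessage,
  serializer: UserChatMessageBookmarkSerializer,
  search_fields: ["chat_messages.message"],
  preload_includes: [:chat_channel],
)
```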
This commit also removes `BookmarkQuery.preloaded_custom_fields` and the functionality surrounding it. It was added in 0cd502a558 but only used by one plugin (discourse-assign) where it has since been removed, and is now used by no plugins. We don't need it anymore.
Previously, cached counting made Redis calls in the main thread and performed
the flush in the main thread.
This could lead to pathological states under extremely heavy load.
This refactor reduces load and cleans up the interface.
Also:
* Remove an unused method (`#fill_email`)
* Replace a method that was used just once (`#generate_username`) with `SecureRandom.alphanumeric`
* Remove an obsolete dev puma `tmp/restart` file logic
* `File.exists?` is deprecated and removed in Ruby 3.2 in favor of `File.exist?`
* `Dir.exists?` is deprecated and removed in Ruby 3.2 in favor of `Dir.exist?`
OmniAuth test mode is disabled by default, so that we can integration-test the omniauth strategies. Sometimes, we manually enable test mode for specific specs. This commit ensures that test_mode is always disabled again after each spec.
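The pattern amounts to a global after hook; its exact placement in our spec setup is an assumption:

```
RSpec.configure do |config|
  config.after(:each) do
    # Whatever a spec did, make sure the next one starts with test mode off.
    OmniAuth.config.test_mode = false
  end
end
```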
We can fake redis transactions so that `fab!` works for redis and PG
data, but it's too slow to be used indiscriminately. Instead, you can
opt into it with the `use_redis_snapshotting` helper.
Insofar as snapshotting allows us to `fab!` more things, it provides a
speedup.
Some tests don't pass when this is elevated. They should be fixed,
since, at some point, we may create enough uploads during tests that
they fail naturally.
Currently, Discourse rate limits all incoming requests by the IP address they
originate from regardless of the user making the request. This can be
frustrating if there are multiple users using Discourse simultaneously while
sharing the same IP address (e.g. employees in an office).
This commit implements a new feature to make Discourse apply rate limits by
user id rather than IP address for users at or higher than the configured trust
level (1 is the default).
For example, let's say a Discourse instance is configured to allow 200 requests
per minute per IP address, and we have 10 users at trust level 4 using
Discourse simultaneously from the same IP address. Before this feature, the 10
users could only make a total of 200 requests per minute before they got rate
limited. But with the new feature, each user is allowed to make 200 requests
per minute because the rate limits are applied on user id rather than the IP
address.
The minimum trust level for applying user-id-based rate limits can be
configured by the `skip_per_ip_rate_limit_trust_level` global setting. The
default is 1, but it can be changed by either adding the
`DISCOURSE_SKIP_PER_IP_RATE_LIMIT_TRUST_LEVEL` environment variable with the
desired value to your `app.yml`, or changing the setting's value in the
`discourse.conf` file.
Requests made with API keys are still rate limited by IP address and the
relevant global settings that control API keys rate limits.
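Purely as an illustration of the keying decision described above (the real logic lives in the request rate limiter and differs in detail):

```
# Sketch: pick a per-user key for trusted users, fall back to per-IP.
def rate_limit_key(request, user)
  threshold = GlobalSetting.skip_per_ip_rate_limit_trust_level
  if user && user.trust_level >= threshold
    "user/#{user.id}"
  else
    "ip/#{request.ip}"
  end
end
```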
Before this commit, Discourse's auth cookie (`_t`) was simply a 32-character
string that Discourse used to look up the current user from the database, and the
cookie contained no additional information about the user. However, we had to
change the cookie content in this commit so we could identify the user from the
cookie without making a database query before the rate limit logic and avoid
introducing a bottleneck on busy sites.
Besides the 32-character auth token, the cookie now includes the user id,
trust level and the cookie's generation date, and we encrypt/sign the cookie to
prevent tampering.
Internal ticket number: t54739.
Same issue as 28b00dc6fc: Mocha::ExpectationError inherits from
Exception instead of StandardError, so RspecErrorTracker does not show
the actual failed expectation in request specs; the status of
the response is just 500 with no further detail.
This commit adds the RailsMultisite middleware in test mode when Rails.configuration.multisite is true. This allows for much more realistic integration testing. The `multisite_spec.rb` file is rewritten to avoid needing to simulate a middleware stack.
* DEV: Output webmock errors in request specs
In request specs, if you had not properly mocked an external
HTTP call, you would end up with a 500 error with no further
information instead of your expected response code, with an
rspec output like this:
```
Failures:
1) UploadsController#generate_presigned_put when the store is external generates a presigned URL and creates an external upload stub
Failure/Error: expect(response.status).to eq(200)
expected: 200
got: 500
(compared using ==)
# ./spec/requests/uploads_controller_spec.rb:727:in `block (4 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
```
This is not helpful at all when you want to find what you actually
failed to mock, which is shown straight away in non-request specs.
This commit introduces a rescue_from block in the application
controller to log this error, so we have a much nicer output that
helps the developer find the issue:
```
Failures:
1) UploadsController#generate_presigned_put when the store is external generates a presigned URL and creates an external upload stub
Failure/Error: expect(response.status).to eq(200)
expected: 200
got: 500
(compared using ==)
# ./spec/requests/uploads_controller_spec.rb:727:in `block (4 levels) in <top (required)>'
# ./spec/rails_helper.rb:280:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# WebMock::NetConnectNotAllowedError:
# Real HTTP connections are disabled. Unregistered request: GET https://s3-upload-bucket.s3.us-west-1.amazonaws.com/?cors with headers {'Accept'=>'*/*', 'Accept-Encoding'=>'', 'Authorization'=>'AWS4-HMAC-SHA256 Credential=some key/20211101/us-west-1/s3/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=test', 'Host'=>'s3-upload-bucket.s3.us-west-1.amazonaws.com', 'User-Agent'=>'aws-sdk-ruby3/3.121.2 ruby/2.7.1 x86_64-linux aws-sdk-s3/1.96.1', 'X-Amz-Content-Sha256'=>'test', 'X-Amz-Date'=>'20211101T035113Z'}
#
# You can stub this request with the following snippet:
#
# stub_request(:get, "https://s3-upload-bucket.s3.us-west-1.amazonaws.com/?cors").
# with(
# headers: {
# 'Accept'=>'*/*',
# 'Accept-Encoding'=>'',
# 'Authorization'=>'AWS4-HMAC-SHA256 Credential=some key/20211101/us-west-1/s3/aws4_request, SignedHeaders=host;user-agent;x-amz-content-sha256;x-amz-date, Signature=test',
# 'Host'=>'s3-upload-bucket.s3.us-west-1.amazonaws.com',
# 'User-Agent'=>'aws-sdk-ruby3/3.121.2 ruby/2.7.1 x86_64-linux aws-sdk-s3/1.96.1',
# 'X-Amz-Content-Sha256'=>'test',
# 'X-Amz-Date'=>'20211101T035113Z'
# }).
# to_return(status: 200, body: "", headers: {})
#
# registered request stubs:
#
# stub_request(:head, "https://s3-upload-bucket.s3.us-west-1.amazonaws.com/")
#
# ============================================================
```
* DEV: Require webmock in application controller if Rails.env.test?
* DEV: Rescue from StandardError and NetConnectNotAllowedError
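A hedged sketch of that rescue_from block; its placement inside ApplicationController and the exact logging are assumptions:

```
# Inside ApplicationController, test environment only (sketch).
if Rails.env.test?
  require "webmock"

  rescue_from WebMock::NetConnectNotAllowedError do |error|
    Rails.logger.error("Unmocked HTTP request in spec: #{error.message}")
    render plain: error.message, status: 500
  end
end
```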
We weren't calling clear_all! for the rate limiter, which
was the first problem. The second problem was that it
is very odd to do state cleanup before tests instead of after,
so the disabling and clear_all! have been moved to after.
* Move onebox gem into core library
* Update template file path
* Remove warning for onebox gem caching
* Remove onebox version file
* Remove onebox gem
* Add sanitize gem
* Require onebox library in lazy-yt plugin
* Remove onebox web specific code
This code was used in the standalone onebox Sinatra application
* Merge Discourse specific AllowlistedGenericOnebox engine in core
* Fix onebox engine filenames to match class name casing
* Move onebox specs from gem into core
* DEV: Rename `response` helper to `onebox_response`
Fixes a naming collision.
* Require rails_helper
* Don't use `before/after(:all)`
* Whitespace
* Remove fakeweb
* Remove poor unit tests
* DEV: Re-add fakeweb, plugins are using it
* Move onebox helpers
* Stub Instagram API
* FIX: Follow additional redirect status codes (#476)
Don’t throw errors if we encounter 303, 307 or 308 HTTP status codes in responses
* Remove an empty file
* DEV: Update the license file
Using the copy from https://choosealicense.com/licenses/gpl-2.0/#
Hopefully this will enable GitHub to show the license UI?
* DEV: Update embedded copyrights
* DEV: Add Onebox copyright notice
* DEV: Add MIT license, convert COPYRIGHT.txt to md
* DEV: Remove an incorrect copyright claim
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: jbrw <jamie@goatforce5.org>
Over the years we accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change:
- comments
- test descriptions
- other low risk areas
It makes SimpleCov work with turbo_rspec and uses the default Rails
configuration (with some changes) to group files by their type
(models, controllers, etc).
Previously it matched the behavior of standard ActiveRecord after_commit callbacks. They do not work well within `joinable: false` nested transactions. Now `DB.after_commit` callbacks will only be run when the outermost transaction has been committed.
Tests always run inside transactions, so this also introduces some logic to run callbacks once the test-wrapping transaction is reached.
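A usage sketch of the new semantics (the job name and arguments are illustrative):

```
ActiveRecord::Base.transaction do
  ActiveRecord::Base.transaction(requires_new: true, joinable: false) do
    post.save!
    DB.after_commit { Jobs.enqueue(:process_post, post_id: post.id) }
  end
  # the callback has not run yet at this point...
end
# ...it runs here, once the outermost transaction has committed
```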
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and `let` definitions. This makes it easier to write new S3-related specs without copying and pasting setup steps from other specs.
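A sketch of how a spec might pull these in; the module and method names here are assumptions based on the description above:

```
RSpec.describe UploadsController do
  include UploadsHelpers # hypothetical module name from spec/support/uploads_helpers.rb

  # hypothetical helper that stubs the S3 store settings instead of
  # repeating the setup in every spec
  before { setup_s3 }
end
```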