We are seeing the following error in our logs when Sidekiq is sent a
`USR1` signal in production when logrotate happens:
```
log writing failed. stream closed in another thread
Error encountered while starting Sidekiq: can't be called from trap context\n/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/util.rb:71:in `reopen'
```
I'm not quite sure where the error is triggered from so I'm improving
the way we log errors.
We were using `autoclose` as the topic status update
when silently closing topics using the bulk
actions (introduced in 0464ddcd9b).
However, this resulted in a message like this showing in
the topic as a small moderator post:
> This topic was automatically closed after X days.
This is not accurate; the topic was bulk closed by someone.
Instead, we can use `closed` as the status, and a more accurate
> Closed on DATE
message is used. `TopicStatusUpdater` needed an additional
option to keep the same "fake read" behaviour as autoclose
so we can keep the same functionality for silently closing
topics in bulk actions.
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
This was previously reverted in 97847f6. This version includes a babel transformation which works around the bug in Safari <= 15.
For Cloudflare compatibility issues, check https://meta.discourse.org/t/311390
Not all HTML elements are converted into Markdown. Some are kept as HTML.
Without this fix XML/HTML entities that are formatted as text instead of code are swallowed by Discourse.
This also fixes quotes in the `title` attribute of the `<abbr>` tag.
Previously `HtmlToMarkdown` always converted HTML tables into Markdown tables. That led to some badly formatted Markdown tables, e.g. when the table contained `rowspan` or `colspan`. This solves the issue by using very basic HTML tables in those cases.
Inline the helper functions, avoid creating and then immediately destructuring arrays, use complete strings instead of string interpolation, and use a Map instead of a POJO.
This commit introduces a hidden `s3_inventory_bucket` site setting which
replaces the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings.
The reason the `enable_s3_inventory` and `s3_configure_inventory_policy`
site settings are removed is that this feature has technically been
broken since it was introduced. When the `enable_s3_inventory` feature
is turned on, the app will configure a daily inventory policy for the
`s3_upload_bucket` bucket and store the inventories under a prefix in
the bucket. The problem here is that once the inventories are created,
there is nothing cleaning up all these inventories, so whoever has
enabled this feature would have been paying the cost of storing a whole
bunch of inventory files which are never used. Given that we have not
received any complaints about inventory files inflating S3 storage costs,
we think that it is very likely that this feature is no longer being
used, and we are looking to drop support for this feature in the not too
distant future.
For now, we will still support a hidden `s3_inventory_bucket` site
setting which site administrators can configure via the
`DISCOURSE_S3_INVENTORY_BUCKET` environment variable.
Continued work on moderate flags UI.
In this PR admins are allowed to change the order of flags. The notify user flag is always on top but all other flags can be moved.
Previously "themes frontend" CI job would:
1. pull compatible versions of themes that happened to be in the base image
2. clone all official themes (overriding the compatible versions from 1.)
3. run tests
This commit adds the ability for site administrators to mark users'
passwords as expired. Note that this commit does not add any client side
interface to mark a user's password as expired.
The following changes are introduced in this commit:
1. Adds a `user_passwords` table and `UserPassword` model. While the
`user_passwords` table is currently used to only store expired
passwords, it will be used in the future to store a user's current
password as well.
2. Adds a `UserPasswordExpirer.expire_user_password` method which can
be used from the Rails console to mark a user's password as expired.
3. Updates `SessionsController#create` to check that the user's current
password has not been marked as expired after confirming the
password. If the password is determined to be expired based on the
existence of a `UserPassword` record with the `password_expired_at`
column set, we will not log the user in and will display a password
expired notice. A forgot password email is automatically sent out to
the user as well.
This commit introduces the following changes:
1. Introduce the `SignalTrapLogger` singleton which starts a single
thread that polls a queue to log messages with the specified logger.
This thread is necessary because most loggers cannot be used inside
the `Signal.trap` context as they rely on mutexes, which are not
allowed within that context. (A sketch of this idea follows the list.)
2. Moves the monkey patch in `freedom_patches/unicorn_http_server_patch.rb` to
`config/unicorn.config.rb` which is already monkey patching
`Unicorn::HttpServer`.
3. `Unicorn::HttpServer` will now automatically send a `USR2` signal to
a unicorn worker 2 seconds before the worker is timed out by the
Unicorn master.
4. When a Unicorn worker receives a `USR2` signal, it will now log only
the main thread's backtraces to `Rails.logger`. Previously, it was
`puts`-ing the backtraces to `STDOUT`, which most people wouldn't read.
Logging it via `Rails.logger` will make the backtraces easily
accessible via `/logs`.
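Below is a minimal sketch of the idea behind point 1 (names and details are illustrative, not the actual `SignalTrapLogger` implementation): a `Thread::Queue` can generally be pushed to from a trap handler, while most loggers cannot be used there, so the trap context only enqueues and a dedicated thread does the logging.
```ruby
# Illustrative sketch: a trap-safe logger that defers the actual logging to a
# background thread, since loggers rely on mutexes that cannot be taken inside
# a `Signal.trap` handler.
class TrapSafeLogger
  def initialize(logger)
    @logger = logger
    @queue = Queue.new
  end

  # Intended to be safe from within `Signal.trap`: it only enqueues the message.
  def log(severity, message)
    @queue << [severity, message]
  end

  # Start the single background thread that drains the queue and performs the
  # actual logging outside of any trap context.
  def start
    @thread ||= Thread.new do
      loop do
        severity, message = @queue.pop
        @logger.add(severity, message)
      end
    end
  end
end
```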
This commit introduces the following changes which allow a site
administrator to mark `Upload` records with the `s3_file_missing`
verification status, which will result in the `Upload` record being ignored when
`Discourse.store.list_missing_uploads` is run on a site where S3 uploads
are enabled and `SiteSetting.enable_s3_inventory` is set to `true`.
1. Introduce `s3_file_missing` to `Upload.verification_statuses`
2. Introduce `Upload.mark_invalid_s3_uploads_as_missing` which updates
`Upload#verification_status` of all `Upload` records from `invalid_etag` to `s3_file_missing` (sketched after this list).
3. Introduce `rake uploads:mark_invalid_s3_uploads_as_missing` Rake task
which allows a site administrator to change `Upload` records with
`invalid_etag` verification status to the `s3_file_missing`
verification status.
4. Update `S3Inventory` to ignore `Upload` records with the
`s3_file_missing` verification status.
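A rough sketch of the update described in point 2 (the actual implementation may differ):
```ruby
# Sketch: flip all uploads currently marked invalid_etag to s3_file_missing.
def self.mark_invalid_s3_uploads_as_missing
  Upload.where(
    verification_status: Upload.verification_statuses[:invalid_etag],
  ).update_all(
    verification_status: Upload.verification_statuses[:s3_file_missing],
  )
end
```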
When uploading a video, the composer will now show a thumbnail image in
the composer preview instead of just the video placeholder image.
If `enable_diffhtml_preview` is enabled the video will be rendered in
the composer preview and is playable.
This commit updates all Sidekiq signal handling event logs to go through
Unicorn's logger instead of logging to STDOUT. Going through a proper logger
means the log messages are logged in the format which the logger has configured.
This means we get proper timestamps for the log messages.
Introduced back in 2022 in
e3d495850d,
our new more specific message-id format for inbound and
outbound emails has now been in use for a very long time,
so we can remove support for the old formats:
`topic/:topic_id/:post_id.:random@:host`
`topic/:topic_id@:host`
`topic/:topic_id.:random@:host`
Service objects were moved to core in https://github.com/discourse/discourse/pull/26506.
However, `ServiceRunner` should be moved as well, mostly so CI can run effortlessly without loading plugins.
This commit moves the logic for crawler rate limits out of the application controller and into the request tracker middleware. The reason for this move is to apply rate limits to all crawler requests instead of just the requests that make it to the application controller. Some requests are served early from the middleware stack without reaching the Rails app for performance reasons (e.g. `AnonymousCache`) which results in crawlers getting 200 responses even though they've reached their limits and should be getting 429 responses.
Internal topic: t/128810.
This commit adds a `DISCOURSE_DUMP_BACKTRACES_ON_UNICORN_WORKER_TIMEOUT`
environment variable that will allow us to dump all backtraces for all threads of
a Unicorn worker 2 seconds before it times out. In development,
backtraces are dumped to `STDOUT`, and in production we will dump them to
`unicorn.stdout.log`.
We want to dump all the backtraces to make it easier to identify the
cause of a Unicorn worker timing out.
This commit updates `S3Inventory#files` to ignore S3 inventory files
whose `last_modified` timestamp is not at least 2 days
older than the `BackupMetadata.last_restore_date` timestamp.
This check was previously only in `Jobs::EnsureS3UploadsExistence` but
`S3Inventory` can also be used via Rake tasks so this protection needs
to be in `S3Inventory` and not in the scheduled job.
* DEV: allow reply_by_email, visit_link_to_respond strings to be modified by plugins
* DEV: separate visit_link_to_respond and reply_by_email modifiers out
This PR adds bulk actions to the user's bookmarks.
With this feature, users can select multiple bookmarks and either delete them or clear their reminders.
This fixes `PrettyText.make_all_links_absolute` to better handle subfolder installs.
In subfolder installs, when given the cooked version of a post, links to mentions include the `Discourse.base_path` prefix. Adding the `Discourse.base_url` was doubling the `Discourse.base_path`.
The issue was hidden by the specs, which were stubbing `Discourse.base_url` instead of relying on `Discourse.base_path`.
This fixes the "algorithm" used in `PrettyText.make_all_links_absolute` to better handle this case and corrects the specs to properly handle subfolder cases.
There are lots of changes in the specs due to a refactoring to use squiggly heredoc strings for easier reading and less escaping.
This commit adds a step in our tests workflow on Github actions to update the themes to
use the compatible version when not running against the `main` branch.
This is to ensure that we are not running
the tests for themes against an incompatible version of Discourse.
AuthProvider#enabled_setting=, used primarily by plugins, has been deprecated since version 2.9, in favour of Authenticator#enabled?. This PR confirms we are seeing no more usage and removes the method.
Whenever one creates, updates, or deletes a post, we should keep the `topic.word_count` counter in sync.
Context - https://meta.discourse.org/t/-/308062
This commit updates `FinalDestination#get` to not forward the
`Authorization` header on redirects, since most HTTP clients I tested, like
curl and wget, do not forward it.
This also fixes a recent problem in `DiscourseIpInfo.mmdb_download`
where we would fail to download the databases when both `GlobalSetting.maxmind_account_id` and
`GlobalSetting.maxmind_license_key` have been set. The failure is due to
the bug above where the redirected URL given by MaxMind does not accept
an `Authorization` header.
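A minimal illustration of the redirect behaviour described above (helper name and structure are assumptions, not `FinalDestination`'s actual code):
```ruby
require "net/http"
require "uri"

# Sketch: follow a redirect without forwarding the Authorization header.
def follow_redirect(location, original_headers)
  headers = original_headers.reject { |name, _| name.casecmp?("Authorization") }
  Net::HTTP.get_response(URI.parse(location), headers)
end
```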
This commit optimises the database query generated by
`TopicsFilter#filter_categories` when the `-category:*` filter is used.
Previously, the method would add the `topics.category_id NOT IN
(<category ids to be excluded>)` filter to the resulting query. However,
we noticed that the performance of the query degrades as the number of
rows in the `topics` table grows and when the number of category ids to be
excluded is large.
Sample query we ran on a large database in production to demonstrate
the improvement:
Before:
```
SELECT topics.id FROM topics WHERE topics.category_id NOT IN (83, 136, 149, 143, 153, 165, 161, 123, 155, 163, 144, 134, 69, 135, 158, 141, 151, 160, 131, 133, 89, 104, 150, 147, 132, 145, 108, 146, 122, 100, 128, 154, 95, 102, 140, 139, 88, 91, 87) ORDER BY topics.id DESC LIMIT 5;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=27795.34..27795.34 rows=1 width=4) (actual time=29.317..30.165 rows=5 loops=1)
-> Sort (cost=27795.34..27795.34 rows=1 width=4) (actual time=29.316..30.163 rows=5 loops=1)
Sort Key: id DESC
Sort Method: top-N heapsort Memory: 25kB
-> Gather (cost=1000.10..27795.33 rows=1 width=4) (actual time=0.187..26.132 rows=73478 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on topics (cost=0.10..26795.23 rows=1 width=4) (actual time=0.013..22.252 rows=24493 loops=3)
Filter: (category_id <> ALL ('{83,136,149,143,153,165,161,123,155,163,144,134,69,135,158,141,151,160,131,133,89,104,150,147,132,145,108,146,122,100,128,154,95,102,140,139,88,91,87}'::integer[]))
Rows Removed by Filter: 77276
Planning Time: 0.140 ms
Execution Time: 30.181 ms
```
After:
```
SELECT topics.id FROM topics WHERE NOT EXISTS (
SELECT 1
FROM unnest(array[83, 136, 149, 143, 153, 165, 161, 123, 155, 163, 144, 134, 69, 135, 158, 141, 151, 160, 131, 133, 89, 104, 150, 147, 132, 145, 108, 146, 122, 100, 128, 154, 95, 102, 140, 139, 88, 91, 87]) AS excluded_categories(category_id)
WHERE topics.category_id IS NULL OR excluded_categories.category_id = topics.category_id
) ORDER BY topics.id DESC LIMIT 5 ;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.42..13.52 rows=5 width=4) (actual time=0.028..0.110 rows=5 loops=1)
-> Nested Loop Anti Join (cost=0.42..179929.62 rows=68715 width=4) (actual time=0.027..0.109 rows=5 loops=1)
Join Filter: ((topics.category_id IS NULL) OR (excluded_categories.category_id = topics.category_id))
Rows Removed by Join Filter: 239
-> Index Scan Backward using forum_threads_pkey on topics (cost=0.42..108925.71 rows=305301 width=8) (actual time=0.012..0.062 rows=44 loops=1)
-> Function Scan on unnest excluded_categories (cost=0.00..0.39 rows=39 width=4) (actual time=0.000..0.001 rows=6 loops=44)
Planning Time: 0.126 ms
Execution Time: 0.124 ms
(8 rows)
```
This commit changes request method for "categories/search" from GET to
POST to make sure that long filters can be passed to the server. For
example, category selectors with many categories are setting the full
list of selected category IDs to ensure these are filtered out from the
list of choices. This can result in a long URL that exceeds the maximum
length.
This reverts commit 0f4520867b.
This has led to two problems:
1. An incompatibility with Cloudflare's "auto minify" feature. They've deprecated this feature because of incompatibility with modern JS syntax. But unfortunately it will remain enabled on existing properties until 2024-08-05.
2. Discourse fails to boot in Safari 15. This is strange, because Safari does support all the required features in our production JS bundles. Even more strangely, things start working as soon as you open the developer tools. That suggests the cause could be a Safari bug rather than a simple incompatibility.
Reverting while we work out a path forward on both those issues.
This commit switches `DiscourseIpInfo.mmdb_download` to use the
permalinks supplied by MaxMind to download the MaxMind databases as
specified in
https://dev.maxmind.com/geoip/updating-databases#directly-downloading-databases
which states:
```
To directly download databases, follow these steps:
1. In the "Download Links" column, click "Get Permalink(s)" for the desired database.
2. Copy the permalink(s) provided in the modal window.
3. Provide your account ID and your license key using Basic Authentication to authenticate.
```
Previously we were downloading from `https://download.maxmind.com/app/geoip_download`, but this is not
documented anywhere in MaxMind's docs, so this URL can in theory break
in the future without warning. Therefore, we are taking a proactive
approach to download the databases from MaxMind the recommended way
instead of relying on a hidden URL. This old way of downloading the
databases with only a license key will be deprecated in 3.3 and be
removed in 3.4.
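A rough sketch of a permalink-based download using Basic Authentication, as described in the quoted instructions (the permalink URL shown here is a placeholder):
```ruby
require "net/http"
require "uri"

# Sketch: download a database from a MaxMind permalink using Basic Authentication.
uri = URI("https://download.maxmind.com/geoip/databases/GeoLite2-City/download?suffix=tar.gz")

request = Net::HTTP::Get.new(uri)
request.basic_auth(GlobalSetting.maxmind_account_id, GlobalSetting.maxmind_license_key)

Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request) do |response|
    File.open("GeoLite2-City.tar.gz", "wb") do |file|
      response.read_body { |chunk| file.write(chunk) }
    end
  end
end
```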
In 95a82d608d, we lowered the default for
`Onebox.options.max_download_kb` from 10mb to 2mb for security hardening
purposes. However, this resulted in multiple bug reports where seemingly
normal URLs stopped being oneboxed. It turns out that lowering
`Onebox.options.max_download_kb` resulted in `Onebox::Helpers::DownloadTooLarge` being raised
more often for more URLs in `Onebox::Helpers.fetch_response` which
`Onebox::Helpers.fetch_html_doc` relies on. When
`Onebox::Helpers::DownloadTooLarge` is raised in
`Onebox::Helpers.fetch_response`, we throw away whatever response body
which we have already downloaded at that point. This is not ideal
because Nokogiri can parse incomplete HTML documents and there is a
really high chance that the incomplete HTML document still contains the
information which we need for oneboxing.
Therefore, this commit updates `Onebox::Helpers.fetch_html_doc` to not
throw away the response body when the size of the response body exceeds
`Onebox.options.max_download_size`. Instead, we just take whatever
response which we have and get Nokogiri to parse it.
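A simplified sketch of the approach (assumed names, not Onebox's actual code): stop reading once the limit is exceeded, but keep the partial body and hand it to Nokogiri instead of discarding it.
```ruby
require "net/http"
require "nokogiri"

# Sketch: fetch at most max_bytes of a page, then parse whatever was downloaded.
def fetch_html_doc(uri, max_bytes)
  body = +""

  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request_get(uri.request_uri) do |response|
      response.read_body do |chunk|
        body << chunk
        break if body.bytesize > max_bytes # too large: keep what we already have
      end
    end
  end

  Nokogiri::HTML(body) # Nokogiri can parse incomplete HTML documents
end
```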
This can happen for various reasons including rate limiting and middleware bugs. This should resolve the warning we're seeing in the logs:
```
RequestTracker.get_data failed : NoMethodError : undefined method `[]' for nil:NilClass
```
decorator-transforms (https://github.com/ef4/decorator-transforms) is a modern replacement for babel's plugin-proposal-decorators. It provides a decorator implementation using modern browser features, without needing to enable babel's full suite of class feature transformations. This improves the developer experience and performance.
In local testing with Google's 'tachometer' tool, this reduces Discourse's 'init-to-render' time by around 3-4% (230ms -> 222ms).
It reduces our initial gzip'd JS payloads by 3.2% (2.43MB -> 2.35MB), or 7.5% (14.5MB -> 13.4MB) uncompressed.
This keeps coming up in user testing as something
we want to get rid of. The `navigation_menu` setting
has been set to sidebar by default for some time now,
and we are rolling out admin sidebar widely. It just
doesn't make sense to let people turn this off in
the first step of the wizard -- we _want_ people to
use the sidebar.
Whenever a post has already failed "lightweight" validations, we skip all the expensive validations (those that cook the post or run SQL queries) so that we reply as soon as possible.
Also skip validating polls when there's no "[/poll]" in the raw.
Internal ref - t/115890
When running `rake uploads:regenerate_missing_optimized`,
a `Discourse::InvalidAccess` will be raised if an SVG
file is being processed as `OptimizedImage.prepend_decoder!`
doesn't support the svg extension. This commit simply copies
the original SVG file as the thumbnail, just like currently
`OptimizedImage.create_for` does.
Ignored columns can only be dropped once their associated post-deploy
migration has been promoted to a regular migration. This is because
Discourse doesn't rely on a schema file to set up a brand new
database, and thus the column information will be loaded by the
application before the post-deploy migration runs.
- Use 'cheap-source-map' webpack config on low-memory machines
This results in worse quality sourcemaps in browser dev tools, but it significantly reduces memory use in our webpack build. In approximate local testing it drops from 1100mb to 590mb. This should make the rebuild process on low-memory machines much faster and less likely to trigger OOM errors.
In development, and on higher-memory machines, the higher-quality 'source-map' option is maintained.
- Disable Webpack's built-in `minimize` feature. Embroider already applies Terser after the webpack build is complete. There is no need to double-minimize the output.
- Update ember-cli-progress-ci to print to stderr instead of stdout. For some reason, pups (used by discourse_docker) buffers the stdout of commands and only prints when they are finished. stderr does not have this same limitation, so switching will mean sysadmins can see the progress of the ember build in real-time.
Given the number of variables it's hard to promise exact numbers. But, in my tests on a DO droplet with 1GB RAM (+2GB swap), this reduced the `ember build` portion of a `./launcher rebuild app` from ~50 minutes to ~15 minutes.
This commit adds an `isValidUrl` helper function to the context in
which theme migrations are run. This helper function is to make it
easier for theme developers to check if a string is a valid URL or path
when writing theme migrations. This can be helpful in cases when
migrating a string based setting to `type: objects` which contain `type:
string` properties with URL validations enabled.
This commit also introduces the `UrlHelper.is_valid_url?` method
which actually checks that the URL string is of the valid format instead of
only checking if the URL string is parseable which is what `UrlHelper.relaxed_parse` does
and is not sufficient for our needs.
This is a follow up of 5fcb7c262d
It was missing the case where secure uploads is enabled, which creates a copy of the upload no matter what.
So this checks for the original_sha1 of the uploads as well when checking for duplicates.
This commit introduces a few changes as a result of
customer issues with finding why a topic was relisted.
In one case, if a user edited the OP of a topic that was
unlisted and hidden because of too many flags, the topic
would get relisted by directly changing topic.visible,
instead of going via TopicStatusUpdater.
To improve tracking we:
* Introduce a visibility_reason_id to topic which functions
in a similar way to hidden_reason_id on post, this column is
set from the various places we change topic visibility
* Fix Post#unhide! which was directly modifying topic.visible,
instead we use TopicStatusUpdater which sets visibility_reason_id
and also makes a small action post
* Show the reason topic visibility changed when hovering the
unlisted icon in topic status on topic titles
This commit fixes a bug in theme settings migrations where values of `objects` typed theme settings aren't passed to migrations even when there are overriding values for those settings. What causes this bug is that, when creating the hash that contains all the overridden settings and will be passed to migrations, the values of `objects` typed settings are incorrectly retrieved from the `value` column (which is always nil for `objects` type) instead of `json_value`. `objects` settings are different from all other types in that they store their values in the `json_value` column and they need to be special-cased when retrieving their values.
Our 'page_view_crawler' / 'page_view_anon' metrics are based purely on the User Agent sent by clients. This means that 'badly behaved' bots which are imitating real user agents are counted towards 'anon' page views.
This commit introduces a new method of tracking visitors. When an initial HTML request is made, we assume it is a 'non-browser' request (i.e. a bot). Then, once the JS application has booted, we notify the server to count it as a 'browser' request. This reliance on a JavaScript-capable browser matches up more closely to dedicated analytics systems like Google Analytics.
Existing data collection and graphs are unchanged. Data collected via the new technique is available in a new 'experimental' report.
This commit fixes a bug in the `themes:update` rake task which resulted
in the ActiveRecord transaction not being rolled back when an error was
encountered. The transaction was first introduced in
7f0682f4f2 which changed a `begin..rescue`
block to `transaction do..rescue`. The problem is that this change
prevented the transaction from ever rolling back, as the code block
looks something like this:
```
transaction do
  begin
    update_theme
  rescue => e
    # suppress error
  end
end
```
From the transaction's point of view, it will never roll back even if
an error was encountered when updating the remote theme because it will
never see the error.
Instead we should have done something like this if we wanted to suppress
the errors encountered while still ensuring that the transaction is
rolled back.
```
begin
  transaction do
    update_theme
  end
rescue => e
  # suppress error
end
```
This is essential for us to determine which site is encountering an
error while updating remote themes. We are also including the theme's id
because themes can have the same name.
This service-worker caching functionality was disabled by default in 1c58395bca, and the setting to re-enable was marked as experimental. Now we are dropping all the related logic.
- Run the CSP-nonce-related middlewares on the generated response
- Fix the readonly mode checking to avoid empty strings being passed (the `check_readonly_mode` before_action will not execute in the case of these re-dispatched exceptions)
- Move the BlockRequestsMiddleware cookie-setting to the middleware, so that it is included even for unusual HTML responses like these exceptions
LinkedIn has grandfathered its old OAuth2 provider. This can only be used by existing apps. New apps have to use the new OIDC provider.
This PR adds a linkedin_oidc provider to core. This will exist alongside the discourse-linkedin-auth plugin, which will be kept for those still using the deprecated provider.
For e-mails, secure uploads redacts all secure images and later uses the access control post to re-attach allowed ones. We pass the ID of this post through the X-Discourse-Post-Id header. As the name suggests, this assumes there's only ever one access control post. This is not true for activity summary e-mails, as they summarize across posts.
This adds a new header, X-Discourse-Post-Ids, which is used the same way as the old header, but also works for the case where an e-mail is associated with multiple posts.
Automatically add `moderators` and `admins` auto groups to specific site settings.
In the new group-based permissions systems, we just want to check the user’s groups since it more accurately reflects reality
Affected settings:
- tag_topic_allowed_groups
- create_tag_allowed_groups
- send_email_messages_allowed_groups
- personal_message_enabled_groups
- here_mention_allowed_groups
- approve_unless_allowed_groups
- approve_new_topics_unless_allowed_groups
- skip_review_media_groups
- email_in_allowed_groups
- create_topic_allowed_groups
- edit_wiki_post_allowed_groups
- edit_post_allowed_groups
- self_wiki_allowed_groups
- flag_post_allowed_groups
- post_links_allowed_groups
- embedded_media_post_allowed_groups
- profile_background_allowed_groups
- user_card_background_allowed_groups
- invite_allowed_groups
- ignore_allowed_groups
- user_api_key_allowed_groups
This is to enable :array type attributes for Contract
attributes in services. This is a followup to the move
of services from chat to core here:
cab178a405
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>
This SQL tries to insert as much data as possible into the `user_stats` table by either calculating or approximating stats based on existing data. It also fixes an error in the calculation of `reply_count`, which mistakenly contained all posts, not just replies.
This change also disables some steps in the `import:ensure_consistency` rake task by setting the `SKIP_USER_STATS` env variable. Otherwise, the rake task will overwrite the calculated data in the `user_stats` table with inaccurate data. I'm not changing or removing the logic from the rake task yet because other bulk import scripts seem to depend on it.
This modifier allows plugins to alter the outcome of
`should_secure_uploads?` on a Post record, for cases when
plugins need post-attached uploads to always be secure (or
not secure) in specific scenarios.
This commit changes enum typed theme objects property to be optional.
Previously, an enum typed property was always required, but we have found
that this might not be ideal, so we want to change it.
This method name is a bit confusing; with_secure_uploads implies
it may return a block or something with the uploads of the post,
and has_secure_uploads implies that it's checking whether the post
is linked to any secure uploads.
should_secure_uploads? communicates the true intent of this method --
which is to say whether uploads attached to this post should be
secure or not.
This commit changes `DiscourseIpInfo.mmdb_download` to use `File.join`
instead of `URI.join` when `GlobalSetting.maxmind_mirror_url` has been
configured. This is necessary because `URI.join` does not work the way I
expected it to work when I implemented it previously.
`URI.join("http://www.example.com/mirror", "test.tar.gz")` results in
`http://www.example.com/test.tar.gz` instead of our expected
`http://www.example.com/mirror/test.tar.gz`. For our simple use case
here, `File.join` is sufficient.
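For reference, the difference in behaviour looks like this (standard Ruby behaviour):
```ruby
require "uri"

URI.join("http://www.example.com/mirror", "test.tar.gz").to_s
# => "http://www.example.com/test.tar.gz" (the relative reference replaces the last path segment)

File.join("http://www.example.com/mirror", "test.tar.gz")
# => "http://www.example.com/mirror/test.tar.gz"
```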
We will be collecting the logo URL and the site's default locale values along with existing basic details to display the site on the Discourse Discover listing page. It will be included only if the site is opted-in by enabling the "`include_in_discourse_discover`" site setting.
Also, we are no longer going to use the `about.json` and `site/statistics.json` endpoints to retrieve this data. We will be using only the `site/basic-info.json` endpoint.
Doesn't actually seem to be used by any of our formatters, but let's send the proper data anyway for future-proofing. Followup to ff6cb1bc05 and 8098876bfa
When a user is manually deactivated, they should not be deleted by our background job that purges inactive users.
In addition, site settings keywords should accept an array of keywords.
In this PR, all references in the UI to the word "`upgrade`" are changed to "`update`". This is to differentiate the update process in self-hosted sites from the plan "upgrade" process in hosted sites.
Follow-up to the PR: https://github.com/discourse/docker_manager/pull/208
Why this change?
This allows downloading the MaxMind databases from a mirror in cases
where downloading directly from MaxMind's API endpoint is problematic
due to API limits.
Why this change?
We currently support `GlobalSetting.refresh_maxmind_db_during_precompile_days` which
should cache the maxmind databases on disk for the configured number of
days before it downloads the databases from maxmind again via the API.
This was previously added to help us avoid hitting the API rate limit from maxmind.
However, there was a bug in `copy_maxmind` when we copied the latest
downloaded database to the cache directory. In particular, `FileUtils.cp` was called with
`preserve: true`, which would preserve the modified time of the file
being copied. This is problematic because downloading the database from
MaxMind on 2 April 2024 can give us a file with an mtime of 29 March
2024. If `GlobalSetting.refresh_maxmind_db_during_precompile_days` is
set to `2` for example, the cache will never be used since we will
think that the file has been downloaded for more than 2 days in our
checks.
What is the fix here?
While we want to preserve the owner and group of the file, we do not
want to preserve the modified time and hence we will call
`FileUtils.touch` when copying the file.
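A minimal sketch of the fix described above (variable names are illustrative):
```ruby
require "fileutils"

# Copy the freshly downloaded database into the cache directory. `preserve: true`
# keeps ownership/permissions but also keeps the original mtime...
FileUtils.cp(downloaded_db_path, cached_db_path, preserve: true)

# ...so reset the mtime to "now" to make the cache-freshness check meaningful.
FileUtils.touch(cached_db_path)
```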
Why this change?
For a schema like this:
```
schema = {
name: "section",
properties: {
category_property: {
type: "categories",
required: true,
},
},
}
```
When the value of the property is set to an empty array, we were not
raising an error, which we should have, because the property is marked as
required.
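A hypothetical sketch of the missing check (method and variable names are assumptions):
```ruby
# Sketch: treat an empty array as a missing value for a required property.
def validate_required(property_name, property, value)
  return unless property[:required]

  if value.nil? || (value.is_a?(Array) && value.empty?)
    errors << "The property '#{property_name}' must be present"
  end
end
```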
Why this change?
This is a follow-up to 86b2e3a.
Basically, we want to allow people to select more than 1 group as well.
What does this change do?
1. Change `type: group` to `type: groups` and support `min` and `max`
validations for `type: groups`.
2. Fix the `<SchemaThemeSetting::Types::Groups>` component to support the
`min` and `max` validations and switch it to use the `<GroupChooser>` component
instead of the `<ComboBoxComponent>` component which previously only supported
selecting a single group.
Previously the problem check registry simply looked at the subclasses of ProblemCheck. This was causing some confusion in environments where eager loading is not enabled, as the registry would appear empty as a result of the classes never being referenced (and thus never loaded.)
This PR changes the approach to a more explicit one. I followed other implementations (bookmarkable and hashtag autocomplete.) As a bonus, this now has a neat plugin entry point as well.
When a topic fails to be created due to the user not having permission to add tags to the topic, the error message that you get is `There was an error tagging the topic` which is very generic and doesn't explain where/what the problem is. This commit adds a clearer error message for this scenario.
The `TopicCreator` class has a `skip_validations` option that can force-create a topic without performing permission checks or validation rules. However, at the moment it doesn't skip validations that are related to tags, so topics that are created by the system or by some script can still fail if they use tags. This commit makes the `TopicCreator` class skip all tags-related checks if `skip_validations` is specified.
Internal topic: t/124280.
Followup to 0bbca318f2:
rather than making developers provide the plugin path
name (which may not always be the same depending on
dir names and git cloning etc.), we can infer the plugin
dir from the caller in `plugin_file_from_fixtures`.
Why this change?
This is a follow-up to 86b2e3aa3e.
Basically, we want to allow people to select more than 1 category as well.
What does this change do?
1. Change `type: category` to `type: categories` and support `min` and `max`
validations for `type: categories`.
2. Fix the `<SchemaThemeSetting::Types::Categories>` component to support the
`min` and `max` validations and switch it to use the `<CategorySelector>` component
instead of the `<CategoryChooser>` component which only supports selecting one category.
At the moment, all topic `?page=` views are served with exactly identical page titles. If you search for something which is mentioned many times in the same Discourse topic, this makes for some very hard-to-understand search results! All the result titles are exactly the same, with no indication of why there are multiple results showing.
This commit adds a `- Page #` suffix to the titles in this situation. This lines up with our existing strategy for topic-list pagination.
When crawlers visit a post-specific URL like `/t/-/{topic-id}/{post-number}`, we use the canonical to direct them to the appropriate crawler-optimised paginated view (e.g. `?page=3`).
However, analysis of google results shows that the post-specific URLs are still being included in the index. Google doesn't tell us exactly why this is happening. However, as a general rule, 'A large portion of the duplicate page's content should be present on the canonical version'.
In our previous implementation, this wasn't 100% true all the time. That's because a request for a post-specific URL would include posts 'surrounding' that post, and won't exactly conform to the page boundaries which are used in the canonical version of the page. Essentially: in some cases, the content of the post-specific pages would include many posts which were not present on the canonical paginated version.
This commit aims to resolve that problem by simplifying the implementation. Instead of rendering posts surrounding the target post_number, we will only render the target post, and include a link to 'show post in topic'. With this new implementation, 100% of the post-specific page content will be present on the canonical paginated version, which will hopefully mean google reduces their indexing of the non-canonical post-specific pages.
To generate letter avatars, we’re currently using the ImageMagick suite
and we’re using the Helvetica font family. However, that font isn’t
shipped anymore in the latest stable version of Debian (Bookworm).
Instead it seems to have been replaced by the Nimbus font. The rendering
is extremely similar (not to say it’s the same thing) so it shouldn’t be
noticeable.
That change is necessary for us to upgrade our docker images to Debian
Bookworm.
This is so the CI output on GitHub actions isn't showing
tons and tons of unnecessary log data every time you want
to see the important thing, which is the actual test failure.
We were previously using the `EMBER_ENV=production` environment variable, which appears to produce the same output. But, some parts of ember-cli don't seem to support it, which leads to a confusing 'Environment: development' being printed on the console.
This commit adds `-prod` by default, which is the more common way to invoke ember-cli for production builds.
Why this change?
While working on the tag selector for the theme object editor, I
realised that there is an extremely high possibility that users might want to select
more than one tag. By supporting the ability to select more than one
tag, it also means that we get support for a single tag for free as
well.
What does this change do?
1. Change `type: tag` to `type: tags` and support `min` and `max`
validations for `type: tags`.
2. Fix the `<SchemaThemeSetting::Types::Tags>` component to support the
`min` and `max` validations.
Why this change?
Fortunately or unfortunately in Discourse core, we mainly use `Tag#name`
to look up tags and not its id. This assumption is built into the
frontend as well so we need to use the tag's name instead of the id
here.
Followup to 3094f32ff5,
this fixes an issue with the logic in that commit where
we were returning false if any of the conditionals here
were false regardless of the type of `obj`, when we should
have only done this if `obj` was a `PostAction`. This led
us to return false in cases where we were checking whether the
user could edit their own post as anon.
When categories are loaded by the frontend, the parent category is
looked up by ID and the `parentCategory` is set with the result. If the
categories returned are not in order, the parent category may be missing.
Why this change?
This ENV allows the brotli compression quality to be configurable such
that one can opt for a higher/lower level of compression based on their
preferences.
This commit changes the API for registering the plugin config
page nav configuration from a server-side to a JS one;
there is no need for it to be server-side.
It also makes some changes to allow for 2 different ways of displaying
navigation for plugin pages, depending on complexity:
* TOP - This is the best mode for simple plugins without a lot of different
custom configuration pages, and it reuses the grey horizontal nav bar
already used for admins.
* SIDEBAR - This is better for more complex plugins; likely this won't
be used in the near future, but it's readily available if needed
There is a new AdminPluginConfigNavManager service too to manage which
plugin the admin is actively viewing, otherwise we would have trouble
hiding the main plugin nav for admins when viewing a single plugin.
For some identity providers, 10 minutes isn't much time for a user to complete authentication/registration on the identity provider. Increasing the default to 30 minutes should help in those situations. The nonce is still tied to a single browser session, so there is no material impact on security.
This commit makes it so the site settings filter controls and
the list of settings input editors themselves can be used elsewhere
in the admin UI outside of /admin/site_settings
This allows us to provide more targeted groups of settings in different
UI areas where it makes sense to provide them, such as on plugin pages.
You could open a single page for a plugin where you can see information
about that plugin, change settings, and configure it with custom UIs
in the one place.
In future we will do this in "config areas" for other parts of the
admin UI.
This commit moves the generation of category background CSS from the
server side to the client side. This simplifies the server side code
because it does not need to check which categories are visible to the
current user.
Since we are introducing new ways to search in Discourse, like the AI
semantic search using embeddings, posts can be part of a search result
list without having any search data.
Since the code path already handles this, we only need to add a safety
check when accessing the post_search_data.
In AdminDashboardData we have a bunch of problem checks implemented as methods on that class. This PR absolves it of the responsibility by promoting each of those checks to a first class ProblemCheck. This way each of them can have their own priority and arbitrary functionality can be isolated in its own class.
Think "extract class" refactoring over and over. Since they were all moved we can also get rid of the @@problem_syms class variable which was basically the old version of the registry now replaced by ProblemCheck.realtime.
In addition AdminDashboardData::Problem value object has been entirely replaced with the new ProblemCheck::Problem (with compatible API).
Lastly, I added some RSpec matchers to simplify testing of problem checks and provide helpful error messages when assertions fail.
Why this change?
Previously, we identified that ActiveRecord's PostgreSQL adapter
executes 3 db queries each time a new connection is created. The 3 db
queries were identified when we looked at the `pg_stat_statements` table
on one of our multisite production clusters. At that time, the hypothesis
was that because we were aggressively reaping and creating connections,
the db queries executed each time a connection is created were wasting
resources on our database servers. However, we didn't see the needle
move much on our servers after deploying the patch, so we have decided to
drop this patch as it makes it harder for us to upgrade ActiveRecord in
the future.
This commit adds new plugin show routes (`/admin/plugins/:plugin_id`) as we move
towards every plugin having a consistent UI/landing page.
As part of this, we are introducing a consistent way for plugins
to show an inner sidebar in their config page, via a new plugin
API `register_admin_config_nav_routes`
This accepts an array of links with a label/text, and an
ember route. Once this commit is merged we can start the process
of conforming other plugins to follow this pattern, as well
as supporting a single-page version of this for simpler plugins
that don't require an inner sidebar.
Part of /t/122841 internally
Previously, if the sso= payload was invalid Base64, but signed correctly, there would be no useful log or error. This commit improves things by:
- moving the base64 check before the signature checking so that it's properly surfaced
- split the ParseError exception into PayloadParseError and SignatureError
- add user-facing errors for both of those
- add/improve spec for both
There are a couple of reasons for this.
The first one is practical, and related to eager loading. Since /lib is not eager loaded, when the application boots, ProblemCheck["identifier"] will be nil because the child classes aren't loaded.
The second one is more conceptual. There turns out to be a lot of inter-dependencies between the part of the problem check system that live in /app and the parts that live in /lib, which probably suggests it should all go in /app.
Why this change?
On the `/admin/customize/themes/<:id>` route, we allow admins to edit
all settings via a settings editor. Prior to this change, trying to edit
and save a typed objects theme settings will result in an error on the
server.
Why this change?
On a slow network, using the `AceEditor` component will result in a blob
of text being shown first before being swapped out with the `ace.js`
editor after it has completed loading.
There is also a problem when setting the theme for the editor which
would result in a "flash" as reported in
https://github.com/ajaxorg/ace/issues/3286. To avoid this, we need to
load the theme js file before displaying the editor.
What does this change do?
1. Adds a loading spinner and sets a `.hidden` class on `div.ace`.
2. Once all the relevant scripts and initialization are done, we will
then remove the loading spinner and remove `div.ace`.
This commit fixes two issues:
1. The wrong exception was being printed as the 'cause' in turbo_rspec output. This was happening because RSpec [expects exceptions to be subclasses of `Exception`](d6e320dc11/lib/rspec/core/formatters/exception_presenter.rb (L102)). This commit resolves the issue by replacing the `FakeException` `Struct` with a subclass of `Exception`.
2. The `full_cause_backtrace` option we set in `rails_helper.rb` does not carry through to the RSpec formatters running in the turbo_rspec reporter process. To fix that, this commit duplicates the necessary config in `lib/turbo_tests.rb`.
Example before - note that the cause is a duplicate of the original exception, and only has three lines of backtrace:
```
Failure/Error: raise capybara_timeout_error
CapybaraTimeoutExtension::CapybaraTimedOut:
This spec passed, but capybara waited for the full wait duration (4s) at least once. This will slow down the test suite. Beware of negating the result of selenium's RSpec matchers.
[Screenshot Image]: /Users/david/discourse/discourse/tmp/capybara/failures_r_spec_example_groups_glimmer_header_when_cmd_f_keyboard_shortcut_pressed_when_within_a_topic_with_less_than20_posts_does_not_open_search_484.png
~~~~~~~ JS LOGS ~~~~~~~
~~~~~ END JS LOGS ~~~~~
# ./spec/rails_helper.rb:372:in `block (3 levels) in <top (required)>'
# ./spec/rails_helper.rb:472:in `block (2 levels) in <top (required)>'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/webmock-3.23.0/lib/webmock/rspec.rb:39:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# CapybaraTimeoutExtension::CapybaraTimedOut:
# This spec passed, but capybara waited for the full wait duration (4s) at least once. This will slow down the test suite. Beware of negating the result of selenium's RSpec matchers.
# ./spec/rails_helper.rb:372:in `block (3 levels) in <top (required)>'
# ./spec/rails_helper.rb:472:in `block (2 levels) in <top (required)>'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/webmock-3.23.0/lib/webmock/rspec.rb:39:in `block (2 levels) in <top (required)>'
```
After - note correct causing exception, and the full backtrace 🎉
```
Failure/Error: raise capybara_timeout_error
CapybaraTimeoutExtension::CapybaraTimedOut:
This spec passed, but capybara waited for the full wait duration (4s) at least once. This will slow down the test suite. Beware of negating the result of selenium's RSpec matchers.
[Screenshot Image]: /Users/david/discourse/discourse/tmp/capybara/failures_r_spec_example_groups_glimmer_header_when_cmd_f_keyboard_shortcut_pressed_when_within_a_topic_with_less_than20_posts_does_not_open_search_61.png
~~~~~~~ JS LOGS ~~~~~~~
~~~~~ END JS LOGS ~~~~~
# ./spec/rails_helper.rb:372:in `block (3 levels) in <top (required)>'
# ./spec/rails_helper.rb:472:in `block (2 levels) in <top (required)>'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/webmock-3.23.0/lib/webmock/rspec.rb:39:in `block (2 levels) in <top (required)>'
# ------------------
# --- Caused by: ---
# Capybara::ExpectationNotMet:
# expected to find css ".search-menu .search-menu-panel" but there were no matches
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:112:in `block in assert_selector'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:869:in `block in _verify_selector_result'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/base.rb:84:in `synchronize'
# ./spec/rails_helper.rb:345:in `synchronize'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:868:in `_verify_selector_result'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:110:in `assert_selector'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:39:in `block in has_selector?'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:902:in `make_predicate'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/node/matchers.rb:39:in `has_selector?'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/capybara-3.40.0/lib/capybara/session.rb:774:in `has_selector?'
# ./spec/system/page_objects/pages/search.rb:46:in `has_search_menu_visible?'
# ./spec/system/header_spec.rb:206:in `block (4 levels) in <main>'
# ./spec/rails_helper.rb:472:in `block (2 levels) in <top (required)>'
# /Users/david/.rvm/gems/ruby-3.2.1/gems/webmock-3.23.0/lib/webmock/rspec.rb:39:in `block (2 levels) in <top (required)>'
```
Why this change?
When creating a new theme setting that does not have a corresponding row
in the `theme_settings` table, we end up writing to the database twice
because `ActiveRecord::Base#save!` is called once before the `value`
or `json_value` column is updated again with another database query with
another call to `ActiveRecord::Base#save!`.
What does this change do?
Adds the column to be updated as an argument to the `ActiveRecord::Base#create!`
method call so that we only have one write query to the database.
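Conceptually, the change looks something like this (attribute names are illustrative, not the exact code):
```ruby
# Before: two writes (create the row, then update the value column).
record = ThemeSetting.create!(theme: theme, name: name, data_type: data_type)
record.update!(value: value)

# After: one write, passing the column being set to create! directly.
ThemeSetting.create!(theme: theme, name: name, data_type: data_type, value: value)
```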
Having minitest as a direct dependency causes ruby-lsp to use it as our test runner (per https://github.com/Shopify/ruby-lsp/blob/d1da8858a1/lib/ruby_lsp/requests/support/dependency_detector.rb#L40-L55). This makes VSCode's test explorer incorrectly display Minitest 'run' buttons above all our tests.
We were only using it in `emoji.rake`... and that wasn't even working with the latest version of Minitest. This commit refactors `emoji.rake` to work without minitest, and removes the dependency.
Sometimes we add scripts outside of Rails. This commit provides a way to generate a nonce placeholder even if you don't have access to an ApplicationController instance.
Forcing a thread will work even in channels which don't have `threading_enabled` or in direct message channels.
For now this feature is only available through the `ChatSDK`:
```ruby
ChatSDK::Message.create(in_reply_to_id: 1, guardian: guardian, raw: "foo bar baz", channel_id: 2, force_thread: true)
```
Why this change?
The `/admin/customize/themes/:id/schema/name` route is a work in
progress but we want to be able to start navigating to it from the
`/admin/customize/themes/:id` route.
What does this change do?
1. Move `adminCustomizeThemes.schema` to a child route of
`adminCustomizeThemes.show`. This is because we need the model
from the parent route and if it isn't a child route we end up
having to load the theme model again from the server.
1. Add the `objects_schema` attribute to `ThemeSettingsSerializer`
1. Refactor `SiteSettingComponent` to be able to render a button
so that we don't have to hardcode the button rendering into the
`SiteSettings::String` component
Why this change?
Prior to this change, we were using `URI.regexp`, which was too strict as it
doesn't allow a URL path.
What does this change do?
Just parse the string using `URI.parse`, and if it doesn't raise an error
we consider the string to be a valid URL.
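A minimal sketch of the new check (helper name is an assumption):
```ruby
require "uri"

# Sketch: a string counts as a valid URL (or path) if URI.parse accepts it.
def valid_url?(string)
  URI.parse(string)
  true
rescue URI::InvalidURIError
  false
end
```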
When a post is created by an incoming email, we show
an envelope icon on it which then opens a modal with the
raw email contents. Previously this was staff (admin+mod)
only, but now this commit adds the `view_raw_email_allowed_groups`
site setting, so any group can be added to give users permission
to see this.
Currently, the trust level method calculates the trust level based on the maximum value from:
- locked trust level
- group automatic trust level
- previously granted trust level by admin
https://github.com/discourse/discourse/blob/main/lib/trust_level.rb#L33
Let's say the user belongs to groups with automatic trust level 1 and in the meantime meets all criteria to get trust level 2.
Each time a user is removed from a group with automatic trust_level 1, they will be downgraded to trust_level 1 and then promoted back to trust_level 2
120a2f70a9/lib/promotion.rb (L142)
This will cause duplicated promotion messages.
Therefore, we have to check if the user meets the criteria before downgrading.
## What?
Depending on the email software used, when you reply to an email that has some attachments, they will be sent along, since they're part of the embedded (replied to) email.
When Discourse processes the reply as an incoming email, it will automatically add all the (valid) attachments at the end of the post, including those that were sent as part of the "embedded reply".
This generates posts in Discourse with duplicate attachments 🙁
## How?
When processing attachments of an incoming email, before we add it to the bottom of the post, we check it against all the previous uploads in the same topic. If there already is an `Upload` record, it means that it's a duplicate and it is _therefore_ skipped.
All the inline attachments are left untouched since they're more likely new attachments added by the sender.
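A rough sketch of the duplicate check (helper names such as `uploads_in_topic` and `add_attachment_to_post` are assumptions): compute the attachment's SHA1 and skip it if an upload with that `sha1` or `original_sha1` is already referenced in the topic.
```ruby
# Sketch: skip attachments that duplicate an upload already present in the topic.
sha1 = Upload.generate_digest(attachment_file.path)

duplicate =
  uploads_in_topic.any? { |upload| upload.sha1 == sha1 || upload.original_sha1 == sha1 }

add_attachment_to_post(attachment_file) unless duplicate
```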
Before this change, if the "Plugins backend" task on GitHub CI
failed, we would get a huge amount of extra output at the end
just to show the command that rake ran which failed (the bin/turbo_rspec
command). This is useless and just makes it hard to see the failing
specs. If you need the full command, it's already output at the
top of the "Plugins backend" task in the GitHub CI.
When we send a bookmark reminder, there is an option to delete
the underlying bookmark. The Notification record stays around.
However, if you want to filter your notifications user menu
to only bookmark-based notifications, we were not showing unread
bookmark notifications for deleted bookmarks.
This commit fixes the issue _going forward_ by adding the
bookmarkable_id and bookmarkable_type to the Notification data,
so we can look up the underlying Post/Topic/Chat::Message
for a deleted bookmark and check user access in this way. Then,
it doesn't matter if the bookmark was deleted.
Why this change?
This is a follow up to 408d2f8e69. When
`ActiveRecord::ConnectionAdapters::PostgreSQLAdapter#reload_type_map`
is called, we need to clear the type map cache, otherwise migrations
adding an array column will end up throwing errors.
Why this change?
This patch has been added to address the problems identified in https://github.com/rails/rails/issues/35311. For every
new connection created using the PostgreSQL adapter, 3 queries are executed to fetch type map information from the `pg_type`
system catalog, adding about 1ms overhead to every connection creation.
On multisite clusters where connections are reaped more aggressively, the 3 queries executed
account for a significant portion of CPU usage on the PostgreSQL cluster. This patch works around the problem by
caching the type map in a class level attribute to reuse across connections.
Prior to this fix, even when the user was not part of a group allowed to send PMs, we would show the prompt: "You've replied to ... X times, did you know you could send them a personal message instead?"
Why this change?
Before this change, the error messages returned when validating theme
settings of typed objects were an array of arrays instead of just an
array.
Why this change?
Previously in cac60a2c6b, I added support
for `type: "category"` for a property in the theme objects schema. This
commit extends that work to add support for the types `topic`,
`post`, `group`, `upload` and `tag`.
Why this change?
```
some_setting:
default: 0
type: string
```
A theme setting like the above will cause an error to be thrown on the
server when importing the theme because the default would be parsed as
an integer, which then causes an error when we are validating the
value of the setting.
What does this change do?
Convert the value to a string when working with string typed theme
settings.
Why this change?
This change adds validation for the default value for `type: objects` theme
settings when a setting theme field is uploaded. This helps the theme
author to ensure that the objects which they specify in the default
value adhere to the schema which they have declared.
When an error is encountered in one of the objects, the error
message will look something like:
`"The property at JSON Pointer '/0/title' must be at least 5 characters
long."`
We use a JSON Pointer to reference the property in the object, which is
what most JSON Schema validators use as well.
What does this change do?
1. This commit once again changes the shape of the hash returned by
`ThemeSettingsObjectValidator.validate`. Instead of using the
property name as the key as we did previously, we have decided to avoid
multiple levels of nesting and instead use a JSON Pointer as the key,
which helps to simplify the implementation.
2. Introduces `ThemeSettingsObjectValidator.validate_objects`, which
returns an array of validation error messages for all the objects
passed to the method.
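A hypothetical example of the new flat, JSON Pointer keyed shape (the pointers and messages are made up for illustration):

```ruby
# Errors keyed by JSON Pointer rather than nested under property names.
errors = {
  "/0/title" => ["must be at least 5 characters long"],
  "/1/links/0/url" => ["must be present"],
}
errors.each { |pointer, messages| puts "#{pointer}: #{messages.join(", ")}" }
```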
Before this commit, we had a yarn package set up in the root directory and also in `app/assets/javascripts`. That meant two `yarn install` calls and two `node_modules` directories. This commit merges them both into the root location, and updates references to node_modules.
A previous attempt can be found at https://github.com/discourse/discourse/pull/21172. This commit re-uses that script to merge the `yarn.lock` files.
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Users can hide their public profile and presence information by checking
“Hide my public profile and presence features” on the
`u/{username}/preferences/interface` page. In that case, we also don't
want to return user status from the server.
This work has been started in https://github.com/discourse/discourse/pull/23946.
The current PR fixes all the remaining places in Core.
Note that the actual fix is quite simple – a5802f484d.
But we had a fair amount of duplication in the code responsible for
the user status serialization, so I had to dry that up first. The refactoring
as well as adding some additional tests is the main part of this PR.
This PR adds a new scheduled problem check that simply tries to connect to the Twitter OAuth endpoint to check that it's working. It uses the default retry strategy of 2 retries, 30 seconds apart.
Why this change?
This regressed in 6e9fbb5bab because we
had a `request.xhr?` check before we decide to block requests. However,
there can be non-XHR requests which we need to block as well at the
end of each system test when `@@block_requests` is true.
This also reverts commit 6437f27f90.
Now forums can enroll their sites to be showcased in the Discourse [Discover](https://discourse.org/discover) directory. Once they enable the `include_in_discourse_discover` site setting to enroll their forum, the `CallDiscourseHub` job will ping the `api.discourse.org/api/discover/enroll` endpoint. The Discourse Hub will then fetch the basic details from the forum and add it to the review queue. If the site is approved, the forum details will be displayed on the `/discover` page.
Also, remove experimental setting and simply use top_menu for feature detection
This means that when people eventually enable the hot top menu, there will
be topics in it
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Why this change?
On CI, we have been seeing flaky system tests because ActiveRecord is
unable to checkout a connection. This patch is meant to help us debug
which thread is not returning the connection to the queue.
Example of timeout issue: https://github.com/discourse/discourse/actions/runs/8012541636/job/21888013082
Previously, problem checks were all added as either class methods or blocks in AdminDashboardData. Another set of class methods were used to add and run problem checks.
As of this PR, problem checks are promoted to first-class citizens. Each problem check gets its own class. This class contains the implementation for running the check, but also configuration items like retry strategies (for scheduled checks).
In addition, the parent class `ProblemCheck` also serves as a registry for checks. For example, we can get a list of all existing check classes through `ProblemCheck.checks`, or just the ones running on a schedule through `ProblemCheck.scheduled`.
After this refactor, the task of adding a new check is significantly simplified. You add a class that inherits ProblemCheck, you implement it, add a test, and you're good to go.
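A rough sketch of what a check might look like under the new structure; the retry attributes and the `problem`/`no_problem` helpers are assumptions for illustration, not the exact API:

```ruby
# Illustrative subclass: implementation plus configuration in one place.
class ProblemCheck::TwitterLogin < ProblemCheck
  self.max_retries = 2           # hypothetical retry configuration
  self.retry_after = 30.seconds

  def call
    twitter_endpoint_reachable? ? no_problem : problem
  end

  private

  def twitter_endpoint_reachable?
    # Stand-in for the real connectivity check.
    true
  end
end
```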
Add categories to the serialized search results together with the topics
when lazy load categories is enabled. This is necessary in order for the
results to be rendered correctly and display the category information.
Why this change?
The current shape of errors returns the error messages after they have been
translated, but there are cases where we want to customize the error
messages, and returning only translated error messages makes that
customization difficult. If we wish to have error messages in complete
sentences like "`some_property` property must be present in #link 1",
this is not possible at the moment with the current shape of the errors
we return.
What does this change do?
This change introduces the `ThemeSettingsObjectValidator::ThemeSettingsObjectErrors`
and `ThemeSettingsObjectValidator::ThemeSettingsObjectError` classes to
hold the relevant error key and i18n translation options.
Why this change?
This change supports a property of `type: category` in the schema that
is declared for a theme setting object. Example:
```
sections:
type: objects
schema:
name: section
properties:
category_property:
type: category
```
The value of a property declared as `type: category` will have to be a
valid id of a row in the `categories` table.
What does this change do?
Adds a property value validation step for `type: category`. Care has
been taken to ensure that we do not spam the database with a ton of
requests if there are a lot of category-typed properties. This is done by
walking through the entire object and collecting all the values for
category-typed properties, after which a single database query is
executed to determine which values are valid.
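A conceptual sketch of the batching; the traversal helper below is hypothetical, and only `Category` is a real model here:

```ruby
# Collect every category-typed value first, then validate them in one query.
category_ids = []
walk_properties(objects, schema) do |property, value| # hypothetical traversal helper
  category_ids << value if property[:type] == "category"
end

valid_ids = Category.where(id: category_ids.uniq).pluck(:id)
invalid_ids = category_ids.uniq - valid_ids # these fail validation
```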
Why this change?
The logic for validating a theme setting's value and default value was
not consistent as each part of the code would implement its own logic.
This is not ideal as the default value may be validated differently than
when we are setting a new value. Therefore, this commit seeks to
refactor all the validation logic for a theme setting's value into a
single service class.
What does this change do?
Introduce the `ThemeSettingsValidator` service class which holds all the
necessary helper methods required to validate a theme setting's value.
This commit moves some code out of UploadController#show_secure
so it can be reused in other controllers if a secure upload
needs to have permission checks run.
Why this change?
This commit updates `ThemeSettingsObjectValidator` to validate a
property's value against the validations listed in the schema.
For string types, `min_length`, `max_length` and `url` are supported.
For integer and float types, `min` and `max` are supported.
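A minimal sketch of the kinds of checks this implies; the method below is illustrative, not the actual `ThemeSettingsObjectValidator` code:

```ruby
require "uri"

# Illustrative validation of a single value against the supported keys.
def validation_errors(value, validations)
  errors = []
  case value
  when String
    min, max = validations[:min_length], validations[:max_length]
    errors << "must be at least #{min} characters long" if min && value.length < min
    errors << "must be at most #{max} characters long" if max && value.length > max
    errors << "must be a valid URL" if validations[:url] && value !~ URI::DEFAULT_PARSER.make_regexp(%w[http https])
  when Numeric
    errors << "must be larger than or equal to #{validations[:min]}" if validations[:min] && value < validations[:min]
    errors << "must be smaller than or equal to #{validations[:max]}" if validations[:max] && value > validations[:max]
  end
  errors
end

validation_errors("hi", min_length: 5) # => ["must be at least 5 characters long"]
validation_errors(11, min: 0, max: 10) # => ["must be smaller than or equal to 10"]
```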
Why this change?
This change adds property value validation to `ThemeSettingsObjectValidator`
for the following types: "string", "integer", "float", "boolean", "enum". Note
that this class is not being used anywhere yet and is still in
development.
This data is cached, so we don't want to include any site-specific logic in there. Let's just keep the old URL-collecting behaviour, and let it be stripped out by `CSP::Builder` at runtime.
The strict-dynamic CSP directive is supported in all our target browsers, and makes for a much simpler configuration. Instead of allowlisting paths, we use a per-request nonce to authorize `<script>` tags, and then those scripts are allowed to load additional scripts (or add additional inline scripts) without restriction.
This becomes especially useful when admins want to add external scripts like Google Tag Manager, or advertising scripts, which then go on to load a ton of other scripts.
All script tags introduced via themes will automatically have the nonce attribute applied, so it should be zero-effort for theme developers. Plugins *may* need some changes if they are inserting their own script tags.
This commit introduces a strict-dynamic-based CSP behind an experimental `content_security_policy_strict_dynamic` site setting.
Followup to 978d52841a
It's complicated...we have multiple "anonymous" user concepts
in core, and even two classes called the exact same thing --
AnonymousUser.
The first case is Guardian::AnonymousUser, which is used for
people who are browsing the forum without being authenticated.
The second case is the model AnonymousUser, which is used when
a user is liking or posting anonymously via allow_anonymous_likes
or allow_anonymous_posting site settings.
We will untangle this naming nightmare later on...but for the
time being, only authenticated users who are pretending to be
anonymous should be able to like posts if allow_anonymous_likes
is on.
Why this change?
This is a first pass at adding an objects validator whose main job is
to validate an object against a defined schema which we will support. In
this pass, we are simply validating that properties that have been marked
as required are present in the object.
The `-ping` option significantly speeds up the ImageMagick `identify` command per our testing and the [documentation](https://imagemagick.org/script/command-line-options.php#ping):
> -ping
> Efficiently determine these image characteristics: image number, the file name, the width and height of the image, whether the image is colormapped or not, the number of colors in the image, the number of bytes in the image, the format of the image (JPEG, PNM, etc.). Use +ping to ensure accurate image properties.
We already pass the `-ping` option in other places where the `identify` command is used, so it makes sense to use the option everywhere.
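For a quick comparison of the two invocations (illustrative; Discourse shells out through its own command helpers rather than Open3 directly):

```ruby
require "open3"

path = "image.jpg" # hypothetical local file

fast, _status = Open3.capture2("identify", "-ping", "-format", "%wx%h", path)
slow, _status = Open3.capture2("identify", "-format", "%wx%h", path)
# Both return the dimensions (e.g. "898x498"), but -ping skips decoding the
# full image data, which is what makes it noticeably faster.
```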
Internal topic: t/121431.
Why this change?
On CI, we have been seeing flaky system tests because ActiveRecord is
unable to checkout a connection. This patch is meant to help us debug
which thread is not returning the connection to the queue.
Fixes an issue where private topics that are quoted have an incorrectly formatted url when using a subfolder install.
This update returns a relative url that includes the base_path rather than a combination of base_url + base_path.
This commit changes `max_image_megapixels` to be used
as is without multiplying by 2 to give extra leeway.
We found in reality this was just causing confusion
for admins, especially with the already permissive
40MP default.
This reverts commit 767b49232e.
If anything else (e.g. the GTM integration) introduces a nonce/hash, then this change causes the splash screen JS to fail and makes sites unusable.
Removes duplication from LimitedEdit to see who can edit
posts, and also removes the old trust level setting check
since it's no longer necessary.
Also makes it so staff can always edit, since `can_edit_post?`
already has a staff escape hatch.
Why this change?
This commit introduces an experimental `type: objects` theme setting
which will allow theme developers to store a collection of objects as
JSON in the database. Currently, the feature is still in development and
this commit is simply setting up the ground work for us to introduce the
feature in smaller pieces.
What does this change do?
1. Adds a `json_value` column as `jsonb` data type to the `theme_settings` table.
2. Adds an `experimental_objects_type_for_theme_settings` site setting to
determine whether `ThemeSetting` records with the `objects` data
type can be created.
3. Updates `ThemeSettingsManager` to support read/write access from the
`ThemeSettings#json_value` column.
Followup fb087b7ff6
post_links_allowed_groups is an odd check tied to
unrestricted_link_posting? in PostGuardian, in that
it doesn't have an escape hatch for staff like most
of the rest of these group based settings.
It doesn't make sense to exclude admins or mods from
posting links, so just always allow them to avoid confusion.
Browsers will ignore unsafe-inline if nonces or hashes are included in the CSP. When unsafe-inline is enabled, nonces and hashes are not required, so we can skip them.
Our strong recommendation remains that unsafe-inline should not be used in production.
We've changed access settings to be group membership based rather than based on the TL value directly. We kept both conditions here while we updated any plugins and themes. It should now be safe to remove.
JS assets added by plugins via `register_asset` will be outside the `assets/javascripts` directory, and are therefore exempt from being transpiled. That means that there isn't really any need to run them through DiscourseJsProcessor. Instead, we can just concatenate them together, and avoid the need for all the sprockets-wrangling.
This commit also takes the opportunity to clean up a number of plugin-asset-related codepaths which are no longer required (e.g. globs, handlebars)
Why this change?
Since 1dba1aca27, we have been remapping
the `<->` proximity operator in a tsquery to `&`. However, there is
another variant of it which follows the `<N>` pattern. For example, the
following text "end-to-end" will eventually result in the following
tsquery `end-to-end:* <-> end:* <2> end:*` being generated by Postgres.
Before this fix, the tsquery is remapped to `end-to-end:* & end:* <2>
end:*` by us. This requires the search data which we store to contain
`end` at exactly 2 positions apart. Due to the way we limit the
number of duplicates in our search data, the search term may end up not
matching anything. In bd32912c5e, we made
it such that we do not allow any duplicates when indexing a topic's
title. Therefore, searching for `end-to-end` against a topic title containing
`end-to-end` will never match because our index will only contain one
`end` term.
What does this change do?
We will remap the `<N>` variant of the proximity operator.
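A rough sketch of the remapping; the regex is illustrative, and the real change lives in our tsquery handling:

```ruby
# Treat both the `<->` and `<N>` proximity operators as `&`.
tsquery = "end-to-end:* <-> end:* <2> end:*"
tsquery.gsub(/<->|<\d+>/, "&")
# => "end-to-end:* & end:* & end:*"
```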
We were having a minor issue with emails with embedded images
that had newlines in the alt string; for example:
```
<p class="MsoNormal"><span style="font-size:11.0pt"><img width="898"
height="498" style="width:9.3541in;height:5.1875in" id="Picture_x0020_5"
src="cid:image003.png@01DA4EBA.0400B610" alt="A screenshot of a computer
program
Description automatically generated"></span><span
style="font-size:11.0pt"><o:p></o:p></span></p>
```
Once this was parsed and converted to markdown (or directly to HTML
in some cases), this caused an issue in the composer and the post
UI, where the markdown parser didn't know how to deal with this,
making the HTML show directly instead of showing an image.
The easiest way to deal with this is to just strip `\n` from image
alt and title attributes in the `HtmlToMarkdown` class.
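A minimal sketch of the stripping step (illustrative):

```ruby
# Collapse newlines inside alt/title attribute values before emitting Markdown.
alt = "A screenshot of a computer\nprogram\nDescription automatically generated"
alt.gsub(/\s*\n\s*/, " ")
# => "A screenshot of a computer program Description automatically generated"
```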
These routes were previously rendered using Rails, and had a fairly fragile 2fa implementation in vanilla-js. This commit refactors the routes to be handled in the Ember app, removes the custom vanilla-js bundles, and leans on our centralized 2fa implementation. It also introduces a set of system specs for the behavior.
Safari has a bug which means that scripts with the `defer` attribute are executed before stylesheets have finished loading. This is being tracked at https://bugs.webkit.org/show_bug.cgi?id=209261.
This commit works around the problem by introducing a no-op inline `<script>` to the end of our HTML document. This works because defer scripts are guaranteed to run after inline scripts, and inline scripts are guaranteed to run after any preceding stylesheets.
Technically we only need this for Safari. But given that the cost is so low, it makes sense to include it everywhere rather than incurring the complexity of gating it by user-agent.
In a handful of situations, we need to verify a user's 2fa credentials before `current_user` is assigned. For example: login, email_login and change-email confirmation. This commit adds an explicit `target_user:` parameter to the centralized 2fa system so that it can be used for those situations.
For safety and clarity, this new parameter only works for anon. If some user is logged in, and target_user is set to a different user, an exception will be raised.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_to_tag_topics site setting to tag_topic_allowed_groups.
Previously, it was not possible to modify the sorting order of the `TopicQuery` result from a plugin. This feature adds support to specify custom sorting functionality in a plugin. We're using the `apply_modifier` method in the `DiscoursePluginRegistry` module to achieve it.
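A hypothetical plugin-side usage; the modifier name and block signature below are assumptions for illustration, not the exact modifier registered in core:

```ruby
# In a plugin's plugin.rb: tweak the ordering of the TopicQuery result.
register_modifier(:topic_query_custom_sort) do |result, sort_column, topic_query|
  sort_column == "my_plugin_score" ? result.order("topics.word_count DESC") : result
end

# Core would then apply it roughly like:
#   DiscoursePluginRegistry.apply_modifier(:topic_query_custom_sort, result, sort_column, self)
```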
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_for_user_api_key site setting to user_api_key_allowed_groups.
This isn't used by any of our plugins or themes, so very little fallout.
If configuring only moderators in a group based access setting, the mapping to the old setting wouldn't work correctly, because the case was unaccounted for.
This PR accounts for moderators group when doing the mapping.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_post_links site setting to post_links_allowed_groups.
This isn't used by any of our plugins or themes, so very little fallout.
- Decrease gravity, we come in too hot prioritizing too many new topics
- Remove all muted topics / categories and tags from the hot list
- Punish topics with zero likes in algorithm
This introduces a new experimental hot sort ordering.
It attempts to float top conversations by first prioritizing topics with lots of recent activity (likes and users responding)
The scheduled job that updates hot topics is disabled unless the hidden site setting `experimental_hot_topics` is enabled.
You can control "decay" with `hot_topic_gravity` and "recency" with `hot_topics_recent_days`
Data is stored in the new `topic_hot_scores` table and you can check it out on the `/hot` route once
enabled.
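As an illustration only (not the actual formula used by the hot score job), a gravity-style decay looks roughly like this:

```ruby
# Recent activity pushes a topic up; the score decays as the topic ages, with
# `gravity` controlling how quickly.
def hot_score(recent_likes:, age_hours:, gravity:)
  (recent_likes + 1) / (age_hours + 2)**gravity
end

hot_score(recent_likes: 20, age_hours: 6, gravity: 1.8)  # fresh, active topic
hot_score(recent_likes: 20, age_hours: 72, gravity: 1.8) # same likes, much older => lower score
```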
---------
Co-authored-by: Penar Musaraj <pmusaraj@gmail.com>
Previously only Sidekiq was allowed to generate more than one optimized image at the same time per machine. This adds an easy mechanism to allow the same in rake tasks and other tools.
Why this change?
While the constant does not change very often, we should still avoid
duplicating the value of a server-side constant on the client side,
to avoid the values going out of sync.
Why this change?
Currently, it is hard to iteratively write theme settings migrations
because our theme migrations system does not support rollbacks. Therefore, we
want to allow theme developers to write QUnit tests for their
theme migration files, enabling them to iteratively write their theme
migrations.
What does this change do?
1. Update `Theme#baked_js_tests_with_digest` to include all `ThemeField`
records of `ThemeField#target` equal to `migrations`. Note that we do
not include the `settings` and `themePrefix` variables for migration files.
2. Don't minify JavaScript test files because it makes debugging in
development hard.
This will make it easier to analyze rate limiting in reverse-proxy logs. To make this possible without a database lookup, we add the username to the encrypted `_t` cookie data.
When navigating straight to a topic the category was not displayed at
all because the categories were not loaded. Similarly, the categories
for suggested topics were not loaded either.
This commit adds a list of categories to topic view model class and
serializer.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_send_email_messages site setting to send_email_messages_allowed_groups.
Some plugins have names (e.g. discourse-x-yz) that
are totally different from what they are actually called,
and that causes issues when showing them in a sorted way
in the admin plugin list.
Now, we should use the setting category name from client.en.yml
if it exists, otherwise fall back to the name, for sorting.
This is what we do on the client to determine what text to
show for the plugin name as well.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_to_create_tag site setting to create_tag_allowed_groups.
This PR maintains backwards compatibility until we can update plugins and themes using this.
This was used by chat's HTML documentation experiment. That documentation experiment isn't being actively used/updated, but may be revisited in future. Therefore, this commit updates the jsdoc config to remove the custom theme, but keeps it functional (with the default jsdoc theme).
This fixes an issue where any string for an enum site setting
(such as TrustLevelSetting) would be converted to an integer
if the default value for the enum was an integer. This is an
issue because things like "admin" and "staff" would get silently
converted to 0 which is "valid" because it's TrustLevel[0],
but it's unexpected behaviour. It's best to just let the site
setting validator catch this broken value.
Why this change?
The `can survive cache miss` test in `spec/requests/stylesheets_controller_spec.rb`
was failing because the file was not found on disk for the cache to be
regenerated. This is because a test in
`spec/lib/stylesheet/manager_spec.rb` was removing the entire
`tmp/stylesheet-cache` directory which is incorrect because the folder
in the test environment further segregates the stylesheet caches based
on the test process.
What does this change do?
1. Introduce `Stylesheet::Manager.rm_cache_folder` method for the test
environment to properly clean up the cache folder.
2. Make `Stylesheet::Manager::CACHE_PATH` a private constant since the
cache path should be obtained from the `Stylesheet::Manager.cache_fullpath` method.
Why this change?
We have been running into flaky tests which seems to be related to
AR transaction problems. However, we are not able to reproduce this
locally and do not have sufficient information on our builds now to
debug the problem.
What does this change do?
Note that the following changes only apply when `ENV["GITHUB_ACTIONS"]` is
present.
This change introduces an RSpec around hook when `capture_log: true` has
been set for a test. The responsibility of the hook is to capture the
ActiveRecord debug logs and print them out.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_allow_self_wiki site setting to self_wiki_allowed_groups.
Nothing of note here. This is used in exactly one place, and there's no fallout.
When setting an old TL based site setting in the console e.g.:
`SiteSetting.min_trust_level_to_allow_ignore = TrustLevel[3]`
We will silently convert this to the corresponding Group::AUTO_GROUP. And vice-versa, when we read the value on the old setting, we will automatically get the lowest trust level corresponding to the lowest auto group for the new setting in the database.
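A sketch of that mapping; `Group::AUTO_GROUPS` is a real constant, while the helper below is illustrative rather than the actual implementation:

```ruby
# Map a trust level to the id of the corresponding auto group.
def trust_level_to_auto_group_id(trust_level)
  Group::AUTO_GROUPS[:"trust_level_#{trust_level}"]
end

# SiteSetting.min_trust_level_to_allow_ignore = TrustLevel[3]
# would be written to the new setting roughly as:
#   SiteSetting.ignore_allowed_groups = trust_level_to_auto_group_id(3).to_s
```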
Why this change?
Currently we only rerun failing tests to check if they are flaky tests
when there are 10 or fewer failing tests. When there are more than 10
failing tests in the first run, we assume that the odds of those tests
being flaky are low and do not rerun the tests. However, there was a bug
where we did not clean up the potential flaky tests that had been logged when
there were too many test failures. This resulted in those test failures
being treated as flaky tests.
What does this change do?
Clean up the flaky tests report when we do not rerun the tests.
Fixes a bug that allowed TL1 users to convert others' posts to wikis.
The issue was introduced in this PR: https://github.com/discourse/discourse/pull/24999/files
A wiki can be created if a user is TL3 and it is their own post (default 3 for the `SiteSetting.min_trust_to_allow_self_wiki` setting).
In addition, a wiki can be created by staff and TL4 users for any post.
This changes the Plugins link in the admin sidebar to
be a section instead, which then shows all enabled plugin
admin routes (custom routes that some plugins, e.g.
chat, define).
This is done via adding some special preloaded data for
all controllers based on AdminController, and also specifically
on Admin::PluginsController, to have the routes loaded without
additional requests on page load.
We just use a cog for all the route icons for now...we don't
have anything better.
Ability to automatically generate migration when site setting is changed from trust level to groups.
Example usage:
`rails generate site_setting_move_to_groups_migration min_trust_to_create_topic create_topic_allowed_groups`
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_to_allow_ignore site setting to ignore_allowed_groups.
This PR maintains backwards compatibility until we can update plugins and themes using this.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_to_allow_invite site setting to invite_allowed_groups.
Nothing much of note. This is used in one place and there's no fallout.
Using min_trust_to_create_topic and create_topic_allowed_groups together was part of #24740
Now that plugin specs are fixed, we can safely remove that part of the logic.
This is v0 of admin sidebar navigation, which moves
all of the top-level admin nav from the top of the page
into a sidebar. This is hidden behind an `enable_admin_sidebar_navigation`
site setting, and is opt-in for now.
This sidebar is dynamically shown whenever the user enters an
admin route in the UI, and is hidden and replaced with either
the:
* Main forum sidebar
* Chat sidebar
Depending on where they navigate to. For now, custom sections
are not supported in the admin sidebar.
This commit removes the experimental admin sidebar generation rake
task but keeps the experimental sidebar UI for now for further
testing; it just uses the real nav as the default now.
Before, when we needed to get stats in a plugin, we called Core classes directly.
Introducing a plugin API will decouple plugins from Core and give us more freedom
in refactoring stats in Core. Without this API, I wasn't able to do all the refactorings
I wanted when working on d91456f.
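A hypothetical plugin-side usage of the new stats API; the method name and return shape are assumptions based on this description, not a confirmed signature:

```ruby
# In a plugin's plugin.rb: expose a plugin stat without touching Core classes.
register_stat("chat_messages") do
  { last_day: 10, "7_days" => 120, "30_days" => 440, count: 3200 }
end
```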
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_level_to_allow_user_card_background site setting to user_card_background_allowed_groups.
Nothing of note here. This is used in exactly one place, and there's no fallout.
This validator is used for site settings where one or more groups are to be input.
At the moment this validator just checks that the value isn't blank. This PR adds a validation for the existence of the groups passed in.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the tl4_delete_posts_and_topics site setting to delete_all_posts_and_topics_allowed_groups.
This one is a bit different from previous ones, as it's a boolean flag, and the default should be no group. Pay special attention to the migration during review.
* FEATURE: core code, tests for feature to allow backups to be removed based on a time window
* FEATURE: getting tests working for time-based backup
* FEATURE: getting tests running
* FEATURE: linting
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_flag_posts site setting to flag_post_allowed_groups.
Note: In the original setting, "posts" is plural. I have changed this to "post" singular in the new setting to match others.
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_edit_post site setting to edit_post_allowed_groups.
The old implementation will co-exist for a short period while I update any references in plugins and themes.
This change converts the min_trust_to_create_topic site setting to
create_topic_allowed_groups.
See: https://meta.discourse.org/t/283408
- Hides the old setting
- Adds the new site setting
- Add a deprecation warning
- Updates to use the new setting
- Adds a migration to fill in the new setting if the old setting was
changed
- Adds an entry to the site_setting.keywords section
- Updates tests to account for the new change
- After a couple of months, we will remove the min_trust_to_create_topic setting entirely.
Internal ref: /t/117248
This change converts the allow_uploaded_avatars site setting to uploaded_avatars_allowed_groups.
See: https://meta.discourse.org/t/283408
- Hides the old setting
- Adds the new site setting
- Adds a deprecation warning
- Updates to use the new setting
- Adds a migration to fill in the new setting if the old setting was changed
- Adds an entry to the site_setting.keywords section
- Updates tests to account for the new change
- After a couple of months, we will remove the allow_uploaded_avatars setting entirely.
Internal ref: /t/117248
What motivated this change?
Our builds on GitHub Actions have been extremely flaky, mostly due to system tests. This has led to a drop in confidence
in our test suite where our developers tend to assume that a failed job is due to a flaky system test. As a result, we
have had occurrences where changes that resulted in legitimate test failures are merged into the `main` branch because developers
assumed it was a flaky test.
What does this change do?
This change seeks to reduce the flakiness of our builds on Github Actions by automatically re-running RSpec tests once when
they fail. If a failed test passes subsequently in the re-run, we mark the test as flaky by logging it into a file on disk
which is then uploaded as an artifact of the GitHub workflow run. We understand that automatic re-runs will lead to
lower accuracy of our tests, but we accept this trade-off since a fragile build has a much greater impact
on our developers' time. Internally, the Discourse development team will be running a service to fetch the flaky tests
which have been logged for internal monitoring.
How is the change implemented?
1. A `--retry-and-log-flaky-tests` CLI flag is added to the `bin/turbo_rspec` CLI which will then initialize `TurboTests::Runner`
with the `retry_and_log_flaky_tests` kwarg set to `true`.
2. When the `retry_and_log_flaky_tests` kwarg is set to `true` for `TurboTests::Runner`, we will register an additional
formatter `Flaky::FailuresLoggerFormatter` to the `TurboTests::Reporter` in the `TurboTests::Runner#run` method.
The `Flaky::FailuresLoggerFormatter` has a simple job of logging all failed examples to a file on disk when running all the
tests. The details of the failed example which are logged can be found in `TurboTests::Flaky::FailedExample.to_h`.
3. Once all the tests have been run once, we check the result for any failed examples and if there are, we read the file on
disk to fetch the `location_rerun_location` of the failed examples, which are then used to run the tests in a new RSpec process.
In the rerun, we configure a `TurboTests::Flaky::FlakyDetectorFormatter` with RSpec which removes all failed examples from the log file on disk since those examples are not flaky tests. Note that if there are too many failed examples on the first run, we will deem the failures to likely not be due to flaky tests and not re-run the test failures. As of writing, the threshold of failed examples is set to 10. If there are more than 10 failed examples, we will not re-run the failures.
Ability to automatically generate migration when site setting name is changed.
Example usage: `rails generate site_setting_rename_migration site_description contact_email`
Applies the embed_unlisted site setting consistently across topic embeds, including those created via the WP Discourse plugin. Relatedly, adds an embed exception to the `can_create_unlisted_topic?` check. Users creating embedded topics are not always staff.
This change converts the min_trust_to_edit_wiki_post site setting to edit_wiki_post_allowed_groups.
See: https://meta.discourse.org/t/283408
- Hides the old setting
- Adds the new site setting
- Adds a deprecation warning
- Updates to use the new setting
- Adds a migration to fill in the new setting if the old setting was changed
- Adds an entry to the site_setting.keywords section
- Updates tests to account for the new change
- After a couple of months, we will remove the min_trust_to_edit_wiki_post setting entirely.
Internal ref: /t/117248
Notable changes:
* Imports a lot more tables from core and plugins
* site settings
* uploads with necessary upload references
* groups and group members
* user profiles
* user options
* user fields & values
* muted users
* user notes (plugin)
* user followers (plugin)
* user avatars
* tag groups and tags
* tag users (notification settings for tags / user)
* category permissions
* polls with options and votes
* post votes (plugin)
* solutions (plugin)
* gamification scores (plugin)
* events (plugin)
* badges and badge groupings
* user badges
* optimized images
* topic users (notification settings for topics)
* post custom fields
* permalinks and permalink normalizations
* It creates the `migration_mappings` table which is used to store the mapping for a handful of imported tables
* Detects duplicate group names and renames them
* Pre-cooking for attachments, images and mentions
* Outputs instructions when gems are missing
* Supports importing uploads from a DB generated by `uploads_importer.rb`
* Checks that all required plugins exist and enables them if needed
* A couple of optimizations and additions in `import.rake`
Sassc-embedded fixes a performance issue with a leaking DartSass process. And it also fixes an issue with source map file paths (without any extra flags).
This commit refactors the Wizard component code in preparation for moving it to the 'static' directory for Embroider route-splitting. It also includes a number of general improvements and simplifications.
Extracted from https://github.com/discourse/discourse/pull/23678
Co-authored-by: Godfrey Chan <godfreykfc@gmail.com>
Introduces the concept of image thumbnails in chat; prior to this, we uploaded and used full-size chat images within channels and direct messages.
The following changes are covered:
- Post processing of image uploads to create the thumbnail within Chat::MessageProcessor
- Extract responsive image ratios into CookedProcessorMixin (used for creating upload variations)
- Add thumbnail to upload serializer from plugin.rb
- Convert chat upload template to glimmer component using .gjs format
- Use thumbnail image within chat upload component (stores full size img in orig-src data attribute)
- Old uploads which don't have thumbnails will fallback to full size images in channels/DMs
- Update Magnific lightbox to use full size image when clicked
- Update Glimmer lightbox to use full size image (enables zooming for chat images)
I took the wrong approach here, need to rethink.
* Revert "FIX: Use Guardian.basic_user instead of new (anon) (#24705)"
This reverts commit 9057272ee2.
* Revert "DEV: Remove unnecessary method_missing from GuardianUser (#24735)"
This reverts commit a5d4bf6dd2.
* Revert "DEV: Improve Guardian devex (#24706)"
This reverts commit 77b6a038ba.
* Revert "FIX: Introduce Guardian::BasicUser for oneboxing checks (#24681)"
This reverts commit de983796e1.
c.f. de983796e1
There will soon be additional login_required checks
for Guardian, and the intent of many checks by automated
systems is better fulfilled by using BasicUser, which
simulates a logged in TL0 forum user, rather than an
anon user.
In some cases the use of anon still makes sense (e.g.
anonymous_cache), and in that case the more explicit
`Guardian.anon_user` is used
It's quite confusing for blank? to be overridden
on AnonymousUser and BasicUser to represent
whether the fake user is authenticated or not;
we can achieve the same thing more clearly with
a wrapper GuardianUser class around these
user classes. Also fixes an issue where
`def user` would be returning nil.
Through internal discussion, it has become clear that
we need a conceptual Guardian user that bridges the
gap between anon users and a logged in forum user with
an absolute baseline level of access to public topics,
which can be used in cases where:
1. Automated systems are running which shouldn't see any
private data
1. A baseline level of user access is needed
In this case we are fixing the latter; when oneboxing a local
topic, and we are linking to a topic in another category from
the current one, we need to operate off a baseline level of
access, since not all users have access to the same categories,
and we don't want e.g. editing a post with an internal link to
expose sensitive internal information.
We add `Access-Control-Allow-Origin: *` to all asset requests which are requested via a configured CDN. This is particularly important now that we're using browser-native `import()` to load the highlightjs bundle. Unfortunately, the user-configurable 'cors_origins' site setting was overriding the wildcard value on CDN assets and causing CORS errors.
This commit updates the logic to give the `*` value precedence, and adds a spec for the situation. It also invalidates the cache of hljs assets (because CDNs will have cached the bad Access-Control-Allow-Origin header).
The rack-cors middleware is also slightly tweaked so that it is always inserted. This makes things easier to test and more consistent.
Followup e37fb3042d
* Automatically remove the prefix `Discourse ` from all the plugin titles to avoid repetition
* Remove the :discourse_dev: icon from the author. Consider a "By Discourse" with no labels as official
* We add a `label` metadata to plugin.rb
* Only plugins made by us in `discourse` and `discourse-org` GitHub organizations will show these in the list
* Make the plugin author font size a little smaller
* Make the commit sha look like a link so it's more obvious it goes to the code
Also I added some validation and truncation for plugin metadata
parsing since currently you can put absolutely anything in there
and it will show on the plugin list.
In development, I sometimes get `nil` `location.absolute_path` values. It looks like this is sometimes expected (https://bugs.ruby-lang.org/issues/10561) so we should fallback to `.path` and add a nil check.
This change refactors the check `user.groups.any?` and instead uses
`user.staged?` to check if the user is staged or not.
Also fixes several tests to ensure the users have their auto trust level
groups created.
Follow up to:
- 8a45f84277
- 447d9b2105
- c89edd9e86
This value is included when generating static asset URLs. Updating the value will allow site operators to invalidate all asset urls to recover from configuration issues which may have been cached by CDNs/browsers.
This commit introduces the scaffolding for us to easily switch between Ember 3.28 and Ember 5 on the `main` branch of Discourse. Unfortunately, there is no built-in system to apply this kind of flagging within yarn / ember-cli. There are projects like `ember-try` which are designed for running against multiple version of a dependency, but they do not allow us to 'lock' dependency/sub-dependency versions, and are therefore unsuitable for our use in production.
Instead, we will be maintaining two root `package.json` files, and two `yarn.lock` files. For ember-3, they remain as-is. For ember5, we use a yarn 'resolution' to override the version for ember-source across the entire yarn workspace.
To allow for easy switching with minimal diff against the repository, `package.json` and `yarn.lock` are symlinks which point to `package-ember3.json` and `yarn-ember3.lock` by default. To switch to Ember 5, we can run `script/switch ember version 5` to update the symlinks to point to `package-ember5.json` and `yarn-ember5.lock` respectively. In production, and when using `bin/ember-cli` for development, the ember version can also be upgraded using the `EMBER_VERSION=5` environment variable.
When making changes to dependencies, these should be made against the default `ember3` versions, and then `script/regen_ember_5_lockfile` should be used to regenerate `yarn-ember5.lock` accordingly. A new 'Ember Version Lockfiles' GitHub workflow will automate this process on Dependabot PRs.
When running a local environment against Ember 5, the two symlink changes will show up as git diffs. To avoid us accidentally committing/pushing that change, another GitHub workflow is introduced which checks the default Ember version and raises an error if it is greater than v3.
Supporting two ember versions simultaneously obviously carries significant overhead, so our aim will be to get themes/plugins updated as quickly as possible, and then drop this flag.