This is for backwards compatibility purposes. Even if `Upload#url` has a
format that we don't recognize, we should still return the upload object
as long as the upload record is present.
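A minimal sketch of the intended fallback, using a hypothetical helper name (not the actual implementation):

```ruby
# Hypothetical sketch: resolve an Upload record from a URL, tolerating
# URL formats we don't recognize.
def upload_for_url(url)
  sha1 = extract_sha1_from_url(url) # hypothetical helper; nil for unknown formats
  upload = Upload.find_by(sha1: sha1) if sha1

  # Backwards compatibility: even when the format is unrecognized, fall back
  # to matching the stored url column so the upload object is still returned.
  upload || Upload.find_by(url: url)
end
```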
The test for `prevent_anons_from_downloading_files` was checking an image instead of an attachment, and it was checking the wrong upload URL. I fixed the test, but with `config.public_file_server.enabled = true` in the test environment it will always fail, because preventing anonymous file downloads depends on nginx. So I marked the test as skipped for now.
* DEV: Replace site_setting_saved DiscourseEvent with site_setting_changed
site_setting_saved is confusing for a few reasons:
- It is attached to the after_save of the ActiveRecord model. This is confusing because it only works 'properly' with the db_provider
- It passes the activerecord model as a parameter, which is confusing because you get access to the 'database' version of the setting, rather than the ruby setting. For example, booleans appear as 'y' or 'n' strings.
- When the event is called, the local process cache has not yet been updated. So if you call SiteSetting.setting_name inside the event handler, you will receive the old site setting value
I have deprecated that event, and added a new site_setting_changed event. It passes three parameters:
- Setting name (symbol)
- Old value (in ruby format)
- New value (in ruby format)
It is triggered after the setting has been persisted, and the local process cache has been updated.
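For example, a plugin could subscribe to the new event like this (the handler body is purely illustrative):

```ruby
# The block receives the setting name as a symbol plus the old and new
# values, both already converted to their ruby types.
DiscourseEvent.on(:site_setting_changed) do |name, old_value, new_value|
  if name == :title
    Rails.logger.info("title changed from #{old_value.inspect} to #{new_value.inspect}")
  end
end
```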
This commit also includes a test case which describes the confusing behavior. This can be removed once site_setting_saved is removed.
If you turn it on now, all users default to approved, since they effectively were previously. Also support approving a user that doesn't have a reviewable record (it will be created first).
This also includes a refactor to move class method calls to
`DiscourseEvent` into an initializer. Otherwise the load order of
classes makes a difference in the test environment and some settings
might be triggered and others not, randomly.
This new site setting determines the maximum age of unread topics shown in suggested. By default, any unread topics older than 90 days are omitted from suggested.
This change was added for 2 reasons:
1. As a performance safeguard: some users tend to collect a huge amount of read state, so it becomes super expensive to find unread topics.
2. People who collect a large amount of unread are much more interested in recent unread topics than in ancient ones, so this makes suggested more relevant.
Also, this is a minor speed-up for tests, because 3 expensive tests became 1.
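A rough sketch of the age cutoff described above (the setting name here is an assumption):

```ruby
# Hypothetical sketch: exclude unread topics older than the configured age
# before doing the expensive unread calculation for suggested topics.
max_age = SiteSetting.suggested_topics_unread_max_days_old # assumed setting name
unread_candidates = Topic.where("topics.updated_at > ?", max_age.days.ago)
```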
Theme developers can include any number of scss files within the /scss/ directory of a theme. These can then be imported from the main common/desktop/mobile scss.
* FIX: correctly retrieve 'login required' setting value on wizard
* FEATURE: extract 'invite only' setting into a separate checkbox control
* Update invite_only checkbox locale on wizard.
Co-Authored-By: techAPJ <arpit@techapj.com>
The brotli compression functionality is no longer optional; it has worked well for years. The name of the ENV var was also confusing because it did not have a `DISCOURSE_` prefix, which caused issues with the web upgrader.
Brotli support is now unconditionally on.
Previously, jobs would fail silently in test mode. Now they will raise the exception and cause the relevant test to fail. This identified a few broken tests, which I will fix in a follow-up commit.
- Plugin developers using OpenID2.0 should migrate to OAuth2 or OIDC. OpenID2.0 APIs will be removed in v2.4.0
- For sites requiring Yahoo login, it can be implemented using the OpenID Connect plugin: https://meta.discourse.org/t/103632
For more information, see https://meta.discourse.org/t/113249
In certain edge cases, the message bus won't send the message to the
user about the updated review count and it can go out of sync.
This patch synchronizes the review count every time:
1. The user visits the "Needs Review" page
2. The user performs an action
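A minimal sketch of what that synchronization can look like (channel name, payload shape and helper are assumptions):

```ruby
# Hypothetical sketch: push the user's current reviewable count over the
# message bus so the client-side badge resyncs even if an earlier message
# was lost.
def publish_review_count(user)
  count = Reviewable.list_for(user).count
  MessageBus.publish("/reviewable_counts", { reviewable_count: count }, user_ids: [user.id])
end
```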
This optimisation avoids large scans joining the topics table with the
topic_users table.
Previously, when a user carried a lot of read state, we would have to join the entire read state with the topics table. This operation would slow down the home page and every topic page. The more read state you accumulated, the larger the impact.
The optimisation helps people who clean up their unread; however, if you carry unread from years ago it will only have a minimal impact.
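A simplified sketch of the idea (the tracking column is an assumption): bound the join by the age of the user's oldest unread topic instead of scanning their entire read state.

```ruby
# Hypothetical sketch: only join topics updated since the user's oldest
# possible unread topic, rather than the full topic_users read state.
first_unread_at = user.user_stat.first_unread_at # assumed tracking column
unread = TopicUser
  .joins(:topic)
  .where(user_id: user.id)
  .where("topics.updated_at >= ?", first_unread_at)
  .where("topic_users.last_read_post_number < topics.highest_post_number")
```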
A new checkbox has been added to the Tags tab of the category settings modal
which is used when some tags and/or tag groups are restricted to the category,
and all other unrestricted tags should also be allowed.
Default is the same as the previous behaviour: only allow the specified set of
tags and tag groups in the category.
Sometimes sidekiq is so fast that it starts jobs before transactions have committed. This patch moves the message bus publishing to after the transaction has committed.
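The general pattern is the standard Rails one: defer the publish until the enclosing transaction commits. A sketch with a hypothetical model:

```ruby
# Sketch: publish on the message bus only after the transaction has
# committed, so a fast sidekiq worker can't act on data it cannot see yet.
class ExampleRecord < ActiveRecord::Base # hypothetical model
  after_commit :publish_change, on: [:create, :update]

  private

  def publish_change
    MessageBus.publish("/example-channel", id: id, updated_at: updated_at)
  end
end
```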
"Rejecting" a user in the queue is equivalent to deleting them, which
would then making it impossible to review rejected users. Now we store
information about the user in the payload so if they are deleted things
still display in the Rejected view.
Secondly, if a user is destroyed outside of the review queue, it will now automatically "Reject" that queue item.
Similarly, if a user is deactivated, the reviewable should automatically be rejected.
Before this fix, if a user was not active they'd still show in the review queue but without an "Approve" button, which was confusing.
Previously, due to #b2dc65f9534ea, the date on the quoted_posts table could not be trusted.
This changes it so we use the date on the actual post as the grant date.
Note: there is an edge case where you create a post and only add a quote
a week later. In this case the badge will not be awarded at the correct
time (it will display it was granted a week ago).
That said, this is far rarer than the current situation.
Previously, every rebake would remove and recreate records in this table, which caused created_at and updated_at to keep changing.
Yes, I know the SQL is somewhat complex, but this makes quote extraction more efficient because we do everything in 2 round trips.
This also removes some concurrency protection we should no longer need.
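The commit does this with two raw SQL statements; the ActiveRecord-flavoured sketch below only shows the intent (the helper name is hypothetical):

```ruby
# Keep existing rows so created_at/updated_at stay stable, remove quotes
# that no longer appear in the post, and insert only the new ones.
quoted_ids = extract_quoted_post_ids(post.cooked) # hypothetical helper

QuotedPost.where(post_id: post.id).where.not(quoted_post_id: quoted_ids).delete_all

existing = QuotedPost.where(post_id: post.id).pluck(:quoted_post_id)
(quoted_ids - existing).each do |quoted_post_id|
  QuotedPost.create!(post_id: post.id, quoted_post_id: quoted_post_id)
end
```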
This functionality was never supported, but before the new review queue it didn't produce any errors. Now the combination of settings is prevented, and existing sites with SSO enabled will be migrated to remove invite only.
If the post ids keep loading, we might end up in a situation where we're always loading the same post ids over and over again without indexing anything new.
Follow up to daeda80ada.
The default ranking option ranks by the number of matches, which is highly problematic when posts are stuffed with a keyword. The ranking is now divided by the document length, which is a much fairer way to rank.
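In Postgres full-text terms this maps to a normalization flag on the ranking function; flag 2 divides the rank by the document length. A sketch, not necessarily the exact expression used:

```ruby
# Hypothetical sketch: ts_rank_cd's normalization flag 2 divides the rank by
# the document length, so keyword-stuffed posts no longer win on match count.
term_sql = ActiveRecord::Base.connection.quote("discourse") # quoted search term
ranked_posts = Post
  .joins("JOIN post_search_data ON post_search_data.post_id = posts.id")
  .where("post_search_data.search_data @@ to_tsquery('english', #{term_sql})")
  .order(Arel.sql("ts_rank_cd(post_search_data.search_data, to_tsquery('english', #{term_sql}), 2) DESC"))
```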
This commit fixes the following quality issues with `PostSearchData#raw_data`:
1. URLs are being tokenized, and links whose href and text are similar end up duplicated in the raw data.
`Post#cooked`:
```
<p><a href=\"https://meta.discourse.org/some.png\" class=\"onebox\" target=\"_blank\" rel=\"nofollow noopener\">https://meta.discourse.org/some.png</a></p>
```
`PostSearchData#raw_data` Before:
```
This is a test topic 0 Uncategorized https://meta.discourse.org/some.png discourse org/some png https://meta.discourse.org/some.png discourse org/some png
```
`PostSearchData#raw_data` After:
```
This is a test topic 0 Uncategorized https://meta.discourse.org/some.png meta discourse org
```
2. Lightbox content being included in search pollutes `PostSearchData#raw_data` unnecessarily.
From 28 March 2018 to 28 March 2019, searches for the term `image` on `meta.discourse.org` had a click-through rate of 2.1%. Non-lightboxed images are not included in indexing for search, yet we were indexing content within a lightbox. Also, searches for terms like `image` were affected because we were using `Pasted image` as the filename for uploads that were pasted.
`Post#cooked`:
```
<p>Let me see how I can fix this image<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://meta.discourse.org/some.png\" title=\"some.png\" rel=\"nofollow noopener\"><img src=\"https://meta.discourse.org/some.png\" width=\"275\" height=\"299\"><div class=\"meta\">\n<svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use xlink:href=\"#far-image\"></use></svg><span class=\"filename\">some.png</span><span class=\"informations\">1750×2000</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use xlink:href=\"#discourse-expand\"></use></svg>\n</div></a></div></p>
```
`PostSearchData#raw_data` Before:
```
This is a test topic 0 Uncategorized Let me see how I can fix this image some.png png https://meta.discourse.org/some.png discourse org/some png some.png png 1750×2000
```
`PostSearchData#raw_data` After:
```
This is a test topic 0 Uncategorized Let me see how I can fix this image
```
In terms of indexing performance, we now have to parse the given HTML through Nokogiri twice. However, performance is not a huge worry here since a string of length 194170 takes only 30ms to scrub, plus the indexing takes place in a background job.
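A simplified sketch of the scrubbing idea (the helper name is hypothetical and the real indexer's selectors may differ):

```ruby
require "nokogiri"
require "uri"

# Hypothetical sketch: drop lightbox markup and collapse onebox links to the
# URL plus host tokens before the text is indexed.
def scrub_for_index(cooked)
  doc = Nokogiri::HTML.fragment(cooked)

  # The filename/dimension metadata inside a lightbox only pollutes the index.
  doc.css(".lightbox-wrapper").remove

  # Index the link host split into tokens instead of repeating the full URL
  # as both href and link text.
  doc.css("a.onebox").each do |a|
    host = URI.parse(a["href"]).host rescue nil
    a.replace("#{a["href"]} #{host&.tr(".", " ")}")
  end

  doc.text
end
```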
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
Previously we would bypass touching `Topic.updated_at` for whispers and post
recovery / deletions.
This meant that certain types of caching could not be done where we rely on this information for cache accuracy.
For example, if we know we have zero unread topics as of yesterday and a whisper is made, we need to bump this date so the cache remains accurate.
This is only half of a larger change but provides the groundwork.
Confirmed none of our serializers leak out Topic.updated_at, so this is a safe spot for this info.
At the moment edits still do not change this, but that is not relevant for the unread cache.
This commit also cleans up some specs to use the new `eq_time` matcher for millisecond-fidelity comparison of times.
Previously `freeze_time` would fudge this, which is not that clean.
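An illustrative use of the matcher (the spec shown is an assumption, not a quote from the commit):

```ruby
# Compare timestamps at millisecond fidelity instead of relying on
# freeze_time to make them equal.
it "bumps the topic when a whisper is created" do
  whisper = create_post(topic_id: topic.id, post_type: Post.types[:whisper])
  expect(topic.reload.updated_at).to eq_time(whisper.created_at)
end
```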
- The test_email job is removed, because it was always being run synchronously (not in sidekiq)
- 34b29f62 added a bypass for critical emails, to match the spec. This removes the bypass, and removes the spec.
- This adapts the specs for 72ffabf6, so that they check for emails being sent
- This reimplements c2797921, allowing test emails to be sent even when emails are disabled
Fixes two issues:
1. Redirecting to an external origin's path after login did not work
2. User would be erroneously redirected to the external origin after logout
https://meta.discourse.org/t/109755
* This is causing certain posts to appear in searches incorrectly, as `PostSearchData#raw_data` contains the outdated title, category name and tag names.
* Remove use of 0 in favor of `TrustLevel.levels[:newuser]`.
* Consolidate two tests into a single one.
* Test that disabling the feature works.
* Avoid loading the full ActiveRecord object in tests when we only need to know whether the record exists.