I introduced a regression in 17366d3bcc (diff-ddeebb36d131f89ca91be9d04c2baefaR10)
When a tag is added, people watching that specific tag are notified, but so are people watching the specific category.
Therefore, `notify_post_users` should accept options specifying who should be notified.
So when a `category` is added to the topic, users watching the topic and users watching the category are notified.
When a `tag` is added to the topic, users watching the topic and users watching the tag are notified.
Finally, when a new post is created, everybody is notified: topic watchers, category watchers, and tag watchers.
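A minimal sketch of the intended call sites; the keyword option names here are hypothetical and the real signature may differ:
```
# Hypothetical option names illustrating the intent.
# Category changed: notify topic watchers and category watchers only.
post_alerter.notify_post_users(post, notified_users, include_tag_watchers: false)

# Tag changed: notify topic watchers and tag watchers only.
post_alerter.notify_post_users(post, notified_users, include_category_watchers: false)

# New post: notify everybody (topic, category, and tag watchers).
post_alerter.notify_post_users(post, notified_users)
```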
* Fix user title logic when the badge name is customized
* Fix an issue where a user's title was not considered a badge-granted title when the user used a badge for their title and the badge name was customized. This affected the effectiveness of revoke_ungranted_titles!, which only operates on badge_granted_titles.
* When a user's title is set, it is now considered a badge_granted_title if the badge name OR the badge's custom name from TranslationOverride is the same as the title (see the sketch after this list)
* When a user's badge is revoked, we now also revoke their title if the user's title matches the badge name OR the badge's custom name from TranslationOverride
* Add a user history log entry when the title is revoked, to remove confusion about why titles are revoked
* Add granted_title_badge_id to user_profile; now when we set badge_granted_title on a user profile while updating a user's title based on a badge, we also remember which badge matched the title
* When a badge name (or its custom text) changes, update the titles of affected users in a background job
* When the name of a badge changes, or in the case of system badges when their custom translation text changes, we need to update the title of all corresponding users who have a badge_granted_title and a matching granted_title_badge_id. In the case of system badges we first need to get the proper badge ID based on the translation key, e.g. badges.regular.name
* Add a migration to backfill all granted_title_badge_ids for both normal badge name titles and titles using custom badge text.
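A hedged sketch of the matching rule described above; the helper name and the construction of the translation key are assumptions, not the actual implementation:
```
# Hypothetical helper; the translation key construction for system
# badges is assumed here.
def title_matches_badge?(user, badge)
  return false if user.title.blank?

  # System badge names can be customized via TranslationOverride,
  # e.g. under a key like "badges.regular.name"
  custom_name =
    TranslationOverride.find_by(translation_key: "badges.#{badge.name.downcase}.name")&.value

  [badge.name, custom_name].compact.include?(user.title)
end
```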
The issue was mentioned in this [meta topic](https://meta.discourse.org/t/send-a-notification-to-watching-users-when-adding-tag/125314)
It works well when the category is changed, because the NotifyCategoryChange job already has that code:
```
if post&.topic&.visible?
post_alerter = PostAlerter.new
post_alerter.notify_post_users(post, User.where(id: args[:notified_user_ids]))
post_alerter.notify_first_post_watchers(post, post_alerter.category_watchers(post.topic))
end
```
The NotifyTagChange job was missing the `notify_post_users` call, so it worked only when your notification level was set to `watching first post`.
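A hedged sketch of the equivalent fix in the NotifyTagChange job, mirroring the snippet above; `tag_watchers` is assumed to be the tag counterpart of `category_watchers`:
```
if post&.topic&.visible?
  post_alerter = PostAlerter.new
  # Notify users watching the topic itself, not just first-post watchers
  post_alerter.notify_post_users(post, User.where(id: args[:notified_user_ids]))
  post_alerter.notify_first_post_watchers(post, post_alerter.tag_watchers(post.topic))
end
```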
- Allow revoking keys without deleting them
- Auto-revoke keys after a period of no use (default 6 months)
- Allow multiple keys per user
- Allow attaching a description to each key, for easier auditing
- Log changes to keys in the staff action log
- Move all key management to one place, and improve the UI
Adds the settings:
raw_email_max_length, raw_rejected_email_max_length, delete_rejected_email_after_days.
These settings control retention of the "raw" email logs.
raw_email_max_length ensures that if we get an incoming email that is huge, we will truncate it, removing uploads from the raw log.
raw_rejected_email_max_length introduces even more aggressive truncation for rejected incoming mail.
delete_rejected_email_after_days controls how many days we will keep rejected emails for (default 90).
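A minimal sketch of how the two truncation settings could be applied; the helper name is hypothetical and the real code path lives in the incoming email handling:
```
# Hypothetical helper illustrating the two limits only.
def truncate_raw_email(raw, rejected: false)
  max =
    if rejected
      SiteSetting.raw_rejected_email_max_length
    else
      SiteSetting.raw_email_max_length
    end
  raw && raw.length > max ? raw[0...max] : raw
end
```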
Previously, every hour we would run a full scan of the entire DB searching
for expired uploads that need to be moved to the tombstone folder.
This commit amends it so we only run the job 2 times per clean_orphan_uploads_grace_period_hours.
There is an upper bound of 7 days, so even if the grace period is set really
high, it will still run at least once a week.
By default we have a 48-hour grace period, so this amends the cleanup to run
daily instead of hourly, eliminating 23 of the 24 daily runs of this
ultra-expensive query.
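A minimal sketch of the resulting frequency under these rules (twice per grace period, capped at weekly); the scheduling code itself is assumed:
```
# Run twice per grace period, but never less often than weekly (168 hours).
grace_period = SiteSetting.clean_orphan_uploads_grace_period_hours # default 48
interval_hours = [grace_period / 2, 168].min # 24 with the default, i.e. daily
```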
Doing .pluck(:column).first is a very common pattern in Discourse, and in
most cases a limit clause isn't being added. Instead of adding a limit
clause to all these callsites, this commit adds two new methods to
ActiveRecord::Relation:
pluck_first, equivalent to limit(1).pluck(*columns).first,
and pluck_first!, which, like other finder methods, raises an exception
when no record is found.
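A minimal sketch of what the two helpers amount to, based on the equivalence stated above; the actual implementation may differ in details:
```
class ActiveRecord::Relation
  # Equivalent to limit(1).pluck(*columns).first
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  # Like other finder methods, raises when no record is found
  def pluck_first!(*columns)
    rows = limit(1).pluck(*columns)
    raise ActiveRecord::RecordNotFound if rows.empty?
    rows.first
  end
end

# Usage: User.where(username_lower: "admin").pluck_first(:id)
```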
* FIX: Do not encode the URL twice
Now that we encode slugs on the server we don't need this anymore.
Reverts fe5na33
* FIX: More places to deal with encoded slugs
* the param is a string now, not a hash
* FIX: Handle the nil slug on /categories
* DEV: Add seeded? method to identify default categories
* DEV: Use SiteSetting to keep track of seeded categories
When an admin changes the site setting slug_generation_method to
encoded, we weren't really encoding the slug, but just allowing non-ASCII
characters (unicode) in the slug.
That causes problems when a user posts a link to a topic without the slug, as
our topic controller tries to redirect the user to the correct URL that contains
the slug with unicode characters. Having unicode in the Location header of a
response is an RFC violation, and some browsers end up in a redirection loop.
Bug report: https://meta.discourse.org/t/-/125371?u=falco
This commit also checks if a site uses encoded slugs and clears all saved slugs
in the db so they can be regenerated by a onceoff job.
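A minimal sketch of the difference using only the Ruby standard library; the actual slug code in the commit is more involved:
```
require "uri"

slug = "테스트" # a unicode slug
# Before: stored and emitted as-is, putting unicode in the Location header
# After: percent-encoded, so redirects stay ASCII-only per the RFC
URI.encode_www_form_component(slug)
# => "%ED%85%8C%EC%8A%A4%ED%8A%B8"
```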
* FEATURE: Adds an extra protection layer when decompressing files.
* Rename exporter/importer to zip importer. Update old locale
* Added a new composite class to decompress a file with multiple strategies
* Set max file size inside a site setting
* Ensure that file is deleted after compression
* Sanitize paths and files before compressing/decompressing (see the sketch below)
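A hedged sketch of the kind of path sanitization the last bullet describes, guarding against "zip slip" style traversal; the helper name is illustrative:
```
# Hypothetical helper: resolve an archive entry inside the destination
# directory and refuse anything that escapes it.
def safe_destination_path(entry_name, destination_dir)
  destination_dir = File.expand_path(destination_dir)
  path = File.expand_path(entry_name, destination_dir)

  unless path.start_with?(destination_dir + File::SEPARATOR)
    raise "Invalid archive entry path: #{entry_name}"
  end

  path
end
```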
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and can instead just use standard
Ruby patterns to require files.
This is a far reaching change and we expect some followups here.
Previously, calculating thresholds for reviewables was done based on the
50th and 85th percentile across all reviewables. However, many forum
owners provided feedback that these thresholds were too easy to hit, in
particular when it came to auto hiding content.
The calculation has been adjusted to base the priorities on reviewables
that have a minimum of 2 scores (flags). This should push the number of
flags required to hide something higher than before.
On forums with very few flags, you don't want to calculate averages
because they won't be very useful. Stick with the defaults until we hit
at least 15 reviewables.
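A hedged sketch of the adjusted percentile calculation; the table and column names (`reviewables`, `reviewable_scores`) are assumptions based on the description, and the real job may differ:
```
# Stick with the defaults until there are at least 15 reviewables.
return if Reviewable.where("score > 0").count < 15

# Medium/high thresholds from reviewables with at least 2 scores (flags).
medium, high = DB.query_single(<<~SQL)
  SELECT PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY score),
         PERCENTILE_DISC(0.85) WITHIN GROUP (ORDER BY score)
  FROM reviewables
  WHERE score > 0
    AND (SELECT COUNT(*) FROM reviewable_scores rs
         WHERE rs.reviewable_id = reviewables.id) >= 2
SQL
```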
This reverts commit e805d44965.
We now have mechanisms in place to ensure the heartbeat will always
be scheduled even if the scheduler is overloaded, per 098f938b.
Databases can have a lot of user actions; self-joining and running an
aggregate on millions of rows can be very costly.
This optimisation reduces the regular window of consistency down to 13
hours, which ensures the job runs much faster.
* FIX: Heartbeat check per sidekiq process
* Rename method
* Remove heartbeat queues of previous bootups
* Address Regis' feedback
* Refactor before_start
* Update lib/demon/sidekiq.rb
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Update lib/demon/sidekiq.rb
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* Expire redis keys after 3600 seconds
* Don't use redis to store the list of queues
No need to let notifications stay around when users can't access
a topic after it was converted into a PM or posts were moved
into a restricted topic.
Also makes sure that moving to a new topic correctly uses the
guardian for the first post by enqueuing jobs outside of a
transaction.
DEV: deprecate `invite.via_email` in favor of `invite.emailed_status`
This commit adds a new column `emailed_status` in `invites` table for
tracking email sending status.
0 - not required
1 - pending
2 - bulk pending
3 - sending
4 - sent
For normal email invites, the invite record is created with emailed_status
set to 'pending'.
When bulk invites are sent, the invite record is created with emailed_status
set to 'bulk pending'.
For invites that generate a link, the invite record is created with
emailed_status set to 'not required'.
When the invite email is queued, emailed_status is updated to 'sending'.
Once the email is sent via the `InviteEmail` job, the invite's emailed_status
is updated to 'sent'.
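A minimal sketch of the status mapping as a Ruby constant, using the values listed above; how the real model exposes these is assumed:
```
# Values taken from the commit message; the accessor shape is assumed.
EMAILED_STATUS_TYPES = {
  not_required: 0,
  pending: 1,
  bulk_pending: 2,
  sending: 3,
  sent: 4
}.freeze
```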
* Revert "Revert "FEATURE: admin/user exports are compressed using the zip format (#7784)""
This reverts commit f89bd55576.
* Replace .tar.zip with .zip
* FEATURE: admin/user exports are compressed using the zip format
* Update translations. The theme exporter now exports a .zip file. The theme importer supports .zip and .gz files
* Fix controller test, update locale, and skip saving the csv export to disk
* Support private uploads in S3
* Use localStore for local avatars
* Add job to update private upload ACL on S3
* Test multisite paths
* Update ACL for private uploads in migrate_to_s3 task
The site settings beginning with "topic views heat" and "topic post like
heat" are set to defaults when installing Discourse, but there has not
been a process or guidance for updating these values based on
community activity.
This feature will update them once a month. The low, medium, and
high settings will be based on the minimums of the 45th, 25th, and
10th percentile topics respectively, so that 45% of topics will have
some "heat".
Disable automatic changes with the automatic_topic_heat_values setting.
This does two things
1. Our "index grace period" has been wound down to 1 day, there is no point
keeping a bloated index for a week, usually when people delete stuff they
mean for it to be removed
2. We were never dropping deleted posts from the index, only posts from
deleted topics
These changes speed up search a tiny bit and reduce background work.
The `AutoQueueHandler` will ignore really old flags. In that case, don't
notify the user that the moderator is looking into it. They probably
never saw it because it didn't meet the reviewable minimum priority.
Historically we would keep the user data export posts around but delete
the uploads.
This leaves a lot of broken uploads in the system.
This rake task allows us to clean up the old mess.
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces the memory usage of the app.
Test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging
We found score hard to understand. It is still there behind the scenes
for sorting purposes, but it is no longer shown.
You can now filter by minimum priority (low, med, high) instead of
score.
This removes all uses of both `send` and `public_send` from consumers of
SiteSetting and instead introduces a `get` helper for dynamic lookup
This leads to much cleaner and safer code long term, as we always explicitly
test that a site setting really exists before sending an arbitrary
string to the class.
It also removes a couple of risky stubs from the auth provider test
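A minimal sketch of the idea behind the `get` helper; the validation details are assumed, not the exact implementation:
```
# Before: dynamic lookup via public_send, risky with arbitrary strings
SiteSetting.public_send(setting_name)

# After: an explicit helper that verifies the setting exists first
class SiteSetting
  def self.get(name)
    unless respond_to?(name)
      raise Discourse::InvalidParameters, "No setting named '#{name}' exists"
    end
    public_send(name)
  end
end

SiteSetting.get(setting_name)
```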
After careful analysis of large datasets it became apparent that avg_time
had no impact whatsoever on "best of" topic scoring. Calculating avg_time
was a very costly operation, especially on large databases.
We have some longer term plans of introducing other weighting that is read
time based into our scoring for "best of" and "top" topics, but in the
interim to stop a large amount of work that is not achieving any value we
are removing the jobs.
Column removal will follow once we decide on a new replacement metric.
`Upload#url` is likely to change from time to time. When it
does change, we don't want to have to look through multiple tables to
ensure that the URLs are all up to date. Instead, we simply associate
uploads properly with `UserProfile` so that we do not have to replicate
the URLs in that table.
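A hedged sketch of the resulting association; the column and association names here are assumptions based on the description:
```
# Hypothetical association names illustrating the approach: store upload
# ids on user_profiles instead of copying Upload#url into the table.
class UserProfile < ActiveRecord::Base
  belongs_to :profile_background_upload, class_name: "Upload"
  belongs_to :card_background_upload, class_name: "Upload"
end

# The URL is then always read through the association:
# user_profile.profile_background_upload&.url
```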
* Fix handling SNS notifications for AWS SES
This fixes detection of email bounces by:
- removing the hard requirement for the email ID; the ID in the webhook message never equals the one in email_log
- getting bounce_score from user stats instead of a nonexistent field in the webhook message
* Remove empty line
* Prettify access to EmailLog for parsing SNS notification
Co-Authored-By: SystemZ <SystemZ@users.noreply.github.com>
If you turn it on now, all users default to approved, since they were
previously. Also supports approving a user that doesn't have a reviewable
record (it will be created first).
This also includes a refactor to move class method calls to
`DiscourseEvent` into an initializer. Otherwise the load order of
classes makes a difference in the test environment and some settings
might be triggered and others not, randomly.
Theme developers can include any number of scss files within the /scss/ directory of a theme. These can then be imported from the main common/desktop/mobile scss.
Also add a hard limit of 1000 users per job run so we do not clog the
scheduler.
destroyer.destroy runs in a transaction, and this can have some serious
complications with the open record set that find_each has going.
"Rejecting" a user in the queue is equivalent to deleting them, which
would then making it impossible to review rejected users. Now we store
information about the user in the payload so if they are deleted things
still display in the Rejected view.
Secondly, if a user is destroyed outside of the review queue, it will
now automatically "Reject" that queue item.
Conversely, if a user is deactivated the reviewable should automatically
be rejected.
Before this fix, if a user was not active they'd still show in the
review queue but without an "Approve" button which was confusing.
If the post ids keep loading, we might end up in a situation where
we're always loading the same post ids over and over again without
indexing anything new.
Follow up to daeda80ada.
This commit fixes the following quality issues with `PostSearchData#raw_data`:
1. URLs are being tokenized and links with similar href and characters
are being duplicated in the raw data.
`Post#cooked`:
```
<p><a href=\"https://meta.discourse.org/some.png\" class=\"onebox\" target=\"_blank\" rel=\"nofollow noopener\">https://meta.discourse.org/some.png</a></p>
```
`PostSearchData#raw_data` Before:
```
This is a test topic 0 Uncategorized https://meta.discourse.org/some.png discourse org/some png https://meta.discourse.org/some.png discourse org/some png
```
`PostSearchData#raw_data` After:
```
This is a test topic 0 Uncategorized https://meta.discourse.org/some.png meta discourse org
```
2. A lightbox being included in search pollutes
`PostSearchData#raw_data` unnecessarily.
From 28 March 2018 to 28 March 2019, searches for the term `image` on
`meta.discourse.org` had a click-through rate of 2.1%. Non-lightboxed images are not included in indexing for search, yet we were indexing content within a lightbox. Also, searches for terms like `image` were affected because we were using `Pasted image` as the filename for
uploads that were pasted.
`Post#cooked`
```
<p>Let me see how I can fix this image<br>\n<div class=\"lightbox-wrapper\"><a class=\"lightbox\" href=\"https://meta.discourse.org/some.png\" title=\"some.png\" rel=\"nofollow noopener\"><img src=\"https://meta.discourse.org/some.png\" width=\"275\" height=\"299\"><div class=\"meta\">\n<svg class=\"fa d-icon d-icon-far-image svg-icon\" aria-hidden=\"true\"><use xlink:href=\"#far-image\"></use></svg><span class=\"filename\">some.png</span><span class=\"informations\">1750×2000</span><svg class=\"fa d-icon d-icon-discourse-expand svg-icon\" aria-hidden=\"true\"><use xlink:href=\"#discourse-expand\"></use></svg>\n</div></a></div></p>
```
`PostSearchData#raw_data` Before:
```
This is a test topic 0 Uncategorized Let me see how I can fix this image some.png png https://meta.discourse.org/some.png discourse org/some png some.png png 1750×2000
```
`PostSearchData#raw_data` After:
```
This is a test topic 0 Uncategorized Let me see how I can fix this image
```
In terms of indexing performance, we now have to parse the given HTML
through Nokogiri twice. However, performance is not a huge worry here, since a string of length 194170 takes only 30ms
to scrub, and the indexing takes place in a background job.
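A hedged sketch of the scrubbing step, using the lightbox markup from the example above; the method name and exact selector are assumptions:
```
require "nokogiri"

# Hypothetical helper: drop lightboxed images (link, filename, dimensions,
# icons) from the cooked HTML before extracting text for indexing.
def scrub_for_search(cooked)
  fragment = Nokogiri::HTML.fragment(cooked)
  fragment.css(".lightbox-wrapper").remove
  fragment.text
end
```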
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
- The test_email job is removed, because it was always being run synchronously (not in sidekiq)
- 34b29f62 added a bypass for critical emails, to match the spec. This removes the bypass, and removes the spec.
- This adapts the specs for 72ffabf6, so that they check for emails being sent
- This reimplements c2797921, allowing test emails to be sent even when emails are disabled