On sites that were seeded with a general category id whose category
was deleted prior to the fix in efb116d2bd,
this migration will reset SiteSetting.general_category_id back to
the default, because there is no longer a corresponding category for it.
This is to fix a composer bug that occurs if there is a
SiteSetting.general_category_id value present that doesn't match an
existing category.
This commit introduces a new framework for building user tutorials as
popups using the Tippy JS library. Currently, the new framework is used
to replace the old notification spotlight and tips and show a new one
related to the topic timeline.
All popups follow the same structure and have a title, a description and
two buttons for either dismissing just the current tip or all of them
at once.
The state of all seen popups is stored in a user option. Updating
skip_new_user_tips will automatically update the list of seen popups
accordingly.
Adds a new upload field for a second dark mode category logo.
This alternative will be used when the browser is in dark mode (similar to the global site setting for a dark logo).
* FEATURE: Make General the default category
* Set general as the default category in the composer model instead
* use semicolon
* Enable allow_uncategorized_topics in create_post spec helper for now
* Check if general_category_id is set
* Enable allow_uncategorized_topics for test env
* Provide an option to the create_post helper to not set allow_uncategorized_topics
* Add tests to check that category… is not present and that General is selected automatically
This commit renames all secure_media related settings to secure_uploads_* along with the associated functionality.
This is being done because "media" does not really cover it, we aren't just doing this for images and videos etc. but for all uploads in the site.
Additionally, in future we want to secure more types of uploads, and enable a kind of "mixed mode" where some uploads are secure and some are not, so keeping media in the name is just confusing.
This also keeps compatibility with the `secure-media-uploads` path, and changes new
secure URLs to be `secure-uploads`.
Deprecated settings:
* secure_media -> secure_uploads
* secure_media_allow_embed_images_in_emails -> secure_uploads_allow_embed_images_in_emails
* secure_media_max_email_embed_image_size_kb -> secure_uploads_max_email_embed_image_size_kb
To replace `enable_personal_messages` and
`min_trust_to_send_messages`, this commit introduces
the setting `personal_message_enabled_groups`
and uses it in all places that `enable_personal_messages`
and `min_trust_to_send_messages` currently apply.
A migration is included to set `personal_message_enabled_groups`
based on the following rules:
* If `enable_personal_messages` was false, then set
`personal_message_enabled_groups` to `3`, which is
the staff auto group
* If `min_trust_to_send_messages` is not default (1)
and the above condition is false, then set the
`personal_message_enabled_groups` setting to
the appropriate auto group based on the trust level
* Otherwise just set `personal_message_enabled_groups` to
11 which is the TL1 auto group
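A minimal sketch of those rules in Ruby (the auto group ids 3 and 11 come from the description above; the trust-level-to-auto-group mapping is an assumption for illustration, and the real migration operates on the site_settings table directly):

```
def personal_message_enabled_groups_value
  if !SiteSetting.enable_personal_messages
    "3"  # staff auto group
  elsif SiteSetting.min_trust_to_send_messages != 1
    # assumed mapping: trust level N -> auto group id 10 + N
    (10 + SiteSetting.min_trust_to_send_messages).to_s
  else
    "11" # TL1 auto group
  end
end
```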
After follow-up PRs to plugins using these old settings, we will be
able to drop the old settings from core, in the meantime I've added
DEPRECATED notices to their descriptions and added them
to the deprecated site settings list.
This commit also introduces a `_map` shortcut method definition
for all `group_list` site settings, e.g. `SiteSetting.personal_message_enabled_groups`
also has `SiteSetting.personal_message_enabled_groups_map` available,
which automatically splits the setting by `|` and converts it into
an array of integers.
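For example (values illustrative):

```
SiteSetting.personal_message_enabled_groups      # => "3|11"
SiteSetting.personal_message_enabled_groups_map  # => [3, 11]
```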
See https://meta.discourse.org/t/discourse-email-messages-are-incorrectly-threaded/233499
for thorough reasoning.
This commit changes how we generate Message-IDs and do email
threading for emails sent from Discourse. The main changes are
as follows:
* Introduce an outbound_message_id column on Post that
is either a) filled with a Discourse-generated Message-ID
the first time that post is used for an outbound email
or b) filled with an original Message-ID from an external
mail client or service if the post was created from an
incoming email.
* Change Discourse-generated Message-IDs to be more consistent
and static, in the format `discourse/post/:post_id@:host`
* Do not send References or In-Reply-To headers for emails sent
for the OP of topics.
* Make sure that In-Reply-To is filled with either a) the OP's
Message-ID if the post is not a direct reply or b) the parent
post's Message-ID
* Make sure that In-Reply-To has all referenced posts' Message-IDs
* Make sure that References is filled with a chain of Message-IDs
from the OP down to the parent post of the new post.
We also are keeping X-Discourse-Post-Id and X-Discourse-Topic-Id,
headers that we previously removed, for easier visual debugging
of outbound emails.
Finally, we backfill the `outbound_message_id` for posts that have
a linked `IncomingEmail` record, using the `message_id` of that record.
We do not need to do that for posts that don't have an incoming email
since they are backfilled at runtime if `outbound_message_id` is missing.
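As a rough illustration of the static format described above (the helper name is made up; the actual implementation lives elsewhere):

```
# Builds the consistent Message-ID `discourse/post/:post_id@:host` for a post.
def discourse_message_id(post)
  "discourse/post/#{post.id}@#{Discourse.current_hostname}"
end
```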
The `add_column` `limit` parameter has no effect on a postgres `text` column. Instead we can perform the check in ActiveRecord.
We never expect this condition to be hit - users cannot control this value. It's just a safety net.
Adds limits to location and website fields at model and DB level
to match the bio_raw field limits. A limit cannot be added at the
DB level for bio_raw because it is a postgres text field.
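A sketch of the ActiveRecord-level check described above (the limit value is illustrative):

```
class UserProfile < ActiveRecord::Base
  # Model-level check; for bio_raw this is the only enforceable limit, since
  # `limit` has no effect on a postgres text column.
  validates :location, :website, length: { maximum: 3000 }
end
```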
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
We previously had a system which would generate a 10x10px preview of images and add their URLs in a data-small-upload attribute. The client would then use that as the background-image of the `<img>` element. This works reasonably well on fast connections, but on slower connections it can take a few seconds for the placeholders to appear. The act of loading the placeholders can also break or delay the loading of the 'real' images.
This commit replaces the placeholder logic with a new approach. Instead of a 10x10px preview, we use imagemagick to calculate the average color of an image and store it in the database. The hex color value is then added as a `data-dominant-color` attribute on the `<img>` element, and the client can use this as a `background-color` on the element while the real image is loading. That means no extra HTTP request is required, and so the placeholder color can appear instantly.
Dominant color will be calculated:
1. When a new upload is created
2. During a post rebake, if the dominant color is missing from an upload, it will be calculated and stored
3. Every 15 minutes, 25 old upload records are fetched and their dominant color calculated and stored. (part of the existing PeriodicalUpdates job)
Existing posts will continue to use the old 10x10px placeholder system until they are next rebaked
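A minimal sketch of the calculation, assuming ImageMagick's `convert` is on the path (not Discourse's exact implementation):

```
require "shellwords"

# Average the image down to a single pixel and pull the hex value out of the
# `txt:` output (a line like `0,0: (186,144,99)  #BA9063  srgb(186,144,99)`).
def dominant_color(path)
  `convert #{path.shellescape} -resize 1x1 txt:-`[/#(\h{6})/, 1]
end
```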
If a user was granted a trust level, joined a group that granted a trust
level and left the group, the trust level was reset. This commit tries
to restore the last known trust level before joining the group by
looking into staff logs.
This commit also migrates old :change_trust_level user history records
to use previous_value and new_value fields.
* FEATURE: Add case-sensitivity flag to watched_words
Currently, all watched words are matched case-insensitively. This flag
allows a watched word to be flagged for case-sensitive matching.
To allow for backwards compatibility, the flag is set to false by
default.
* FEATURE: Support case-sensitive creation of Watched Words via API
Extend admin creation and upload of Watched Words to support the case-sensitive
flag. This lays the groundwork for supporting case-sensitive
matching of Watched Words.
Support for an extra column has also been introduced for the Watched
Words upload CSV file. The new column structure is as follows:
word,replacement,case_sensitive
* FEATURE: Enable case-sensitive matching of Watched Words
WordWatcher's word_matcher_regexp now returns a list of regular
expressions instead of one case-insensitive regular expression.
With the ability to flag a Watched Word as case-sensitive, an action
can have words of both sensitivities. This makes the use of the global
Regexp::IGNORECASE flag added to all words problematic.
To get around platform limitations around the use of subexpression level
switches/flags, a list of regular expressions is returned instead, one for each
case sensitivity.
Word matching has also been updated to use this list of regular expressions
instead of one.
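A hedged sketch of the idea (names and the word structure are illustrative):

```
# Build one combined regexp per case sensitivity instead of a single
# IGNORECASE regexp covering every word.
def word_matcher_regexp_list(words)
  sensitive, insensitive = words.partition { |w| w[:case_sensitive] }
  [
    (Regexp.union(sensitive.map { |w| w[:word] }) if sensitive.any?),
    (Regexp.new(Regexp.union(insensitive.map { |w| w[:word] }).source, Regexp::IGNORECASE) if insensitive.any?),
  ].compact
end
```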
* FEATURE: Use case-sensitive regular expressions for Watched Words
Update Watched Words regular expressions matching and processing to handle
the extra metadata which comes along with the introduction of
case-sensitive Watched Words.
This allows case-sensitive Watched Words to be matched as such.
* DEV: Simplify type casting of case-sensitive flag from uploads
Use builtin semantics instead of a custom method for converting
string case flags in uploaded Watched Words to boolean.
* UX: Add case-sensitivity details to Admin Watched Words UI
Update Watched Word form to include a toggle for case-sensitivity.
This also adds support for case-sensitive testing and matching of Watched Words
in the admin UI.
* DEV: Code improvements from review feedback
- Extract watched word regex creation out to a utility function
- Make JS array presence check more explicit and readable
* DEV: Extract Watched Word regex creation to utility function
Clean-up work from review feedback. Reduce code duplication.
* DEV: Rename word_matcher_regexp to word_matcher_regexp_list
Since a list is returned now instead of a single regular expression,
change `word_matcher_regexp` to `word_matcher_regexp_list` to better communicate
this change.
* DEV: Incorporate WordWatcher updates from upstream
Resolve conflicts and ensure apply_to_text does not remove non-word characters in matches
that aren't at the beginning of the line.
When viewing a topic, we execute two queries to fetch the topic's
public topic timer and slow mode timer. The former query happens to be
able to use a unique index but the latter has to do a seq scan which is
slow. The query itself is not expensive but since viewing a topic is a
hot path, the little cuts add up over time and the query itself
contributes significantly to the load of the database.
* FIX: Rejected emails should not be cleaned up before their logs
If we delete the rejected emails before we delete their associated logs
we will receive 404 errors trying to inspect an email message for that
log.
* don't add a blank line
* test for max value as well
* pr cleanup and add migration
* Fix failing test
It's already included in the `ignored_columns` list in the group model. 03ffb0bf27/app/models/group.rb (L9)
Also, removed the `MigrateGroupFlairImages` onceoff job and spec.
At some point in the past we decided to rename the 'regular' notification state of topics/categories to 'normal'. However, some UI copy was missed when the initial renaming was done so this commit changes the spots that were missed to the new name.
When sending emails with delivery_method_options -> return_response
set to true, the SMTP sending code inside Mail will return the SMTP
response when calling deliver! for mail within the app. This commit
ensures that Email::Sender captures this response if it is returned
and stores it against the EmailLog created for the sent email.
A follow up PR will make this visible within the admin email UI.
`selectable_avatars_urls` contains invalid data (it's a backup from 20200810194943_change_selectable_avatars_site_setting.rb)
This migration is deliberately backdated so that it runs before `20220330160747_copy_site_settings_uploads_to_upload_references`
Follow-up to 9db8f00b3d:
the theme_settings.value field is not an integer column and so
can be ''; we need to account for this in the migration,
otherwise we get this error:
> PG::InvalidTextRepresentation: ERROR: invalid input syntax for type integer: ""
This table holds associations between uploads and other models. This can be used to prevent removing uploads that are still in use.
* DEV: Create upload_references
* DEV: Use UploadReference instead of PostUpload
* DEV: Use UploadReference for SiteSetting
* DEV: Use UploadReference for Badge
* DEV: Use UploadReference for Category
* DEV: Use UploadReference for CustomEmoji
* DEV: Use UploadReference for Group
* DEV: Use UploadReference for ThemeField
* DEV: Use UploadReference for ThemeSetting
* DEV: Use UploadReference for User
* DEV: Use UploadReference for UserAvatar
* DEV: Use UploadReference for UserExport
* DEV: Use UploadReference for UserProfile
* DEV: Add method to extract uploads from raw text
* DEV: Use UploadReference for Draft
* DEV: Use UploadReference for ReviewableQueuedPost
* DEV: Use UploadReference for UserProfile's bio_raw
* DEV: Do not copy user uploads to upload references
* DEV: Copy post uploads again after deploy
* DEV: Use created_at and updated_at from uploads table
* FIX: Check if upload site setting is empty
* DEV: Copy user uploads to upload references
* DEV: Make upload extraction less strict
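Roughly, the polymorphic shape this series introduces (a sketch, not the full model; the `target` association name is an assumption):

```
class UploadReference < ActiveRecord::Base
  belongs_to :upload
  belongs_to :target, polymorphic: true
end

# e.g. recording that a badge image is still in use:
# UploadReference.create!(upload: upload, target: badge)
```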
In f00e282067 we added this
DELETE query to delete duplicate for_topic bookmarks; we
just need this further refinement to the WHERE clause to
avoid deleting post bookmarks.
This commit migrates all bookmarks to be polymorphic (using the
bookmarkable_id and bookmarkable_type columns). It also deletes
all the old code guarded behind the use_polymorphic_bookmarks setting
and changes that setting to true for all sites and by default for
the sake of plugins.
No data is deleted in the migrations, the old post_id and for_topic
columns for bookmarks will be dropped later on.
This migration is failing to acquire a lock under some production conditions. We're only performing one action, so removing the transaction is safe and may help to resolve the issue.
Currently the only way to allow tagging on pms is to use the `allow_staff_to_tag_pms` site setting. We are removing that site setting and replacing it with `pm_tags_allowed_for_groups` which will allow for non staff tagging. It will be group based permissions instead of requiring the user to be staff.
If the existing value of `allow_staff_to_tag_pms` is `true` then we include the `staff` groups as a default for `pm_tags_allowed_for_groups`.
We have not used anything related to bookmarks for PostAction
or UserAction records since 2020, bookmarks are their own thing
now. Deleting all this is just cleaning up old cruft.
We're adding this column now in preparation for a future commit(s) that will
redesign the avatar/notifications menu. The reason the column is added in a
separate commit is because the redesign changes are going to be complex with a
high risk of getting (temporarily) reverted and if they included a database
migration, they wouldn't revert cleanly/easily.
Internal ticket: t65045.
Truly testing for JSON validity would require defining a new postgres function. Checking just the first character should take care of all the cases of invalid historic data that we've seen.
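For illustration, the kind of first-character check meant here (table and column names are made up):

```
# Valid JSON documents we care about start with '{' or '['; anything else in
# historic data is treated as invalid and removed.
execute <<~SQL
  DELETE FROM example_settings
  WHERE json_value IS NOT NULL AND left(json_value, 1) NOT IN ('{', '[')
SQL
```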
In the old custom_field-based system, it was possible for a url to be both 'downloaded' and 'broken'. The new table enforces uniqueness, so we need to drop invalid data.
On some installations, this would fail with 'index row size exceeds btree version 4 maximum'. This commit replaces the `(post_id, url)` index with a `(post_id, md5(url))` index, which is much more space efficient.
This will make future changes to the 'pull hotlinked images' system easier. This commit should not introduce any functional change.
For now, the old post_custom_field data is kept in the database. This will be dropped in a future commit.
* hidden siteSetting to enable experimental sidebar
* user preference to enable experimental sidebar
* `experimental_sidebar_enabled` attribute for current user
* Empty glimmer component for Sidebar
* FEATURE: Let sites add a sitemap.xml file.
This PR adds the same features discourse-sitemap provides to core. Sitemaps are only added to the robots.txt file if the `enable_sitemap` setting is enabled and `login_required` disabled.
After merging discourse/discourse-sitemap#34, this change will take priority over the sitemap plugin because it will disable itself. We're also using the same sitemaps table, so our migration won't try to create it
again using `if_not_exists: true`.
There is no need to wait until after the deploy for this cleanup. In fact, running it later will mean there could be a window of a few minutes during which the site is broken.
The only requirement is that it runs after the broken `20220401130745_create_category_required_tag_groups` migration.
Followup to 39a6de3d73
The category table's required_tag_group_id contained references to deleted tag groups, which we copied to the new table. The new serializer tries to get the associated tag group name but fails because the tag group is nil.
This PR adds an inner join in the original migration to make sure tag groups still exist and adds a new post-migration to fix already migrated sites.
Previously we only supported a single 'required tag group' for a category. This commit allows admins to specify multiple required tag groups, each with their own minimum tag count.
A new category_required_tag_groups database table replaces the existing columns on the categories table. Data is automatically migrated.
As we are gradually moving to having a polymorphic
bookmarkable relationship on the Bookmark table,
we need to make the post_id column nullable to be
able to develop and test the new columns, and
for cutover/migration purposes later as well.
This commit promotes all post_deploy migrations which existed in Discourse v2.7.13 (timestamp <= 20210328233843)
This reduces the likelihood of issues relating to migration run order
Also fixes a couple of typos in `script/promote_migrations`
This commit is a redo of 2f1ddadff7dd47f824070c8a3f633f00a27aacde
which we reverted because it blew up an internal CI check. I looked
into it, and it happened because the old migration to add the bookmark
columns still existed, and those columns were dropped in a post migrate,
so the two migrations to add the columns were conflicting before
the post migrate was run.
------
This commit only includes the creation of the new columns and index,
and does not add any triggers, backfilling, or new data.
A backfill will be done in the final PR when we switch this over.
Intermediate PRs will look something like this:
1. Add an experimental site setting for using polymorphic bookmarks,
and make sure in the places where bookmarks are created or updated
we fill in the columns. This setting will be used in subsequent
PRs as well.
2. Listing and searching bookmarks based on polymorphic associations
3. Creating post and topic bookmarks using polymorphic associations,
and changing special for_topic logic to just rely on the Topic
bookmarkable_type
4. Querying bookmark reminders based on polymorphic associations
5. Make sure various other areas like importers, bookmark guardian,
and others all rely on the associations
6. Prepare plugins that rely on the Bookmark model to use polymorphic
associations
The final core PR will remove all the setting gates and switch over
to using the polymorphic associations, backfill the bookmarks
table columns, and ignore the old post_id and for_topic columns.
Then it will just be a matter of dropping the old columns down the
line.
This commit is a redo of e21c640a3c
which we reverted to not include half-done work in a release.
This commit is slightly different though, in that it only includes
the creation of the new columns and index, and does not add any
triggers, backfilling, or new data.
A backfill will be done in the final PR when we switch this over.
Intermediate PRs will look something like this:
1. Add an experimental site setting for using polymorphic bookmarks,
and make sure in the places where bookmarks are created or updated
we fill in the columns. This setting will be used in subsequent
PRs as well.
2. Listing and searching bookmarks based on polymorphic associations
3. Creating post and topic bookmarks using polymorphic associations,
and changing special for_topic logic to just rely on the Topic
bookmarkable_type
4. Querying bookmark reminders based on polymorphic associations
5. Make sure various other areas like importers, bookmark guardian,
and others all rely on the associations
6. Prepare plugins that rely on the Bookmark model to use polymorphic
associations
The final core PR will remove all the setting gates and switch over
to using the polymorphic associations, backfill the bookmarks
table columns, and ignore the old post_id and for_topic columns.
Then it will just be a matter of dropping the old columns down the
line.
The meaning of reminder_at and reminder_last_sent_at changed after
commit 6d422a8033. A bookmark reminder
will fire only if reminder_last_sent_at is null, but before that it
fired every time reminder_at was set. This is no longer true because
sometimes reminder_at continues to exist even after a reminder fired.
The user can select what happens with a bookmark after it expires. A new
option allows the bookmark's reminder to be kept even after it has expired.
After a bookmark's reminder notification is created, the reminder date
will be highlighted in red until the user resets the reminder date.
User can do that using the new Clear Reminder button from the dropdown.
The search_ignore_accents site setting can be used to make the search
indexer remove the accents before indexing the content. The unaccent
function from PostgreSQL is better than Ruby's unicode_normalize(:nfkd).
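An illustrative comparison (assumes the `unaccent` extension is enabled; `DB` is Discourse's MiniSql interface):

```
DB.query_single("SELECT unaccent('débâcle')").first
# => "debacle"

# The Ruby alternative: decompose, then strip the combining characters.
"débâcle".unicode_normalize(:nfkd).gsub(/[^\x00-\x7F]/, "")
# => "debacle"
```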
This URL was originally updated in 89cb537fae. However, some sites are not using the proxy, and have configured their forum to hotlink images directly to avatars.discourse.org.
We intend to shut down this domain in favor of `avatars.discourse-cdn.com`, so this migration will re-write any matching site setting values and queue affected posts for rebaking.
This allows text editors to use correct syntax coloring for the heredoc sections.
Heredoc tag names we use:
languages: SQL, JS, RUBY, LUA, HTML, CSS, SCSS, SH, HBS, XML, YAML/YML, MF, ICS
other: MD, TEXT/TXT, RAW, EMAIL
* FEATURE: upload an avatar option for uploading avatars with selectable avatars
Allow staff or users at or above a trust level to upload avatars even when the site
has selectable avatars enabled.
Everyone can still pick from the list of avatars. The option to upload is shown
below the selectable avatar list.
refactored boolean site setting into an enum with the following values:
disabled: No selectable avatars enabled (default)
everyone: Show selectable avatars, and allow everyone to upload custom avatars
tl1: Show selectable avatars, but require tl1+ and staff to upload custom avatars
tl2: Show selectable avatars, but require tl2+ and staff to upload custom avatars
tl3: Show selectable avatars, but require tl3+ and staff to upload custom avatars
tl4: Show selectable avatars, but require tl4 and staff to upload custom avatars
staff: Show selectable avatars, but only allow staff to upload custom avatars
no_one: Show selectable avatars. No users can upload custom avatars
Co-authored-by: Régis Hanol <regis@hanol.fr>
This commit makes sure that the email log's bounce_error_code
conforms to the SMTP error code RFC on save, so that
it is always in the format X.X.X or XXX without any
additional string details. Also included is a migration
to fix this issue for past records.
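A hedged sketch of the conformance check (the method name is made up):

```
# Keep only an X.X.X enhanced status code or a bare XXX code, dropping any
# trailing detail text like mailbox names or server messages.
def normalized_bounce_error_code(raw)
  raw.to_s[/\A\d+\.\d+\.\d+/] || raw.to_s[/\A\d{3}/]
end

normalized_bounce_error_code("5.1.1 <user@example.com>: User unknown") # => "5.1.1"
```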
We added this constraint in 5bd55acf83
but it is causing problems in hosted sites and is catching the
issue too far down the line. This commit removes the constraint
for now, and also fixes an issue found with PostDestroyer
which wasn't using the UserStatCountUpdater when updating post_count
and thus was causing negative numbers to occur.
Whenever we got a bounced email in the Email::Receiver we
previously would just set bounced: true on the EmailLog and
discard the status/diagnostic code. This commit changes this
flow to store the bounce error code (defined in the RFC at
https://www.iana.org/assignments/smtp-enhanced-status-codes/smtp-enhanced-status-codes.xhtml)
not just in the Email::Receiver, but also via webhook events
from other mail services and from SNS.
This commit does not surface the bounce error in the UI,
we can do that later if necessary.
Follow up to 88a8584348. Sets
the baked version of all posts with custom emoji and a secure
media URL in the cooked content to 0. Then our periodic rebake
posts job will rebake them to apply the fix in the linked
commit. This only matters on sites with secure media enabled.
* FEATURE: Add external_id to topics
This commit allows for topics to be created and fetched by an
external_id. These changes are API only for now as there aren't any
front-end changes.
* add annotations
* add external_id to this spec
* Several PR feedback changes
- Add guardian to find topic
- 403 is returned for not found as well now
- add `include_external_id?`
- external_id is now case insensitive
- added test for posts_controller
- added test for topic creator
- created constant for max length
- check that it redirects to the correct path
- restrain external id in routes file
* remove puts
* fix tests
* only check for external_id in webhook if exists
* Update index to exclude external_id if null
* annotate
* Update app/controllers/topics_controller.rb
We need to check whether the topic is present first before passing it to the guardian.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* Apply suggestions from code review
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
This commit allows group SMTP emails to be sent with a
different from email address that has been set up as an
alias in the email provider. Emails from the alias will
be grouped correctly using Message-IDs in the mail client,
and replies to the alias go into the correct group inbox.
Ensures that `UserStat#post_count` and `UserStat#topic_count` do not
go below 0. When they do, as happened here, we tend to end up with bugs in our
code, since we're usually coding with the assumption that the count isn't
negative.
In order to support the constraints, our post and topic fabricators in
tests will now automatically increment the count for the respective
user's `UserStat` as well. We have to do this because our fabricators
bypass `PostCreator` which holds the responsibility of updating `UserStat#post_count` and
`UserStat#topic_count`.
* Chinese segmentation will continue to rely on cppjieba
* Japanese segmentation will use our port of TinySegmenter
* Korean currently does not rely on segmentation which was dropped in c677877e4f
* SiteSetting.search_tokenize_chinese_japanese_korean has been split
into SiteSetting.search_tokenize_chinese and
SiteSetting.search_tokenize_japanese respectively
Non-staff users are not allowed to see whispers, so this change prevents
non-staff users from seeing a like count that does not make sense to
them. In the future, we might consider adding another like count column
for staff users.
Follow-up to 4492718864
It is too close to release of 2.8 for incomplete
feature shenanigans. Ignores and drops the columns and drops
the trigger/function introduced in
e21c640a3c.
Will pick this feature back up post-release.
We are planning on attaching bookmarks to more and
more other models, so it makes sense to make a polymorphic
relationship to handle this. This commit adds the new
columns and backfills them in the bookmark table, and
makes sure that any new bookmark changes fill in the columns
via DB triggers.
This way we can gradually change the frontend and backend
to use these new columns, and eventually delete the
old post_id and for_topic columns in `bookmarks`.
The `fancy_title` column in the `topics` table currently has a constraint that limits the column to 400 characters. We need to remove that constraint because it causes some automatic topics/PMs from the system to fail when using Discourse in locales that need more than 400 characters to translate the content of those automatic messages.
Internal ticket: t58030.
This commit removes the enable_experimental_backup_uploader site
setting and the flags in backups-index.hbs to make the uppy
backup uploader the main one from now on.
A follow-up commit will delete the old backup uploader code and
also remove resumable.js from the project.
We were checking for the existence of the column in any schema, including the `backup` schema. This can cause 'column does not exist' errors. In fact, we should only be checking in the `public` schema.
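A sketch of the corrected lookup, scoped to the `public` schema (table and column names are illustrative):

```
DB.query_single(<<~SQL).present?
  SELECT 1
  FROM information_schema.columns
  WHERE table_schema = 'public'
    AND table_name = 'topics'
    AND column_name = 'image_url'
SQL
```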
This column was dropped in a previous commit, in post migrations.
Unfortunately that causes smoke tests to fail, as there is a period between
migrations and post migrations where records cannot be inserted into the
table.
This commit introduces a new site setting "google_oauth2_hd_groups". If enabled, group information will be fetched from Google during authentication, and stored in the Discourse database. These 'associated groups' can be connected to a Discourse group via the "Membership" tab of the group preferences UI.
The majority of the implementation is generic, so we will be able to add support to more authentication methods in the near future.
https://meta.discourse.org/t/managing-group-membership-via-authentication/175950
The old OnceOff job could perform pretty slowly on sites with millions of emails.
The new implementation operates in batches in a migration, minimizing locking.
Currently when a user creates posts that are moderated (for whatever
reason), a popup is displayed saying the post needs approval and the
total number of the user’s pending posts. But then this piece of
information is kind of lost and there is nowhere for the user to see
what their pending posts are or how many there are.
This patch solves this issue by adding a new “Pending” section to the
user’s activity page when there are some pending posts to display. When
there are none, then the “Pending” section isn’t displayed at all.
This commit adds token_hash and scopes columns to email_tokens table.
token_hash is a replacement for the token column to avoid storing email
tokens in plaintext as it can pose a security risk. The new scope column
ensures that email tokens cannot be used to perform a different action
than the one intended.
To sum up, this commit:
* Adds token_hash and scope to email_tokens
* Reuses code that schedules critical_user_email
* Refactors EmailToken.confirm and EmailToken.atomic_confirm methods
* Periodically cleans old, unconfirmed or expired email tokens
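A sketch of the hashing idea (the exact digest and lookup code are assumptions):

```
require "digest"

# Persist only a digest of the token; look tokens up by re-hashing the value
# the user presents, so a leaked database row can't be replayed directly.
def hash_token(token)
  Digest::SHA256.hexdigest(token)
end

# EmailToken.find_by(token_hash: hash_token(presented_token))
```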
When this setting is turned on, it will check that normalized emails
are unique. Normalized emails are emails without any dots or plus
aliases.
This setting can be used to block use of aliases of the same email
address.
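An illustrative sketch of the normalization described (not necessarily the exact rules used):

```
# Strip dots from the local part and drop any +alias suffix.
def normalize_email(email)
  local, domain = email.downcase.split("@", 2)
  "#{local.delete(".").sub(/\+.*\z/, "")}@#{domain}"
end

normalize_email("first.last+forum@example.com") # => "firstlast@example.com"
```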
In b8c8909a9d, we introduced a regression
where users may have had their `UserStat.first_unread_pm_at` set
incorrectly. This commit introduces a migration to reset `UserStat.first_unread_pm_at` back to
`User#created_at`.
Follow-up to b8c8909a9d.
Instead of using image-uploader, which relies on the old
UploadMixin, we can now use the uppy-image-uploader which
uses the new UppyUploadMixin which is stable enough and
supports both regular XHR uploads and direct S3 uploads,
controlled by a site setting (default to XHR).
At some point it may make sense to rename uppy-image-uploader
back to image-uploader, once we have gone through plugins
etc. and given a bit of deprecation time period.
This commit also fixes `for_private_message`, `for_site_setting`,
and `pasted` flags not being sent via uppy uploads onto the
UploadCreator, both via regular XHR uploads and also through
external/multipart uploads.
The uploaders changed are:
* site setting images
* badge images
* category logo
* category background
* group flair
* profile background
* profile card background
It allows saving a local date to the calendar.
The modal gives the option to pick between ICS and Google. The user's choice can be remembered as a default for future actions.
* DEV: Remove HTML setting type and sanitization logic.
We concluded that we don't want settings to contain HTML, so I'm removing the setting type and sanitization logic. Additionally, we no longer allow the global-notice text to contain HTML.
I searched for usages of this setting type in the `all-the-plugins` repo and found none, so I haven't added a migration for existing settings.
* Mark Global notices containing links as HTML Safe.
* PERF: Improve database query perf when loading topics for a category.
Instead of left joining the `topics` table against `categories` by filtering with `categories.id`,
we can improve the query plan by filtering against `topics.category_id`
before joining, which helps to reduce the number of rows in the
topics table that have to be joined against the other tables and also
makes better use of our existing index.
The following is a before and after of the query plan for a category
with many subcategories.
Before:
```
                                                            QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1.28..747.09 rows=30 width=12) (actual time=85.502..2453.727 rows=30 loops=1)
   ->  Nested Loop Left Join  (cost=1.28..566518.36 rows=22788 width=12) (actual time=85.501..2453.722 rows=30 loops=1)
         Join Filter: (category_users.category_id = topics.category_id)
         Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
         ->  Nested Loop Left Join  (cost=1.00..566001.58 rows=22866 width=20) (actual time=85.494..2453.702 rows=30 loops=1)
               Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
               Rows Removed by Filter: 1
               ->  Nested Loop  (cost=0.57..528561.75 rows=68606 width=24) (actual time=85.472..2453.562 rows=31 loops=1)
                     Join Filter: ((topics.category_id = categories.id) AND ((categories.topic_id <> topics.id) OR (categories.id = 11)))
                     Rows Removed by Join Filter: 13938306
                     ->  Index Scan using index_topics_on_bumped_at on topics  (cost=0.42..100480.05 rows=715549 width=24) (actual time=0.010..633.015 rows=464623 loops=1)
                           Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text))
                           Rows Removed by Filter: 105321
                     ->  Materialize  (cost=0.14..36.04 rows=30 width=8) (actual time=0.000..0.002 rows=30 loops=464623)
                           ->  Index Scan using categories_pkey on categories  (cost=0.14..35.89 rows=30 width=8) (actual time=0.006..0.040 rows=30 loops=1)
                                 Index Cond: (id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[]))
               ->  Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu  (cost=0.43..0.53 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=31)
                     Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
         ->  Materialize  (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=30)
               ->  Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users  (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
                     Index Cond: (user_id = 1103877)
 Planning Time: 1.359 ms
 Execution Time: 2453.765 ms
(23 rows)
```
After:
```
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=1.28..438.55 rows=30 width=12) (actual time=38.297..657.215 rows=30 loops=1)
-> Nested Loop Left Join (cost=1.28..195944.68 rows=13443 width=12) (actual time=38.296..657.211 rows=30 loops=1)
Filter: ((categories.topic_id <> topics.id) OR (topics.category_id = 11))
Rows Removed by Filter: 29
-> Nested Loop Left Join (cost=1.13..193462.59 rows=13443 width=16) (actual time=38.289..657.092 rows=59 loops=1)
Join Filter: (category_users.category_id = topics.category_id)
Filter: ((topics.category_id = 11) OR (COALESCE(category_users.notification_level, 1) <> 0) OR (tu.notification_level > 1))
-> Nested Loop Left Join (cost=0.85..193156.79 rows=13489 width=20) (actual time=38.282..657.059 rows=59 loops=1)
Filter: ((COALESCE(tu.notification_level, 1) > 0) AND ((topics.category_id <> 11) OR (topics.pinned_at IS NULL) OR ((topics.pinned_at <= tu.cleared_pinned_at) AND (tu.cleared_pinned_at IS NOT NULL))))
Rows Removed by Filter: 1
-> Index Scan using index_topics_on_bumped_at on topics (cost=0.42..134521.06 rows=40470 width=24) (actual time=38.267..656.850 rows=60 loops=1)
Filter: ((deleted_at IS NULL) AND ((archetype)::text <> 'private_message'::text) AND (category_id = ANY ('{11,53,57,55,54,56,112,94,107,115,116,117,97,95,102,103,101,105,99,114,106,113,104,98,100,96,108,109,110,111}'::integer[])))
Rows Removed by Filter: 569895
-> Index Scan using index_topic_users_on_topic_id_and_user_id on topic_users tu (cost=0.43..1.43 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=60)
Index Cond: ((topic_id = topics.id) AND (user_id = 1103877))
-> Materialize (cost=0.28..2.30 rows=1 width=8) (actual time=0.000..0.000 rows=0 loops=59)
-> Index Scan using index_category_users_on_user_id_and_last_seen_at on category_users (cost=0.28..2.29 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=1)
Index Cond: (user_id = 1103877)
-> Index Scan using categories_pkey on categories (cost=0.14..0.17 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=59)
Index Cond: (id = topics.category_id)
Planning Time: 1.633 ms
Execution Time: 657.255 ms
(22 rows)
```
* PERF: Optimize index on topics bumped_at.
Replace `index_topics_on_bumped_at` index with a partial index on `Topic#bumped_at` filtered by archetype since there is already another index that covers private topics.
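For illustration, the shape of such a partial index (the exact predicate is assumed from the description):

```
execute <<~SQL
  CREATE INDEX index_topics_on_bumped_at
  ON topics (bumped_at)
  WHERE deleted_at IS NULL AND archetype <> 'private_message'
SQL
```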
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a).
This commit marks the column as readonly, ignores it, and removes
the index, and it will be dropped in a later PR. Some plugins
are relying on reminder_type partially so some stubs have been
left in place to avoid errors.
This new column will be used to indicate that a bookmark
is at the topic level. The first post of a topic can be
bookmarked twice after this change -- with for_topic set
to true and with for_topic set to false.
A later PR will use this column for logic to bookmark the
topic, and then topic-level bookmark links will take you
to the last unread post in the topic.
See also 22208836c5
We don't need no stinkin' denormalization! This commit ignores
the topic_id column on bookmarks, to be deleted at a later date.
We don't really need this column and it's better to rely on the
post.topic_id as the canonical topic_id for bookmarks, then we
don't need to remember to update both columns if the bookmarked
post moves to another topic.
This is necessary to allow for large file uploads via
the direct S3 upload mechanism, as we convert the external
file to an Upload record via ExternalUploadManager once
it is complete.
This will allow for files larger than 2,147,483,647 bytes (2.14GB)
to be referenced in the uploads table.
This is a table locking migration, but since it is not as highly
trafficked as posts, topics, or users, the disruption should be minimal.
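For illustration, the kind of change involved (the column name is an assumption):

```
# Widening an integer column to bigint locks the table while it rewrites,
# hence the note above about disruption.
change_column :uploads, :filesize, :bigint
```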
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100mb defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download and generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3 then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by ruby. The browser SHA1 checksum is stored on the object in S3 with metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing etc) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally we return the serialized upload record back to the client
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
This commit adds the number of drafts a user has next to the "Draft"
label in the user preferences menu and activity tab. The count is
updated via MessageBus when a draft is created or destroyed.
When configured, all topics in the category inherit the slow mode
duration from the category's default.
Note that currently there is no way to remove the slow mode from the
topics once it has been set.
Both of the commits in this PR are meant to fix the problem of an invalid
option being shown in the flair chooser. An invalid option can be shown
if at some point it was a valid one - a group with a flair that was
later changed by an admin so that the flair was removed. The other way an
invalid option can be selected is if the user had a primary group when
the migration ran, which copied the same value to the flair_group_id
column.
* FIX: Set flair_group_id only if group has flair
Follow up to 4ba93aac66.
* FIX: Do not show invalid option in flair chooser
If selected flair group became unavailable because the flair was removed
then the option would still be selected and visible as an ID only.
This is a follow up to commit 87c1e98571
which introduced different fields for primary and flair groups. Before
that, primary group was used as a flair group too.
User flair was given by user's primary group. This PR separates the
two, adds a new field to the user model for flair group ID and users
can select their flair from user preferences now.
Since disable_ddl_transaction! is disabled for this migration, it needs to be idempotent. Any error during the migration (e.g. a timeout) will cause ActiveRecord to fail the migration, and try again on the next run. If the index had already been created during the first run, then an 'already exists' error will be raised, with no way to recover.
Unfortunately an [ActiveRecord bug](https://github.com/rails/rails/pull/41490) prevents us from using `if_not_exists: true` alongside `algorithm: :concurrently`, so we have to drop to raw SQL.
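A sketch of the resulting migration shape (index and table names are illustrative):

```
class AddExampleIndexConcurrently < ActiveRecord::Migration[6.1]
  disable_ddl_transaction!

  def up
    # Raw SQL because ActiveRecord can't combine `if_not_exists: true` with
    # `algorithm: :concurrently`; IF NOT EXISTS keeps the migration idempotent.
    execute "CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_example ON examples (value)"
  end

  def down
    execute "DROP INDEX IF EXISTS idx_example"
  end
end
```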
The duration column for topic_timers has been ignored since commit
4af77f1e38;
we use duration_minutes instead.
Also removes the duration key from Topic.set_or_create_timer. The only
plugin to use this was discourse-solved, which doesn't use it any longer
since
c722b94a97
In the previous commit 5222247
we added a topic_id column to EmailLog. This simply backfills it in
batches. The next PR will get rid of the topic method defined on EmailLog in favour
of belongs_to.
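A minimal sketch of a batched backfill like the one described (batch size and SQL are illustrative; `DB` is Discourse's MiniSql interface):

```
# Walk the table in primary-key ranges so each UPDATE touches a bounded set
# of rows and locks are short-lived.
max_id = DB.query_single("SELECT COALESCE(MAX(id), 0) FROM email_logs").first
last_id = 0
step = 50_000
while last_id < max_id
  DB.exec(<<~SQL)
    UPDATE email_logs el
    SET topic_id = p.topic_id
    FROM posts p
    WHERE p.id = el.post_id
      AND el.topic_id IS NULL
      AND el.id > #{last_id} AND el.id <= #{last_id + step}
  SQL
  last_id += step
end
```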
* FEATURE: Staff can receive pending user reminders more frequently.
We now express the "pending_users_reminder_delay" in minutes instead of hours so staff can have finer control over the delay.
We need to keep in mind that the reminders could still take up to 20 minutes, even when using a lower value. We send them from a scheduled job.
* Migrate to a new site setting for the reminders delay
For now this only implements the server side of it, as my need is for automation purposes. However it should probably be added to the UI too, as it's unexpected to have pinned_until and no bannered_until.
Having a large number of post-deploy migrations running out-of-numerical-sequence with pre-deploy migrations can be problematic. For example, if we have the sequence
- db/migrate/2017... - add column
- db/post_migrate/2018... - drop the column
- db/migrate/2021... - add the same column again
It will work fine in numerical order. But if you run the pre-deploy migrations **followed by** the post-deploy migrations, you will not get the same result.
Our post-deploy system is designed to allow for seamless upgrades of Discourse. However, it is reasonable for us to only support this totally seamless experience for a limited period of time. This commit moves all post_deploy migrations which are more than 1 year old (i.e. more than 2 major Discourse versions ago) into the regular pre-deploy migrations directory. This limits the impact of any edge cases caused by out-of-numerical-sequence migrations.
This adds the following columns to EmailLog:
* cc_addresses
* cc_user_ids
* topic_id
* raw
This is to bring the EmailLog table closer in parity to
IncomingEmail so it can be better utilized for Group SMTP
and IMAP mailing.
The raw column contains the full content of the outbound email,
but _only_ if the new hidden site setting
enable_raw_outbound_email_logging is enabled. Most sites do not
need it, and it's mostly required for IMAP and SMTP sending.
In the next pull request, there will be a migration to backfill
topic_id on the EmailLog table, at which point we can remove the
topic fallback method on EmailLog.
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration to add a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
Adds a new `smtp_group_id` column to `EmailLog` which is filled in if the mail `from_address` matches a group's `email_username`. This is for easier debugging, so we know which emails have been sent via group SMTP.
Over the years we have found that a few communities never discovered tags.
Instead of having them default off we now have them default on, ensuring
that everyone finds out about them.
Co-authored-by: Dan Ungureanu <dan@ungureanu.me>
Users who use encoded slugs on their sites sometimes run into a 500 error when pasting a link to another topic in a post. The problem happens when generating a backward "reflection" link that would appear in a linked topic. The link URL is restricted at the database level to 500 chars in length. At first glance, it should work since we have a restriction on topic title length.
But it doesn't work when a site uses encoded slugs, like here (take a look at the URL). The link to a topic, in this case, can be much longer than 500 characters.
By the way, the error happens only when generating a "reflection" link and doesn't happen with a direct link, because we truncate direct links. That works since, in this case, the original long link is still present in the post body and can be used for navigation. But we can't do the same for backward "reflection" links (without rewriting their implementation); the whole link must be saved to the database.
The simplest and cleanest solution is just to remove the restriction at the database level. Abuse is impossible here since we are already protected by the restriction on topic title length. There aren't performance benefits in using length-constrained columns in Postgres; in fact, length-constrained columns need a few extra CPU cycles to check the length when storing data.
It was not clear that "replace" watched words can be used to replace text
with URLs. This introduces a new watched word type that makes it easier
to understand.
Refactors `TrustLevel` and moves translations from server to client
Additional changes:
* "staff" and "admin" wasn't translatable in site settings
* it replaces a concatenated string with a translation
* uses translation for trust levels in users_by_trust_level report
* adds a DB migration to rename keys of translation overrides affected by this commit
The excerpt field in the database is constrained to 1000 chars in length. To support this constraint we added a restriction that the topic_excerpt_maxlength setting must be between 0 and 999.
Unfortunately, sometimes it doesn’t work because:
- topic_excerpt_maxlength restricts the length of the excerpt visible to the user. But we HTML-escape text before saving. If an excerpt contains &, it'll be stored as &amp;. One character for the user but 5 characters to save to the database. So if topic_excerpt_maxlength is set to 999 it's not so hard to have an excerpt of, for example, 1003 characters in length and run into this issue.
- It's possible to define a custom excerpt for a topic. Such excerpts bypass the length check. So if a user defines a too-long custom excerpt, they will run into this issue.
Removing the constraint on the database level solves the problem. But we still need the constraint for topic_excerpt_maxlength on the settings page, because too-long excerpts would make the UI wonky.