Discourse automatically sends a private message after a backup or
restore finishes. The private message used to contain the log inline
even when it was very long. A very long log is a problem because the
post can exceed the maximum allowed post length. When that happens,
Discourse will now try to create an upload containing the logs. If
that fails, it will trim the log and inline it.
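A minimal sketch of that fallback, with the helper methods assumed for illustration:
```ruby
# Hedged sketch: inline short logs, upload long ones, trim as a last resort.
# try_create_log_upload and link_markdown_for are hypothetical helpers.
def format_logs_for_pm(logs)
  return logs if logs.length <= SiteSetting.max_post_length

  upload = try_create_log_upload(logs)
  return link_markdown_for(upload) if upload

  logs.truncate(SiteSetting.max_post_length) # trimmed inline log
end
```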
Inlining secure images with the same name was not possible because they
were indexed by filename. If an email contained two files with the same
name, only the first image was used for both of them; the other file
remained attached to the email.
When the Reply-To header is present for incoming emails, we
want to use it instead of the From address. This is usually the
case when forwarding an email via a mailing list into Discourse.
For now we are only using the Reply-To header if the email has
been forwarded via Google Groups, which is why we are checking the
X-Original-From header too. In the future we may want to use the
Reply-To header in more cases.
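A hedged sketch of that precedence; the method and guard are illustrative, not Discourse's exact receiver code:
```ruby
# Only trust Reply-To when the mail was forwarded via Google Groups,
# which we detect via the presence of X-Original-From.
def effective_from_address(mail)
  if mail["Reply-To"] && mail["X-Original-From"]
    mail["Reply-To"].value
  else
    mail.from.first
  end
end
```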
Searching in a category looked only one level down, ignoring the
max_category_nesting site setting. The user interface also did not
support the third level of categories and did not display them in the
"Categorized" input of the advanced search options.
* FEATURE: Onebox can match engines based on the content_type
`FinalDestination` now returns the `content_type` of a resolved URL.
`Oneboxer` passes this value to `Onebox` itself. Onebox engines can now specify a `matches_content_type` regex of content_types that the engine can handle, regardless of the URL.
`ImageOnebox` will match URLs with a content type of `image/png`, `jpg`, `gif`, `bmp`, `tif`, etc.
This will allow images that exist at a URL without a file type extension to be correctly rendered, assuming a valid `content_type` is returned.
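One plausible shape of such an engine, assuming `matches_content_type` is a class-level DSL analogous to the existing `matches_regexp`:
```ruby
# Hedged sketch of an engine opting into content-type matching.
class Onebox::Engine::ImageOnebox
  include Onebox::Engine

  # Match any resolved Content-Type that looks like an image, regardless
  # of whether the URL ends in a file extension.
  matches_content_type(/\Aimage\/(png|jpe?g|gif|bmp|tiff?)/)
end
```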
When a post is created, the draft sequence is increased and then older
drafts are automatically deleted by executing a raw SQL query. This
skipped the Draft model callbacks and did not update the user's draft
count.
I also fixed another problem related to a raw SQL query in the
Draft.cleanup! method.
We shouldn't be checking if a user is allowed to do an action in the logger. We should be checking it just before we perform the action. In fact, guardians in the logger can make things even worse in case of a security bug. Let's say we forgot to check user's permissions before performing some action, but we still have a call to the guardian in the logger. In this case, a user would perform the action anyway, and this action wouldn't even be logged!
I've checked all cases and I confirm that we're safe to delete these calls from the logger.
I've added two calls to guardians in admin/user_controller. We didn't have security bugs there, because regular users can't access admin/... routes at all. But it's good to have calls to the guardian in these methods anyway; neighboring methods have them.
This adds a few different things to allow for direct S3 uploads using uppy. **These changes are still not the default.** There are hidden `enable_experimental_image_uploader` and `enable_direct_s3_uploads` settings that must be turned on for any of this code to be used, and even if they are turned on only the User Card Background for the user profile actually uses uppy-image-uploader.
A new `ExternalUploadStub` model and database table is introduced in this pull request. This is used to keep track of uploads that are uploaded to a temporary location in S3 with the direct to S3 code, and they are eventually deleted a) when the direct upload is completed and b) after a certain time period of not being used.
### Starting a direct S3 upload
When an S3 direct upload is initiated with uppy, we first request a presigned PUT URL from the new `generate-presigned-put` endpoint in `UploadsController`. This generates an S3 key in the `temp` folder inside the correct bucket path, along with any metadata from the clientside (e.g. the SHA1 checksum described below). This will also create an `ExternalUploadStub` and store the details of the temp object key and the file being uploaded.
Once the clientside has this URL, uppy will upload the file direct to S3 using the presigned URL. Once the upload is complete we go to the next stage.
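A hedged sketch of generating that presigned PUT URL with aws-sdk-s3; the key layout and metadata names are illustrative, not Discourse's exact code:
```ruby
require "aws-sdk-s3"
require "securerandom"

bucket = Aws::S3::Resource.new(region: "us-east-1").bucket("uploads-bucket")
temp_key = "temp/#{SecureRandom.hex}/#{filename}"

presigned_put_url = bucket.object(temp_key).presigned_url(
  :put,
  expires_in: 10 * 60,
  metadata: { "sha1-checksum" => client_sha1 } # supplied by the UppyChecksum plugin
)
# The client then PUTs the file body directly to presigned_put_url.
```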
### Completing a direct S3 upload
Once the upload to S3 is done we call the new `complete-external-upload` route with the unique identifier of the `ExternalUploadStub` created earlier. Only the user who made the stub can complete the external upload. One of two paths is followed via the `ExternalUploadManager`.
1. If the object in S3 is too large (currently 100MB, defined by `ExternalUploadManager::DOWNLOAD_LIMIT`) we do not download it or generate the SHA1 for that file. Instead we create the `Upload` record via `UploadCreator` and simply copy it to its final destination on S3, then delete the initial temp file. Several modifications to `UploadCreator` have been made to accommodate this.
2. If the object in S3 is small enough, we download it. When the temporary S3 file is downloaded, we compare the SHA1 checksum generated by the browser with the actual SHA1 checksum of the file generated by Ruby. The browser SHA1 checksum is stored on the object in S3 as metadata, and is generated via the `UppyChecksum` plugin. Keep in mind that some browsers will not generate this due to compatibility or other issues.
We then follow the normal `UploadCreator` path with one exception. To cut down on having to re-upload the file again, if there are no changes (such as resizing) to the file in `UploadCreator` we follow the same copy + delete temp path that we do for files that are too large.
3. Finally, we return the serialized upload record back to the client.
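A hedged sketch of that branching; only `DOWNLOAD_LIMIT` is taken from the PR, the other names are assumptions:
```ruby
require "digest"

def complete_external_upload(stub)
  if stub.filesize > ExternalUploadManager::DOWNLOAD_LIMIT
    promote_to_destination(stub) # path 1: copy + delete temp, never downloaded
  else
    file = download_temp_object(stub) # path 2
    browser_sha1 = stub.metadata["sha1-checksum"] # may be absent on some browsers
    actual_sha1 = Digest::SHA1.file(file.path).hexdigest
    raise ChecksumMismatchError if browser_sha1 && browser_sha1 != actual_sha1
    create_upload_from(file, stub) # normal UploadCreator path
  end
end
```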
There are several errors that could happen that are handled by `UploadsController` as well.
Also in this PR is some refactoring of `displayErrorForUpload` to handle both uppy and jquery file uploader errors.
In badge queries for 'First Share' and 'Nice/Good/Great Share' badges,
check that the user exists.
For 'Nice+ Share' badges, also grant badges if the number of shares is
equal to the threshold count, to better match the descriptions.
The cache_fullpath for Stylesheet::Manager was the same for
every test runner in a parallel test environment, so when other
specs or other places (e.g. the stylesheets_controller_spec) ran
`rm -rf` on Stylesheet::Manager.cache_fullpath, this caused errors
for other specs that went through the
Stylesheet::Manager::Builder#compile path:
```
Errno::ENOENT:
No such file or directory @ rb_sysopen
```
Also fixed the stylesheets_controller, which was interpolating Rails.root + CACHE_PATH
itself instead of just using Stylesheet::Manager.cache_fullpath.
Prior to this fix, users who posted whispers in personal messages were
revealed in the topic's participants list even though non-staff users
are unable to see the whispers.
Using an invalid value was allowed. This commit tries to automatically
fix the color by adding a missing # symbol, or shows an error to the
user if that is not possible and the value is not a valid CSS color either.
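A hedged sketch of that normalization; Discourse's actual validation differs in detail, and `valid_css_color?` is hypothetical:
```ruby
def normalize_color(value)
  return value if value.match?(/\A#\h{3}(\h{3})?\z/)      # already valid hex
  return "##{value}" if value.match?(/\A\h{3}(\h{3})?\z/) # add the missing '#'
  value if valid_css_color?(value) # nil here means we show an error instead
end
```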
Not specifying an `Accept-Language` should be equivalent to specifying an `Accept-Language` of `*`, however some webservers seem to prefer it if we are explicit about being able to handle a response of content in any language.
There were a bunch of warnings repeated over and over during spec runs:
```
/var/www/discourse/lib/tasks/release_note.rake:3: warning: already initialized constant DATE_REGEX
/var/www/discourse/lib/tasks/release_note.rake:3: warning: previous definition of DATE_REGEX was here
/var/www/discourse/lib/tasks/release_note.rake:5: warning: already initialized constant CHANGE_TYPES
/var/www/discourse/lib/tasks/release_note.rake:5: warning: previous definition of CHANGE_TYPES was here
```
When the Forever option is selected for suspending a user, the user is suspended for 1000 years. Without customizing the site’s text, this time period is displayed to the user in the suspension email that is sent to the user, and if the user attempts to log back into the site. Telling someone that they have been suspended for 1000 years seems likely to come across as a bad attempt at humour.
This PR special-cases the messages used when a user is suspended or silenced forever.
This change largely targets dev users, but it could potentially change
behaviour in production.
Jamie Wilson & I debugged a problem where the "should not be larger than the
maximum thumbnail size" spec would fail due to timeouts.
On our systems, on ImageMagick 7.1.0-2, with inkscape installed, IM would
attempt to rasterise the svg then check the resulting filesize, causing the
test to timeout.
As of now, we haven't found a way to make this behave better, but we have a
workaround: forcing IM to use the internal renderer (`MSVG:`) seems to
make it perform the same on development workstations as it does in our docker
container.
Configuring staged users to watch categories and tags is a way to sign
them up to get many emails. These emails may be unwanted and get marked
as spam, hurting the site's email deliverability.
Users can opt in to email notifications by logging on to their
account and configuring their own preferences.
If staff need to be able to configure these preferences on behalf of
staged users, the "allow changing staged user tracking" site setting
can be enabled. Default is to not allow it.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
* DEV: Improve rake `release_note:generate` date handling
A commit-ish value like HEAD@{2021-01-01} is based on the **local state** of HEAD on that date. It does not use dates attached to commits.
Instead, the rake task now detects date-like strings and supplies them to `git log` via the `--after` and `--before` flags
* Skip printing plugin when there are no changes found
A list of skipped plugins is printed on a single line at the end of the output
After every new random record created by the `dev:populate` rake task, a Discourse event is now triggered so that plugins can modify the records if needed.
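A hedged sketch of a plugin hooking such an event; the event name here is hypothetical, check the rake task for the real one:
```ruby
DiscourseEvent.on(:dev_populate_record_created) do |record|
  # e.g. tag seeded topics so the plugin can recognize them later
  if record.is_a?(Topic)
    record.custom_fields["seeded_by"] = "my_plugin"
    record.save_custom_fields
  end
end
```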
Fixes two issues:
- ignores invalid XML in custom icon sprite SVG file (and outputs an error if sprite was uploaded via admin UI)
- clears SVG sprite cache when deleting an `icons-sprite` upload in a theme
By default, Twitter will return the URL for the avatar image of the tweet poster as the `og:image` value.
However, if the `user_generated` attribute is true, we should not use this as the avatar URL, as it will be a URL of an image in the tweet itself (e.g., an image belonging to a tweeted news story).
If a user had a staged account and logged in using a third-party
service, a different username was suggested. This change will try to
use the username given by the authentication provider first, then the
current staged username, and only then suggest a new one.
The date shown in the topic timeline was one day later if the post at
that position was made near midnight. This happened because the number
of days was rounded down.
User flair was given by the user's primary group. This PR separates the
two, adds a new field to the user model for the flair group ID, and
lets users select their flair from their user preferences.
This TODO comment has existed for 8 years. Sort must be working just
fine or we would have prioritized fixing it.
Removing this comment as a tiny step toward keeping our codebase nice
and tidy.
This PR adds uppy to the project with a custom JS build and the shims needed to import it into our JS code. We need a custom build of Uppy because we do not use webpack for our JS modules/build. The only way to get what you want from Uppy is to use the webpack modules or to include the entire Uppy project including all plugins in a single JS file. This way we can just use the plugins we actually want. Future PRs will actually use Uppy!
Take 2 of https://github.com/discourse/discourse/pull/13466.
Fixes a few issues with the original PR:
- color definition stylesheet target now includes the theme id, to avoid themes set to use the default color scheme loading the same stylesheet
- changes the internal cache key for color definition stylesheet to reset the pre-existing cache
`bin/rake annotate` is an alias of `bin/annotate --models`
`bin/rake annotate:clean` generates annotations by using a temporary, freshly migrated database. This should help us to produce more consistent annotations, even if development databases have been polluted by plugin migrations.
A GitHub actions task is also added which generates annotations on a clean database, and raises an error if they differ from the committed annotations.
We renamed the site setting for this long ago, but there
were a few places left in the code base where "ninja edit"
needed to be turned into "grace period". Doing this here
to avoid combative language.
We recently changed (https://github.com/discourse/discourse/pull/13407) the behaviour of the topic-level bookmark button. That PR made the button open the edit bookmark modal when there is only one bookmark on the topic, instead of just removing that bookmark as it did before.
This PR fixes the following problems that weren't taken into account in the previous PR:
1. Everything should work fine even on very big topics where a bookmarked post is unloaded from the post stream. I've added code that loads the post we need and makes everything work as expected.
2. When at least one bookmark on the topic has a reminder, we should always be showing the icon with a clock on the topic level bookmark button
3. We should show correct tooltips for the topic level bookmark button
This PR makes several changes to the group SMTP email contents to make it look more like a support inbox message.
* Remove the context posts, they only add clutter to the email and replies
* Display email addresses of staged users instead of odd generated usernames
* Add a "please reply above this line" message to sent emails
This PR backtracks a fair bit on this one https://github.com/discourse/discourse/pull/13220/files.
Instead of sending the group SMTP email for each user via `UserNotifications`, we are changing to send only one email with the existing `Jobs::GroupSmtpEmail` job and `GroupSmtpMailer`. We are changing this job and mailer along with `PostAlerter` to make the first topic allowed user the `to_address` for the email and any other `topic_allowed_users` to be the CC address on the email. This is to cut down on emails sent via SMTP, which is subject to daily limits from providers such as Gmail. We log these details in the `EmailLog` table now.
In addition to this, we have changed `PostAlerter` to no longer rely on email addresses from incoming email records when sending the `GroupSmtpEmail` job. This was unreliable, as a user's email could have changed in the meantime. It was also a little overcomplicated to use the incoming email records -- it is far simpler to reason about just using topic allowed users.
This also adds a fix to include cc_addresses in the EmailLog.addressed_to_user scope.
We are a few versions behind on this gem. We need to update it
for S3 multipart uploads. In the current version we are using, we
cannot do this:
```ruby
Discourse.store.s3_helper.object(key).presigned_url(:upload_part, part_number: 1, upload_id: multipart_upload_id)
```
The S3 client raises an error, saying the operation is undefined. Once
I updated the gem, this operation worked as expected and returned a
presigned URL for the upload_part operation.
Also remove use of Aws::S3::FileUploader::FIFTEEN_MEGABYTES.
This was part of a private API and should not have been used.
The `themes:isolated_test` rake task will now unset all `DISCOURSE_*` env variables if the `UNSET_DISCOURSE_ENV_VARS` env var is set. It will also spin up a temporary redis server so the unicorn web server that's spun up for the tests doesn't leak into the "main" redis server.
Previously, we were storing custom svg sprite paths in the cache. This is a problem because sprites in themes get stored as uploads, and the returned paths were files in the temporary download cache which could sometimes be cleaned up, resulting in a broken cache.
I previously tried to fix this by skipping the missing files and clearing the cache, but that didn't work out well with CDNs. This PR stores the contents of the files in the custom_svg_sprites cache to avoid the problem of missing temp files.
Also, plugin custom icons are only included if the plugin is enabled.
In #12841, we started setting the ReviewableQueuedPost's target and topic after approving it, instead of storing them in the payload. As a result, the reviewable_counts query started to include queued posts.
When a category is set to require approval, every post has an associated reviewable. Pointing out that each post has an associated queued post is redundant in this case, so I added a WHERE clause to skip them.
Having a large number of post-deploy migrations running out-of-numerical-sequence with pre-deploy migrations can be problematic. For example, if we have the sequence
- db/migrate/2017... - add column
- db/post_migrate/2018... - drop the column
- db/migrate/2021... - add the same column again
It will work fine in numerical order. But if you run the pre-deploy migrations **followed by** the post-deploy migrations, you will not get the same result.
Our post-deploy system is designed to allow for seamless upgrades of Discourse. However, it is reasonable for us to only support this totally seamless experience for a limited period of time. This commit moves all post_deploy migrations which are more than 1 year old (i.e. more than 2 major Discourse versions ago) into the regular pre-deploy migrations directory. This limits the impact of any edge cases caused by out-of-numerical-sequence migrations.
This adds the following columns to EmailLog:
* cc_addresses
* cc_user_ids
* topic_id
* raw
This is to bring the EmailLog table closer in parity to
IncomingEmail so it can be better utilized for Group SMTP
and IMAP mailing.
The raw column contains the full content of the outbound email,
but _only_ if the new hidden site setting
enable_raw_outbound_email_logging is enabled. Most sites do not
need it, and it's mostly required for IMAP and SMTP sending.
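A sketch of the corresponding migration; the column types here are assumptions based on the descriptions above:
```ruby
class AddSmtpDebuggingColumnsToEmailLogs < ActiveRecord::Migration[6.1]
  def change
    add_column :email_logs, :cc_addresses, :text
    add_column :email_logs, :cc_user_ids, :integer, array: true
    add_column :email_logs, :topic_id, :integer
    add_column :email_logs, :raw, :text
  end
end
```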
In the next pull request, there will be a migration to backfill
topic_id on the EmailLog table, at which point we can remove the
topic fallback method on EmailLog.
The other processing operations, such as fixing orientation or cropping,
can in rare cases increase the size of the uploaded image. Running the
downsize step after all these operations should create the best image
possible.
We had checks for the Chrome binary in 3 different places
for tests, and only one of them checked for google-chrome-stable,
which is problematic for Arch Linux users (there are dozens of us!).
This PR moves all the code to one place and references it instead
of copying and pasting.
Before this change, calling `StyleSheet::Manager.stylesheet_details`
for the first time resulted in multiple queries to the database. This is
because the code was modelled in a way where each `Theme` was loaded
from the database one at a time.
This PR restructures the code such that it allows us to load all the
theme records in a single query. It also allows us to eager load the
required associations upfront. In order to achieve this, I removed the
support of loading multiple themes per request. It was initially added
to support user selectable theme components but the feature was never
completed and abandoned because it wasn't a feature that we thought was
worth building.
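A hedged sketch of the pattern: one query with the associations eager loaded up front (association names are illustrative):
```ruby
themes = Theme
  .includes(:theme_fields, :color_scheme)
  .where(id: theme_ids)
  .to_a
```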
We already reject email replies to public topics via `SiteSetting.disallow_reply_by_email_after_days` and raising the `OldDestinationError`. This PR introduces similar behaviour for group inboxes, but without the rejection, and **only when SMTP is enabled for the group**.
If a reply is sent via email and the post is older than `SiteSetting.disallow_reply_by_email_after_days` days, then we create a new topic instead of making a reply in the old one and link back to the original topic. This is done to prevent long-running group inbox discussions.
If a user posted a URL that appeared inside a Onebox, then the user
got a duplicate link notice. This was fixed by skipping those links in
Ruby.
If a user posted a URL that was Oneboxed and contained other links that
appeared in previous posts, then the user got a duplicate link notice.
This was fixed by skipping those links in JavaScript.
Before this fix we would display this exception:
```
Discourse::InvalidParameters:
value
```
After this fix we will display:
```
Discourse::InvalidParameters:
Invalid `x` value for `s3_region`
```
When we are emailing people from a group inbox, we are having
a PM conversation with them, as a support account would. In this
case mailing list headers do not make sense. It is not like a forum
topic where you may have tens or hundreds of participants -- it is a
conversation between the group and a small handful of people
directly contacting the group, often just one person.
The only header left intact was List-Unsubscribe, which is important
for letting people opt out of notifications.
* FIX: Allow SVG uploads if dimensions are a fraction of a unit
`UploadCreator` counts the number of pixels in a file to determine if it is valid. `pixels` is calculated by multiplying the width and height of the image, as determined by FastImage.
SVG files can have their width/height expressed in a variety of different units of measurement. For example, ‘px’, ‘in’, ‘cm’, ‘mm’, ‘pt’, ‘pc’, etc are all valid within SVG files. If an image has a width of `0.5in`, FastImage may interpret this as being a width of `0`, meaning it will report the `size` as being `0`.
However, we don’t need to concern ourselves with the number of ‘pixels’ in an SVG file, as that is irrelevant for this file format, so we can skip the `pixels == 0` check when processing this file type.
* DEV: Speed up getting SVG dimensions
The `-ping` flag prevents the entire image from being rasterized before a result is returned. See:
https://imagemagick.org/script/command-line-options.php#ping
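A hedged sketch of reading SVG dimensions this way; Discourse's actual invocation may differ:
```ruby
require "shellwords"

# -ping reports metadata without rasterizing the whole image first
dimensions = `identify -ping -format "%w %h" #{Shellwords.escape(svg_path)}`
width, height = dimensions.split.map(&:to_i)
```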
The purpose of this is to allow us to catch regressions for a feature we've built recently that allows theme tests to run in production. We recently had a regression that we didn't notice for days, so to prevent that from happening again we'll use this in our internal CI pipelines.
The first thing we needed here was an enum rather than a boolean to determine how a directory_column was created. Now we have `automatic`, `user_field` and `plugin` directory columns.
This plugin API assumes that the plugin has added a migration to add a column to the `directory_items` table.
This was created to be initially used by discourse-solved. PR with API usage - https://github.com/discourse/discourse-solved/pull/137/
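A hedged sketch of plugin usage; `solutions_count_query` is hypothetical, and the column is assumed to exist via a plugin migration (see the discourse-solved PR above for the real call):
```ruby
after_initialize do
  # "solutions" is the directory_items column the plugin's migration added
  add_directory_column("solutions", query: solutions_count_query)
end
```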
Profiling showed that we were spending roughly 10% of a request's time creating all
the ActiveRecord objects for categories in the `Site` model, on a site with 61 categories.
Instead of querying for the categories each time based on which categories the user can see,
we can just preload all of the categories upfront and filter out the
categories that the user can not see.
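A hedged sketch of the preload-then-filter approach; the guardian call is illustrative:
```ruby
all_categories = Category.includes(:uploaded_logo, :uploaded_background).to_a
visible = all_categories.select { |c| guardian.can_see_category?(c) }
```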
When dismissing new topics for the Tracked filter, the dismiss was
limited to 30 topics which is the default per page count for TopicQuery.
This happened even if you specified which topic IDs you were
selectively dismissing. This PR fixes that bug, and also moves
the per_page_count into a DEFAULT_PER_PAGE_COUNT for the TopicQuery
so it can be stubbed in tests.
Also moves the unused stub_const method into the spec helpers
for cases like this; it is much better to handle this in one place
with an ensure. In a follow up PR I will clean up other specs that
do the same thing and make them use stub_const.
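A hedged sketch of such a stub_const helper, restoring the constant in an ensure block as described:
```ruby
def stub_const(owner, const_name, value)
  original = owner.const_get(const_name)
  owner.send(:remove_const, const_name)
  owner.const_set(const_name, value)
  yield
ensure
  owner.send(:remove_const, const_name)
  owner.const_set(const_name, original)
end

# stub_const(TopicQuery, :DEFAULT_PER_PAGE_COUNT, 2) { ... }
```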
Dir.glob does not guarantee file order and can change when run on different machines.
This means that running asset precompilation on the exact same codebase will output
different content hashes.
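A minimal illustration of why sorting matters for deterministic hashes:
```ruby
require "digest"

# Unsorted, the glob order (and thus the digest) can differ between machines;
# sorted, the digest is stable for the same codebase.
files = Dir.glob("app/assets/**/*.js").sort
content_hash = Digest::SHA1.hexdigest(files.map { |f| File.read(f) }.join)
```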
Uploading lots of small files can be made significantly faster by parallelizing the `s3.put_object` calls. In testing, an UPLOAD_CONCURRENCY of 10 made a large restore 10x faster. An UPLOAD_CONCURRENCY of 20 made the same restore 18x faster.
This commit is careful to parallelize as little as possible, to reduce the chance of concurrency issues. In the worker threads, no database transactions are performed. All modification of shared objects is controlled with a mutex.
Unfortunately we do not have any existing tests for the `ToS3Migration` class. This change has been tested with a large site backup (120k uploads totalling 45GB).
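A hedged sketch of the concurrency pattern described above; the names are illustrative, not `ToS3Migration`'s exact code:
```ruby
UPLOAD_CONCURRENCY = 10

queue = Queue.new
file_paths.each { |path| queue << path }
mutex = Mutex.new
uploaded = []

threads = UPLOAD_CONCURRENCY.times.map do
  Thread.new do
    # Workers do no database work; only the S3 call and a guarded append.
    while (path = (queue.pop(true) rescue nil))
      s3_client.put_object(bucket: bucket_name, key: key_for(path), body: File.open(path))
      mutex.synchronize { uploaded << path }
    end
  end
end
threads.each(&:join)
```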
When we call Bookmark.cleanup! we want to make sure that
topic_user.bookmarked is updated for topics linked to the
bookmarks that were deleted. Also when PostDestroyer calls
destroy and recover. We have a job for this already --
SyncTopicUserBookmarked -- so we just utilize that.
There are 2 changes in this PR:
1) Add a new environment variable called `DISCOURSE_SKIP_CSS_WATCHER` to disable our stylesheet watcher, and make the `qunit:test` rake task set this variable on the Unicorn/Rails server it spins up to disable our stylesheet watcher when running the tests because it doesn't really need it.
2) Print more Chrome logs (such as network/security errors) to the console.
Since we use the event to perform additional validations on the file, we should check if it added any errors to the upload before saving it. This change makes the UploadCreator more consistent since we no longer have to rely on exceptions.
Adds a new `smtp_group_id` column to `EmailLog` which is filled in if the mail `from_address` matches a group's `email_username`. This is for easier debugging, so we know which emails have been sent via group SMTP.
Fixes our backend spec suite in GitHub Actions CI. For more information about the Docker issue see: https://github.com/docker/for-linux/issues/1015
(It's possible that this error could also happen in dev/production, though thankfully that hasn't happened yet as far as I know.)
When replying to a user_private_message email originating from
a group PM that does _not_ have a reply key (e.g. when replying
directly to the group's SMTP address), we were mistakenly linking
the new post created from the reply to the OP and the user who
created the topic, based on the first IncomingEmail message ID in
the topic, rather than using the correct reply to user and post number
that the user actually replied to.
We now use the In-Reply-To header to look up the corresponding EmailLog
record when the user who replied was sent a user_private_message email,
and use the post from that as the reply_to_user/post.
This also removes superfluous filtering of incoming_email records. After
already filtering by message_id and then addressed_to_user (which only
returns incoming emails where the to, from, or cc address includes any
of the user's emails), we were filtering again but in the ruby code for
the exact same conditions. After removing this all existing tests still
pass.
If force_https is enabled, all resources (including the markdown preview and so on) will be accessed using HTTPS.
If for any reason you attempt to link to content that is not reachable over HTTPS, that content may appear broken.
When `Theme#all_theme_variables` returns an empty array, we were running
a pointless query in `StyleSheet::Manager#uploads_digest`.
`SELECT "sha1" FROM "theme_fields" INNER JOIN "uploads" ON
"uploads"."id" = "theme_fields"."upload_id" WHERE 1=0`
Previously, the `transformed.blah` shortcut could only be used in top-level hbs statements like `{{transformed.blah}}`. When attempting to use it in a sub-expression like `{{concat "hello" transformed.world}}`, it would raise a "transformed is not defined" error.
This commit updates the shortcut logic to make `transformed.blah` and `attrs.blah` work consistently in all hbs expressions.
Co-authored-by: Jordan Vidrine <jordan@jordanvidrine.com>
Generating the client settings JSON involves sanitizing all string-based
site settings. This is expensive per our production profile (~120ms), and
one request after each deploy has to pay this penalty.