Previously we would use the date the post was updated at as the grant date, which caused confusion.
This also tidies up the badges SQL file, which was using outdated patterns for multi-line strings.
A race condition is possible when multiple threads/processes call this method.
`ls` prints "cannot access '...': No such file or directory" to stderr if any of the files it is trying to list are being removed by the `xargs rm -rf` in another process. That doesn't affect the result, but it did raise an error before this change.
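One way to tolerate that noise, as a sketch (not necessarily the exact change made here; the method and directory names are illustrative):

```ruby
require "shellwords"

# Illustrative only: list a directory while another process may be deleting
# entries from it. Redirecting stderr to /dev/null swallows the harmless
# "cannot access '...': No such file or directory" messages from `ls`.
def list_entries(dir)
  `ls -A #{Shellwords.escape(dir)} 2>/dev/null`.split("\n")
end
```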
Tested on a production instance where the original issue was observed.
Co-Authored-By: Régis Hanol <regis@hanol.fr>
This allows us to use `sourceURL` which otherwise does not work. In the
future we hope to have proper source maps in development mode and
disable this again.
When pull_hotlinked_images tried to run on posts with secure media (which had already been downloaded from external sources) we were getting a 404 when trying to download the image because the secure endpoint doesn't allow anon downloads.
Also, we were getting into an infinite loop of pull_hotlinked_images because the job didn't consider the secure media URLs as "downloaded" already so it kept trying to download them over and over.
In this PR I have also refactored the secure-media-upload URL checks and mutations into a single source of truth in Upload, adding a SECURE_MEDIA_ROUTE constant to check URLs against.
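The consolidated check is simple in shape. A sketch (the module form, method name and route string below are assumptions standing in for the real Upload code):

```ruby
# Sketch of a single source of truth for "is this a secure-media URL?";
# in the real code this lives on the Upload model.
module SecureMediaUrl
  SECURE_MEDIA_ROUTE = "secure-media-uploads"

  def self.secure_media_url?(url)
    url.to_s.include?(SECURE_MEDIA_ROUTE)
  end
end

SecureMediaUrl.secure_media_url?("/secure-media-uploads/original/1X/abc.png") # => true
```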
* DEV: Add a fake Mutex for concurrency testing with Fibers (a rough sketch appears further down in this message)
* DEV: Support running in sleep order in concurrency tests
* FIX: A separate FallbackHandler should be used for each redis pair
This commit refactors the FallbackHandler and Connector:
* There were two different ways to determine whether the redis master
was up. There is now one way and it is the responsibility of the
new RedisStatus class.
* A background thread would be created whenever `verify_master` was
called unless the thread already existed. The thread would
periodically check the status of the redis master. However, checking
that a thread is `alive?` is an ineffective way of determining
whether it will continue to check the redis master in the future
since the thread may be in the process of winding down.
Now, this thread is created when the recorded master status goes from
up to down. Since this thread runs the only part of the code that is
able to bring the recorded status up again, we ensure that only one
thread is probing the redis master at a time and that there is always
a thread probing redis master when it is recorded as being down.
* Each time the status of the redis master was checked periodically, it
would spawn a new thread and immediately join on it. I assume this
happened to isolate the check from the current execution, but since
the join rethrows exceptions in the parent thread, this was not
effective.
* The logic for falling back was spread over the FallbackHandler and
the Connector. The connector is now a dumb object that delegates
responsibility for determining the status of redis to the
FallbackHandler.
* Previously, failing to connect to a master redis instance when it was
not recorded as down would raise an exception. Now, this exception is
passed to `Discourse.warn_exception` and the connection is made to
the slave.
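To make the up-to-down transition described above concrete, here is a minimal standalone sketch of the idea, not the actual class (names and the polling interval are assumptions):

```ruby
# The probe thread is spawned only on the up -> down transition, so at most
# one thread polls a given master at a time, and a probe thread always exists
# while the master is recorded as down.
class MasterMonitor
  def initialize(check)
    @check = check        # callable that returns true when the master responds
    @master_up = true
    @mutex = Mutex.new
  end

  def up?
    @mutex.synchronize { @master_up }
  end

  def mark_down!
    @mutex.synchronize do
      return unless @master_up
      @master_up = false
      Thread.new { probe } # only spawned here, on the transition
    end
  end

  private

  def probe
    sleep 5 until @check.call
    @mutex.synchronize { @master_up = true }
  end
end
```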
This commit introduces the FallbackHandlers singleton:
* It is responsible for holding the set of FallbackHandlers.
* It adds callbacks to the fallback handlers for when a redis master comes up or goes down. Main redis and message bus redis may live on the same redis host or on different ones, so these callbacks may all be registered on the same FallbackHandler or on separate ones.
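Shape-wise, the singleton can be pictured like this (a sketch with assumed names; handlers are keyed by host and port so main redis and message bus can share one):

```ruby
require "singleton"

# Illustrative shape only: one fallback handler per distinct redis host/port
# pair, with callbacks registered on whichever handler owns that pair.
class FallbackHandlers
  include Singleton

  def initialize
    @mutex = Mutex.new
    @handlers = {}
  end

  def handler_for(config)
    key = [config[:host], config[:port]]
    @mutex.synchronize { @handlers[key] ||= FallbackHandler.new(config) }
  end
end
```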
These objects are tested using fake concurrency provided by the
Concurrency module:
* An `around(:each)` hook is used to cause each test to run inside a
Scenario so that the test body, mocking cleanup and `after(:each)`
callbacks are run in a different Fiber.
* Therefore, halting an Execution abruptly (so that the fibers aren't run to completion) prevents the mocking cleanup and `after(:each)` callbacks from running. I have tried to prevent this by recovering from all exceptions during an Execution.
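For illustration, the fake Mutex mentioned at the top of this message could look like the following in this cooperative style (a sketch only, not the real Concurrency module API):

```ruby
# Cooperative "mutex" for fiber-based concurrency tests: instead of blocking
# an OS thread, a contender yields back to the test scheduler until the lock
# is free. Names are illustrative.
class FakeMutex
  def initialize
    @owner = nil
  end

  def synchronize
    Fiber.yield while @owner   # let the scheduler run other fibers first
    @owner = Fiber.current
    yield
  ensure
    @owner = nil if @owner == Fiber.current
  end
end
```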
* FIX: Create frozen copies of passed in config where possible
* FIX: extract start_reset method and remove method used by tests
Co-authored-by: Daniel Waterworth <me@danielwaterworth.com>
Add TopicUploadSecurityManager to handle post moves. When a post moves around, or a topic changes category or switches between public topic and private message, the uploads connected to posts in the topic need to have their secure status updated, depending on the security context the topic now lives in.
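Conceptually the manager walks the topic's posts and re-evaluates each attached upload, along these lines (a sketch, not the real implementation; the ACL call is an assumed S3 helper):

```ruby
# Sketch: after a move, an upload should be secure exactly when the post it is
# attached to now sits in a secure-media context.
class TopicUploadSecurityManager
  def initialize(topic)
    @topic = topic
  end

  def run
    @topic.posts.find_each do |post|
      post.uploads.each do |upload|
        secure = post.with_secure_media?
        next if upload.secure == secure
        upload.update!(secure: secure)
        Discourse.store.update_upload_ACL(upload) if Discourse.store.external? # assumed helper
      end
    end
  end
end
```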
For consistency this PR introduces using custom markdown and short upload:// URLs for video and audio uploads, rather than just treating them as links and relying on the oneboxer. The markdown syntax for videos is ![file text|video](upload://123456.mp4) and for audio it is ![file text|audio](upload://123456.mp3).
This is achieved in discourse-markdown-it by modifying the rules for images in markdown-it via md.renderer.rules.image. We return HTML instead of the token when we encounter audio or video after the |, and the preview renders that HTML. Also, when uploading an audio or video file we insert the relevant markdown into the composer.
When we were pulling hotlinked images for oneboxes in the CookedPostProcessor, we were using the direct S3 URL, which returned a 403 error and thus did not set widths and heights of the images. We now cook the URL first based on whether the upload is secure before handing off to FastImage.
The `sourceURL` directive must be on the same line as the thing it's
referencing. This patch allows it to work again in development mode
because each Javascript file ends up in its own `define(...)` line.
It will strip out any trailing whitespace and put the `sourceURL`
comment on the same line and everything seems to work.
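A minimal sketch of that transform (not the actual patch):

```ruby
# Strip trailing whitespace so the sourceURL comment ends up on the same line
# as the code it annotates, then terminate the line.
def append_source_url(js, logical_path)
  "#{js.rstrip} //# sourceURL=#{logical_path}\n"
end
```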
Group membership and `CategoryUser` notification level should be
respected to determine whether to notify staged users about activity in
private categories, instead of only ever generating notifications for staged
users' own topics (which has been the behaviour since
0c4ac2a7bc)
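A hedged sketch of the intended check (simplified; the real notification code paths differ):

```ruby
# Notify a staged user about activity in a private category only when group
# membership lets them see the topic and they haven't muted the category.
def notify_staged_user?(user, topic)
  return false unless user.staged?
  return false unless Guardian.new(user).can_see?(topic)

  level = CategoryUser.find_by(user_id: user.id, category_id: topic.category_id)&.notification_level
  level != CategoryUser.notification_levels[:muted]
end
```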
* enqueue spam/dmarc failing emails instead of hiding
* add translations for dmarc/spam enqueued reasons
* unescape quote
* if email_in_authserv_id is blank return gray for all emails
On some customer forums we are randomly getting a "You must select a valid user" error when sending a PM even when all parameters seem to be OK. This is an attempt to track it down with more data.
ReviewableScore#types extends the PostActionType types with its own, storing the result inside a class variable. To avoid overwriting an existing flag, we need to calculate the next flag ID using these types instead of the PostAction ones. Since we first call the score types to calculate the ID, this list gets memoized, leaving us with an outdated list.
To fix this, we now reload ReviewableScore#types after replacing flags.
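The memoization at the heart of the problem has roughly this shape (illustrative only; the stand-in flag types below are not the real lists):

```ruby
class ReviewableScore
  def self.types
    # cached on first call, so flags replaced later are invisible until the
    # memoized value is dropped
    @types ||= flag_types.merge(needs_approval: 9)
  end

  def self.reload_types
    @types = nil
    types
  end

  def self.flag_types
    { spam: 8, inappropriate: 4 } # stand-in for the PostActionType flag types
  end
end
```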
Custom emoji, profile background, and card background were being set to secure, which we do not want as they are always in a public context and result in a 403 error from the ACL if linked directly.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file.
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set secure status on upload.
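The sha1 handling in the list above boils down to something like this (a sketch, assuming `upload` and a computed `sha1_digest` are in scope):

```ruby
require "securerandom"

# With secure media enabled, store a random 40-character value in `sha1` so
# the usual dedup-by-digest lookup never matches, and keep the real digest in
# `original_sha1`.
if SiteSetting.secure_media
  upload.original_sha1 = sha1_digest
  upload.sha1 = SecureRandom.hex(20) # 40 hex chars, same length as Upload::SHA1_LENGTH
else
  upload.sha1 = sha1_digest
end
```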
### Viewing Secure Media
* The secure-media-upload URL will take the post that the upload is attached to into account via `Guardian.can_see?` for access permissions
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues, as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
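Put together, the access check reads roughly like this (a simplified controller sketch; method names are assumptions):

```ruby
# Deliver a secure upload only if the current user can see the post that
# controls access to it; with no access control post, just serve the media.
def show_secure
  upload = Upload.find_by(sha1: params[:sha1])
  raise Discourse::NotFound if upload.blank?

  if upload.access_control_post.present?
    raise Discourse::InvalidAccess unless guardian.can_see?(upload.access_control_post)
  end

  redirect_to Discourse.store.url_for(upload) # assumed store helper for the real URL
end
```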
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
Some specs use psql to test database restores, and dropping the table after the test needs to happen outside of rspec because of transactions. The previous attempt led to some changes being stored in the test database.
FIX: raise a proper NotFound exception when filtering groups by username with an invalid username.
FIX: properly filter the groups based on current user visibility when viewing another user's groups.
DEV: Guardian.can_see_group?(group) is now using Guardian.can_see_groups(groups) instead of duplicating the same code.
FIX: spec for groups_controller#index when the group directory is disabled for a logged-in user.
FIX: groups_controller.sortable specs to actually test all sorting combinations.
DEV: s/response_body/body/g for slightly shorter spec code.
FIX: rewrote the "view another user's groups" specs to test all group_visibility and members_group_visibility combinations.
DEV: Various refactoring for cleaner and more consistent code.
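The de-duplication in the DEV note above amounts to a one-line delegation (sketch):

```ruby
# Guardian sketch: the singular check reuses the plural one instead of
# duplicating the visibility logic.
def can_see_group?(group)
  can_see_groups([group])
end
```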
An archive containing lots of small files could trigger an error even though the amount of decompressed data was way below the maximum allowed size. This happened because the decompression algorithm used the chunk size for calculating the remaining size instead of the actual size of the decompressed chunk.
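In code the fix is the difference between subtracting the fixed chunk size and subtracting what was actually produced (names are illustrative):

```ruby
# Track the bytes actually produced by each decompressed chunk; an archive of
# many small files yields chunks far smaller than the fixed chunk size, so
# subtracting CHUNK_SIZE exhausted the allowance long before the real limit.
remaining = max_decompressed_size
while (chunk = decompressor.read_chunk)
  remaining -= chunk.bytesize # previously: remaining -= CHUNK_SIZE
  raise "decompressed size limit exceeded" if remaining.negative?
  output.write(chunk)
end
```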
The QUnit rake task starts a server in test mode. We need a tweak to allow dynamic CSP hostnames in test mode. This tweak is already present in development mode.
To allow CSP to work, the browser host/port must match what the server sees. Therefore we need to disable the enforce_hostname middleware in test mode. To keep rspec and production as similar as possible, we skip enforce_hostname using an environment variable.
Also move the qunit rake task to use unicorn, for consistency with development and production.
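The skip might be wired up roughly like this (the environment variable name is an assumption; only the idea of gating the middleware on it comes from this change):

```ruby
# Leave the hostname-enforcing middleware out when the test-mode QUnit server
# must accept whatever host/port the browser uses, while keeping it for normal
# rspec runs and for production.
unless Rails.env.test? && ENV["DISCOURSE_SKIP_ENFORCE_HOSTNAME"] == "1"
  config.middleware.unshift(Middleware::EnforceHostname)
end
```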
* Add a rake task to disable secure media. This sets all uploads to `secure: false`, changes the upload ACL to public, and rebakes all the posts using the uploads to make sure they point to the correct URLs. This is in a transaction for each upload with the upload being updated the last step, so if the task fails it can be resumed.
* Also allow viewing media via the secure URL if secure media is disabled, redirecting to the normal CDN URL, because otherwise media links will be broken while we go and rebake all the posts + update ACLs
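The per-upload transaction described above can be pictured like this (a sketch; `make_object_public` is a hypothetical helper standing in for the ACL change, and the associations are assumed):

```ruby
# For each secure upload: make the stored object public, rebake the posts that
# use it, and flip the flag last so a failed run can simply be resumed.
Upload.where(secure: true).find_each do |upload|
  ActiveRecord::Base.transaction do
    make_object_public(upload)        # hypothetical helper for the ACL change
    upload.posts.find_each(&:rebake!) # repoint cooked HTML at the plain CDN URLs
    upload.update!(secure: false)     # last step, so the task is resumable
  end
end
```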