Also removes "appEventsCache". (and reduces the reported test memory usage by ~33%)
There's no longer any need to remove appEvent listeners in application-instance initializers' `teardown`, as app instances are recreated before each test (in both legacy and ember cli envs)
Uppy and Resumable slice up their chunks differently, which causes a difference
in this algorithm. Take a 131.6MB file (137951695 bytes) with a 5MB (5242880 bytes)
chunk size. Resumable produces 26 chunks, while uppy produces 27; this is
controlled by forceChunkSize in resumable, which is false by default. Resumable's
final chunk is 6879695 bytes (chunk size + remainder), whereas uppy's is 1636815
bytes (just the remainder).
This means the current condition of `uploaded_file_size + current_chunk_size >= total_size`
is hit twice by uppy, because it uses the more correct number of chunks. This
can be solved for both uppy and resumable by using the _previous_ chunk
number * chunk_size as the uploaded_file_size.
An example of what happens before the change, using the current
chunk number to calculate uploaded_file_size:
chunk 26: resumable: uploaded_file_size (26 * 5242880) + current_chunk_size (6879695) = 143194575 >= total_size (137951695) ? YES
chunk 26: uppy: uploaded_file_size (26 * 5242880) + current_chunk_size (5242880) = 141557760 >= total_size (137951695) ? YES
chunk 27: uppy: uploaded_file_size (27 * 5242880) + current_chunk_size (1636815) = 143194575 >= total_size (137951695) ? YES
An example of what this looks like after the change, using the previous
chunk number to calculate uploaded_file_size:
chunk 26: resumable: uploaded_file_size (25 * 5242880) + current_chunk_size (6879695) = 137951695 >= total_size (137951695) ? YES
chunk 26: uppy: uploaded_file_size (25 * 5242880) + current_chunk_size (5242880) = 136314880 >= total_size (137951695) ? NO
chunk 27: uppy: uploaded_file_size (26 * 5242880) + current_chunk_size (1636815) = 137951695 >= total_size (137951695) ? YES
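As a minimal sketch (method and parameter names follow the description above, not necessarily the actual controller code), the fixed check could look like:
```
# Sketch of the fixed completion check. chunk_number is 1-based;
# chunk_size is the nominal slice size negotiated with the client.
def upload_complete?(chunk_number:, chunk_size:, current_chunk_size:, total_size:)
  # Count only the chunks *before* this one at the nominal size, then add
  # the size of the chunk that just arrived. This is exact whether the
  # final chunk is oversized (resumable) or remainder-only (uppy).
  uploaded_file_size = (chunk_number - 1) * chunk_size
  uploaded_file_size + current_chunk_size >= total_size
end

# Replaying the uppy example above:
upload_complete?(chunk_number: 26, chunk_size: 5_242_880,
                 current_chunk_size: 5_242_880, total_size: 137_951_695) # => false
upload_complete?(chunk_number: 27, chunk_size: 5_242_880,
                 current_chunk_size: 1_636_815, total_size: 137_951_695) # => true
```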
The IntersectionObserver callback can fire after the component has been destroyed:
```
Assertion Failed: calling set on destroyed object: <@ember/component:ember6019>.docked = false
at assert (ember:37774:17)
at _set2 (ember:17304:46)
at Class.set (ember:29529:29)
at Class._intersectionHandler (discourse/app/components/topic-progress:135:16)
at Backburner._run (ember:56389:25)
at Backburner._join (ember:56365:21)
at Backburner.join (ember:56082:19)
at join (ember:42874:28)
at IntersectionObserver.eval (ember:42978:19)
```
This API allows adding a dropdown at the bottom of a topic; note that it is mobile-only for now.
Also included in the commit:
- various doc fixes
- tests for both the buttons and dropdowns APIs
- uses `throw` instead of `@ember/error` to ensure execution is halted when incorrect parameters are given
Same issue as 28b00dc6fc: Mocha::ExpectationError inherits from
Exception instead of StandardError, so RspecErrorTracker does not show
the actual failed expectation in request specs; the response status is
just 500 with no further detail.
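A sketch of the shape of the fix (the middleware internals here are illustrative, not the actual tracker code): rescue the Mocha error explicitly, since a bare `rescue => e` only catches StandardError descendants.
```
# Sketch only -- internals are illustrative.
class RspecErrorTracker
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue StandardError, Mocha::ExpectationError => e
    # Record the real failure so the request spec can surface the failed
    # expectation instead of an opaque 500 response, then re-raise.
    Rails.logger.error("#{e.class}: #{e.message}")
    raise e
  end
end
```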
It makes much more sense for these to be GlobalSettings, since, in
multisite clusters, only the default site's settings would be respected.
Co-authored-by: David Taylor <david@taylorhq.com>
This commit adds the RailsMultisite middleware in test mode when Rails.configuration.multisite is true. This allows for much more realistic integration testing. The `multisite_spec.rb` file is rewritten to avoid needing to simulate a middleware stack.
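The wiring could look roughly like this (the insertion point and config check are assumptions; `RailsMultisite::Middleware` comes from the rails_multisite gem):
```
# Sketch: mirror production behaviour in test, but only when the suite is
# actually configured for multisite.
if Rails.env.test? && Rails.configuration.multisite
  Rails.configuration.middleware.unshift RailsMultisite::Middleware
end
```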
* Revert "DEV: increase lock timeout for multisite migration (#14831)"
This partially reverts commit 337ef60303.
We need to revert the mutex around `db:status:json` because the mutex is only available once the Rails environment is loaded, and `db:status:json` does not load the environment before entering the mutex. We can't load the environment before entering the mutex, because the mutex is meant to prevent other instances of the task from loading a Rails environment while the database is migrating.
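To illustrate the ordering constraint (the mutex helper below is hypothetical):
```
# The migration task can take the mutex because it loads Rails first:
task "multisite:migrate" => :environment do
  with_migration_mutex do # hypothetical helper; needs the app loaded
    RailsMultisite::ConnectionManagement.each_connection do
      # migrate this site's database while holding the lock
    end
  end
end

# db:status:json must keep answering while the lock is held, so it cannot
# depend on :environment (loading Rails mid-migration is exactly what the
# mutex exists to block) and therefore cannot take the mutex at all:
task "db:status:json" do
  # talk to the databases directly, without booting Rails
end
```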
Co-authored-by: David Taylor <david@taylorhq.com>
Co-authored-by: David Taylor <david@taylorhq.com>
This PR introduces a new `enable_experimental_backup_uploads` site setting (default false and hidden), which when enabled alongside `enable_direct_s3_uploads` will allow for direct S3 multipart uploads of backup .tar.gz files.
To make multipart external uploads work with both the S3BackupStore and the S3Store, I've had to move several methods out of S3Store and into S3Helper, including:
* presigned_url
* create_multipart
* abort_multipart
* complete_multipart
* presign_multipart_part
* list_multipart_parts
Then, S3Store and S3BackupStore either delegate directly to S3Helper or have their own special methods that call S3Helper for these operations. FileStore.temporary_upload_path no longer depends on upload_path, so it can be used interchangeably between the stores. A similar change was made on the frontend, moving the multipart-related JS code out of ComposerUppyUpload and into a mixin of its own, so it can also be used by UppyUploadMixin.
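The delegation half of that change might look roughly like this (using ActiveSupport's `delegate`; the constructor and exact wiring are assumptions):
```
# Sketch: both stores share one S3Helper for the multipart primitives.
class S3Store < FileStore::BaseStore
  delegate :abort_multipart, :presign_multipart_part, :list_multipart_parts,
           to: :s3_helper

  def s3_helper
    @s3_helper ||= S3Helper.new(s3_bucket) # assumed constructor
  end
end
```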
Some changes to ExternalUploadManager had to be made here as well. The backup direct uploads do not need an Upload record made for them in the database, so they can be moved to their final S3 resting place when completing the multipart upload.
This changeset is not perfect; it introduces some special cases in UploadController to handle backups that were previously in BackupController, because UploadController is where the multipart routes live. A subsequent pull request will pull these routes into a module or some other sharing pattern, along with hooks, so the backup controller and the upload controller (and any future controllers that need them) can include these routes in a nicer way.
PERF: Update like count in visible posts without an extra GET per like
Currently, when a user is reading a topic and some post in it receives a
like from another user, the Ember app is notified via MessageBus and
issues a GET to `/posts/{id}` to fetch the new like count. This worked
fine for us until today, but it can easily create a self-inflicted DDoS
when a topic with a large number of visitors gets a large number of
likes, since we end up issuing `visitors * likes` GET requests.
This patch optimizes the flow by sending the new like count down in
the MessageBus notification, removing any need for the extra request.
It shouldn't cause any drift in the count, because we send down the full
count rather than a difference.
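On the server side, the publish could look something like this (channel name and payload shape are assumptions):
```
# Sketch: ship the authoritative count with the notification so clients
# can patch the post in place instead of issuing a follow-up GET.
MessageBus.publish(
  "/topic/#{post.topic_id}",
  {
    type: :liked,
    id: post.id,
    post_number: post.post_number,
    likes_count: post.like_count, # full count, not a delta => no drift
  },
)
```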
Possible follow-ups could include handling like removal.