Adds uppy upload functionality behind an enable_experimental_composer_uploader site setting (default false, and hidden).

When enabled, this site setting makes the composer-editor-uppy component be used within composer.hbs, which in turn points to a ComposerUploadUppy mixin that overrides the relevant functions from ComposerUpload (a sketch of this override pattern follows the list below). This uppy uploader has parity with all the features of the jQuery file uploader in the original composer-editor, including:
- progress tracking
- error handling
- number of files validation
- pasting files
- dragging and dropping files
- updating upload placeholders
- upload markdown resolvers
- processing actions (the only one we have so far is the media optimization worker by falco; this works)
- cancelling uploads
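A minimal sketch of the override pattern described above, assuming illustrative hook names (_bindUploadTarget, _cancelUpload) rather than the exact functions ComposerUpload defines:

```javascript
import Mixin from "@ember/object/mixin";
import Uppy from "@uppy/core";
import XHRUpload from "@uppy/xhr-upload";

export default Mixin.create({
  // Replaces the jQuery-file-upload binding from ComposerUpload with
  // an uppy instance posting to the same endpoint.
  _bindUploadTarget() {
    this._uppyInstance = new Uppy({
      id: "composer-uppy",
      autoProceed: true,
      // mirrors the "number of files" validation listed above;
      // the setting name here is an assumption
      restrictions: { maxNumberOfFiles: this.siteSettings.simultaneous_uploads },
    });

    this._uppyInstance.use(XHRUpload, { endpoint: "/uploads.json" });

    // progress tracking parity with the old uploader
    this._uppyInstance.on("upload-progress", (file, progress) => {
      this.set(
        "uploadProgress",
        Math.round((progress.bytesUploaded / progress.bytesTotal) * 100)
      );
    });
  },

  _cancelUpload() {
    this._uppyInstance?.cancelAll();
  },
});
```

Note the XHRUpload plugin still posts to /uploads.json, in line with the note below about direct S3 support coming later.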
For now all uploads still go via the /uploads.json endpoint; direct S3 support will be added later.
Also included in this PR are some changes to the media optimization service, to support uppy's different file data structures and to make the promise tracking and resolving more robust. Currently it uses the file name to track promises; we can switch to something more unique later if needed.
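As a rough illustration of the promise tracking mentioned above (the class and method names are assumptions, not the service's actual API):

```javascript
// Illustrative promise tracker keyed by file name, as described above.
class MediaOptimizationPromiseTracker {
  constructor() {
    this._pending = new Map(); // file name -> { promise, resolve }
  }

  track(fileName) {
    let resolver;
    const promise = new Promise((resolve) => (resolver = resolve));
    this._pending.set(fileName, { promise, resolve: resolver });
    return promise;
  }

  resolve(fileName, optimizedFile) {
    // File names are not guaranteed unique; a more unique key could
    // be swapped in later, as noted above.
    this._pending.get(fileName)?.resolve(optimizedFile);
    this._pending.delete(fileName);
  }
}
```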
Does not include custom upload handlers; those will come in a later PR, as they are a tricky problem to handle. Also, this new functionality will not be used in encrypted PMs, because encrypted PM uploads rely on custom upload handlers.
On iOS 15 beta, if you select the camera app when uploading an image and try to upload a freshly taken picture, from the second picture onwards the resize WASM operation will return an array filled with zeroes.

Since every 4th byte is alpha, and at this step we are only dealing with non-transparent images, this gives an O(1) way to detect that the bug was hit (on normal images, every 4th byte is 255 at this point).
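A sketch of that O(1) check, under the assumption that the resized RGBA buffer is available as a typed array (the function name is illustrative):

```javascript
// Detects the iOS 15 beta resize bug described above: on an opaque
// image the first alpha byte must be 255, so a 0 means the WASM
// resize returned the buggy all-zero buffer.
function hitIOS15ResizeBug(rgbaBytes) {
  return rgbaBytes.length >= 4 && rgbaBytes[3] === 0;
}
```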
Also adds a "catch-all" for when the resulting image ends up too small, to accommodate other bugs of the same type. By default we only trigger this whole operation on images over 1MB, so if the end result is under 20KB something weird happened. Throwing here lets the upload continue using the original file, so nothing is lost and the user can continue.
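A hedged sketch of that catch-all; the constants mirror the thresholds in the prose (over 1MB in, under 20KB out) and the function name and error message are assumptions:

```javascript
const MIN_INPUT_BYTES = 1024 * 1024; // only optimize images over 1MB
const SUSPICIOUS_OUTPUT_BYTES = 20 * 1024; // a <20KB result is suspect

function assertPlausibleResult(resizedBytes) {
  if (resizedBytes.byteLength < SUSPICIOUS_OUTPUT_BYTES) {
    // Throwing aborts optimization; the caller falls back to
    // uploading the original file, so nothing is lost.
    throw new Error("Resized image is implausibly small, keeping original");
  }
}
```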
There are some hard limits in browser Canvas implementations that will throw a runtime exception when crossed. Since those limits are platform dependent, the best we can do is catch the exception and back off from trying to optimize a problematic file.

For example, a 60MB PNG can be processed fine by Chrome, but Firefox will fail trying to extract the ImageData from the CanvasRenderingContext2D with NS_ERROR_FAILURE.
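A minimal sketch of catching that failure and backing off (the function name and the null-return convention are illustrative):

```javascript
function tryExtractImageData(canvas) {
  const ctx = canvas.getContext("2d");
  try {
    return ctx.getImageData(0, 0, canvas.width, canvas.height);
  } catch (e) {
    // e.g. Firefox throws NS_ERROR_FAILURE on very large canvases;
    // returning null tells the caller to skip optimizing this file.
    return null;
  }
}
```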
Also cleans up the media-optimization-utils and adds post-resize size logs.
To prevent opaque cache files, all CDN files will now be requested in 'cors' mode if the cdn_cors_enabled global setting is enabled. Before enabling the setting, CORS should be enabled on the CDN server by adding the response header `access-control-allow-origin: *` or `access-control-allow-origin: https://discourse.example.com`.
External file requests other than to the CDN will not be cached if the response type is opaque.
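A minimal sketch of the caching rule described above; cdnCorsEnabled, CDN_BASE_URL, and the cache name are illustrative stand-ins for the real setting and configuration:

```javascript
const CDN_BASE_URL = "https://cdn.example.com"; // illustrative

async function fetchAndMaybeCache(url, cdnCorsEnabled) {
  const isCdnUrl = url.startsWith(CDN_BASE_URL);
  // Request CDN assets in 'cors' mode when the setting is enabled,
  // so the responses are not opaque and can be cached safely.
  const mode = isCdnUrl && cdnCorsEnabled ? "cors" : "no-cors";
  const response = await fetch(url, { mode });

  // Opaque responses cannot be inspected, so they are never cached.
  if (response.type !== "opaque") {
    const cache = await caches.open("discourse-assets"); // illustrative name
    await cache.put(url, response.clone());
  }
  return response;
}
```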
- frowning was using slightly_frowning
- slightly_frowning was using frowning
- grinning_face_with_smiling_eyes was not defined
- frowning_face_with_open_mouth was not defined
This reverts commit e3de45359f.
We need to improve our strategy by adding a cache breaker along with this change; some assets on CDNs and clients may have incorrect CORS headers, which can cause things to break.
The dark-mode-friendly SVG mask for the wizard's background image
introduced in 8fcfb9586c does not work with
CDNs, because CORS restrictions apply to SVG masks.
It would be complicated to modify CDN access origin rules for this one specific asset, so instead, this PR moves the contents of the SVG file inside the stylesheet.