Previously, `loadLibs` was called inside the `optimize` function of
the media-optimization-worker, which meant it could be hit
multiple times, causing load errors (as seen in b69c2f7311).
This commit moves that call to a dedicated message handler (the `install` message)
for the service worker, and refactors the media-optimization-worker service
to wait for this installation to complete before continuing with processing
image optimizations.
This way, we know for sure, based on promises and worker messages,
that the worker is installed and has all required libraries
loaded before we continue with attempting any processing. The
change made in b69c2f7311 is no
longer needed with this commit.
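A minimal sketch of that handshake, with assumed message names and structure (not the actual service code):

```javascript
// The service posts "install" once, keeps the resulting promise,
// and every optimize call awaits it before sending work.
let installPromise = null;

function ensureInstalled(worker, settings) {
  installPromise ||= new Promise((resolve) => {
    worker.addEventListener("message", function handler(e) {
      if (e.data.type === "installed") {
        worker.removeEventListener("message", handler);
        resolve();
      }
    });
    worker.postMessage({ type: "install", settings });
  });
  return installPromise;
}

async function optimize(worker, settings, file) {
  await ensureInstalled(worker, settings); // libraries load exactly once
  worker.postMessage({ type: "compress", file });
}
```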
In the media-optimization-worker service worker, the `loadLibs` function calls:
importScripts(settings.mozjpeg_script);
importScripts(settings.resize_script);
In Firefox this raises an error: it balks at `wasm_bindgen`, a global
variable defined with `let`, being redefined when the module loads again.
This causes image processing to fail in Firefox when more than one
image is uploaded at a time.
The solution is to check whether the scripts have already been
imported and, if so, not import them again.
Chrome doesn't seem to care about this variable redefinition
and does not error, and it appears to be expected behaviour that
the script can be loaded multiple times (see https://github.com/w3c/ServiceWorker/issues/1041).
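A sketch of that guard inside the worker (the flag name is illustrative):

```javascript
// Re-running importScripts would make Firefox throw on the
// redeclaration of `let wasm_bindgen`, so import each script at most once.
let libsLoaded = false;

function loadLibs(settings) {
  if (libsLoaded) {
    return;
  }
  importScripts(settings.mozjpeg_script);
  importScripts(settings.resize_script);
  libsLoaded = true;
}
```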
Short URLs were resolved before diffHTML was loaded and content was
swapped by it, which meant that no URLs were found and they remained
unresolved. This caused image elements to be blank.
* DEV: Updated diffHTML to 1.0.0-beta.20
This change only applies when uppy is calling the media-optimization-worker.
Since the old way of calling the worker via jQuery file uploader will
be removed soon, there is no point coming up with some random string
to use in place of the file name for the promise resolvers there, so we
can live with this for now.
Adds uppy upload functionality behind an
`enable_experimental_composer_uploader` site setting (default false,
and hidden).
When enabled, this site setting makes the `composer-editor-uppy`
component be used within composer.hbs, which in turn points to
a `ComposerUploadUppy` mixin that overrides the relevant
functions from `ComposerUpload`. This uppy uploader has parity
with all the features of the jQuery file uploader in the original
composer-editor, including:
- progress tracking
- error handling
- number of files validation
- pasting files
- dragging and dropping files
- updating upload placeholders
- upload markdown resolvers
- processing actions (the only one we have so far is the media optimization worker by falco; this works)
- cancelling uploads
For now all uploads still go via the /uploads.json endpoint; direct
S3 support will be added later.
Also included in this PR are some changes to the media optimization
service, to support uppy's different file data structures and
to make the promise tracking and resolving more robust. Currently
it uses the file name to track promises; we can switch to something
more unique later if needed.
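A sketch of that file-name-keyed tracking (names and message shapes are assumptions, not the actual service API):

```javascript
// Each optimize call stores its resolver under the file name; the
// worker's reply message resolves it. Keying by name is why
// uniqueness matters.
const resolvers = new Map();
const worker = new Worker("/media-optimization-worker.js");

worker.onmessage = (e) => {
  if (e.data.type === "file") {
    const resolve = resolvers.get(e.data.fileName);
    if (resolve) {
      resolve(e.data.file);
      resolvers.delete(e.data.fileName);
    }
  }
};

function optimize(file) {
  return new Promise((resolve) => {
    resolvers.set(file.name, resolve);
    worker.postMessage({ type: "compress", fileName: file.name, file });
  });
}
```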
Does not include custom upload handlers; that will come
in a later PR, as it is a tricky problem to handle.
Also, this new functionality will not be used in encrypted PMs because
encrypted PM uploads rely on custom upload handlers.
On iOS 15 beta, if you select the camera app when uploading an image
and try to upload a freshly taken picture, from the second picture
onwards the resize WASM operation will return an array filled with
zeroes.
Since every 4th byte is alpha, and at this step we are only dealing with
non-transparent images, this gives an O(1) way to detect that the bug
was hit. (On normal images, all 4th bytes are 255 at this point.)
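A sketch of the detection (the function name is an assumption; `data` is the RGBA byte array the resize returned):

```javascript
// Only non-transparent images reach this step, so the first pixel's
// alpha byte must be 255; a 0 means the whole buffer came back zeroed.
function assertResizeSucceeded(data) {
  if (data[3] === 0) {
    // throwing here aborts optimization; the original file is uploaded
    throw "Image resize failed: buffer is zeroed (iOS 15 beta bug)";
  }
  return data;
}
```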
Also adds a "catch-all" for when the resulting image comes out too small,
to try to accommodate other bugs of the same type. By default we only trigger
this whole operation on images over 1MB, so if the end result is <20KB
something weird did happen. Throwing here will let the upload continue
using the original file, so nothing is lost and the user can continue.
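A sketch of that catch-all, with the thresholds described above (the function name is illustrative):

```javascript
// Optimization only runs on files over 1MB, so a sub-20KB result
// indicates a resize bug rather than a legitimate output.
function assertPlausibleSize(resultByteLength) {
  if (resultByteLength < 20 * 1024) {
    // throwing lets the upload continue with the original file
    throw "Resized image is suspiciously small, keeping the original";
  }
}
```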
There are some hard limits in browser Canvas implementations that will
throw a runtime exception when crossed. Since those limits are platform
dependent, the best we can do is catch the exception and back off from
trying to optimize a problematic file.
For example, a 60MB PNG can be processed fine by Chrome, but Firefox will
fail trying to extract the ImageData from the CanvasRenderingContext2D
with NS_ERROR_FAILURE.
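A sketch of that back-off (names are illustrative):

```javascript
// getImageData can throw a platform-dependent runtime exception
// (NS_ERROR_FAILURE in Firefox) when canvas limits are crossed, so we
// catch it and skip optimizing that file instead of failing the upload.
function extractImageData(canvas, width, height) {
  try {
    return canvas.getContext("2d").getImageData(0, 0, width, height);
  } catch (e) {
    console.warn(`Skipping image optimization, canvas limit hit: ${e}`);
    return null; // caller uploads the original file
  }
}
```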
Also cleans up the media-optimization-utils and adds post-resize size logs.
To prevent opaque cached files, all CDN files will now be requested in 'cors' mode when the `cdn_cors_enabled` global setting is enabled. Before enabling the setting, CORS should be enabled on the CDN server by adding the response header `access-control-allow-origin: *` or `access-control-allow-origin: https://discourse.example.com`.
External file requests other than CDN ones will not be cached if the response type is opaque.
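A sketch of that caching rule, assuming a service worker fetch handler (`cdnOrigin` and `corsEnabled` stand in for the real global settings):

```javascript
// CDN requests are re-issued in cors mode so their responses are not
// opaque; opaque responses are never written to the cache.
async function fetchAndCache(request, cache, cdnOrigin, corsEnabled) {
  let outgoing = request;
  if (corsEnabled && new URL(request.url).origin === cdnOrigin) {
    outgoing = new Request(request.url, { mode: "cors" });
  }
  const response = await fetch(outgoing);
  if (response.type !== "opaque") {
    await cache.put(request, response.clone());
  }
  return response;
}
```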
- `frowning` was using `slighty_frowning`
- `slightly_frowning` was using `frowning`
- `grinning_face_with_smiling_eyes` was not defined
- `frowning_face_with_open_mouth` was not defined
This reverts commit e3de45359f.
We need to improve our strategy by adding a cache breaker with this change ... some assets on CDNs and clients may have incorrect CORS headers, which can cause things to break.