After adding an empty state banner to the user bookmarks page, we found a bug. Steps to reproduce:
- Go to the user bookmarks page
- Search for something that doesn’t exist in bookmarks
- Click Bookmarked on the sidebar or View All Bookmarks in the user menu again
Previously we would store every FakeRequest object for all tests, resulting in many hundreds/thousands of objects in the `handledRequests` array.
This commit ensures all pretender state is reset between tests.
- There's no need to pass `filter` to `user-notifications-large`. The component doesn't use it.
- Rename CSS class to avoid confusion (this div has nothing to do with the Select Kit)
- Remove duplicated declarations in test fixtures
This is `console.log`'d to the browser console. run-qunit will print this to stdout. testem will not, so a custom reporter is implemented to print this message.
The `--enable-precise-memory-info` flag is added so that Chrome provides high-resolution memory information. This API is not supported by Firefox; the logic will degrade gracefully.
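A minimal sketch of how such a memory report might be produced (the function name and exact output are illustrative, not the actual implementation):
```javascript
// Sketch: log the JS heap size so run-qunit (or the custom testem reporter)
// can surface it. `performance.memory` is a non-standard Chrome API; with
// --enable-precise-memory-info it reports high-resolution values. Firefox
// does not implement it, so we degrade gracefully to a no-op.
function reportMemoryUsage() {
  const memory = window.performance && window.performance.memory;
  if (!memory) {
    console.log("Memory usage information is not available in this browser.");
    return;
  }
  const usedMB = (memory.usedJSHeapSize / 1024 / 1024).toFixed(1);
  console.log(`Used JS heap size after tests: ${usedMB} MB`);
}
```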
Note: this commit also adds support for teardown in pre-initializers, just like we have for initializers.
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Co-authored-by: David Taylor <david@taylorhq.com>
We were using multiple methods to check which environment we're running in. This commit switches us to use the isLegacyEmber helper consistently. This should be a no-op, but it makes the code much easier to read.
Under Ember CLI, we create a new application instance for each test. We were not correctly destroying it after the test, causing many references to be maintained (e.g. at the end of a test run, `Ember.Namespace.NAMESPACES` would have an entry for each application instance).
Calling `destroy` on the application instance tidies up these references, and is one step towards fixing our test memory leak problem. Unfortunately there still seem to be other references being held to the application, so this commit is not a total fix.
The all inboxes view was introduced in
016efeadf6, but we decided to roll it back
for performance reasons. The main performance challenge here is that PG
basically has to loop through all the PMs that a user is allowed to view
before being able to order by `Topic#bumped_at`. The all inboxes view was not
planned as part of the new/unread filter, so we've decided not to tackle
the performance issue for the upcoming release.
Follow-up to 016efeadf6
Because sharing has hover behavior, it looked slightly clunky when fast edit changed its position. Putting sharing in the last position reduces this effect.
When the loading spinner is removed (e.g. via the loading-slider component), the subcategory list view will persist, even when no longer required. This is because we were conditionally rendering the list into the `header-list-container` outlet. When the condition was false, we were doing nothing. Instead, we should use `disconnectOutlet` to make sure the content is removed from the DOM.
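A minimal sketch of the pattern, assuming a classic Ember route (the `shouldShowSubcategoryList` property is illustrative):
```javascript
import Route from "@ember/routing/route";

export default Route.extend({
  renderTemplate() {
    this._super(...arguments);

    if (this.shouldShowSubcategoryList) {
      this.render("subcategory-list", { outlet: "header-list-container" });
    } else {
      // Doing nothing here leaves the previously rendered list in the DOM;
      // disconnectOutlet explicitly removes it.
      this.disconnectOutlet({ outlet: "header-list-container" });
    }
  },
});
```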
Firefox does not return a PerformanceMeasure object when using
performance.mark and performance.measure, even though MDN says it
should: https://developer.mozilla.org/en-US/docs/Web/API/Performance/measure#return_value
So for now, we disable the upload instrumentation unless a quick test
shows that a PerformanceMeasure (or anything, really) is returned.
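A sketch of that check (mark/measure names are illustrative):
```javascript
// Only keep the upload instrumentation enabled if performance.measure
// actually returns something; Chrome returns a PerformanceMeasure, Firefox
// currently returns undefined.
performance.mark("upload-start");
performance.mark("upload-end");
const measure = performance.measure("upload-duration", "upload-start", "upload-end");

const instrumentationSupported = Boolean(measure);
if (instrumentationSupported) {
  console.log(`Upload instrumentation enabled, last measure: ${measure.duration}ms`);
}
```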
When creating a reply after already navigating out of the
topic (e.g. open the reply composer, go to a different topic,
then create the post), the _removeDeleteOnOwnerReplyBookmarks
function was erroring because it relied on the topic model
being present.
We can skip this function altogether if the topic model is _not_
present, because the PostCreator already takes care of deleting
bookmarks with the on_owner_reply auto_delete_preference. The
_removeDeleteOnOwnerReplyBookmarks function just cleans up the
in-memory post stream and topic model.
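The guard ends up being a simple early return, roughly like this (the surrounding controller and property names are simplified for illustration):
```javascript
const composerController = {
  topic: null, // no topic model – the user already navigated away

  _removeDeleteOnOwnerReplyBookmarks() {
    if (!this.topic) {
      // PostCreator already deleted the bookmark server-side; there is no
      // in-memory post stream or topic model left to clean up.
      return;
    }
    // ...existing cleanup of the in-memory post stream and topic model...
  },
};

composerController._removeDeleteOnOwnerReplyBookmarks(); // returns early, no error
```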
This commit allows for measuring the time taken for
individual uploads via the new uppy interfaces, only
if the enable_upload_debug_mode site setting is enabled.
Also in this PR, when an upload error has a specific message locally,
we return the real message to show in the modal instead of the
generic upload.failed message, so the developer does not have to
dig around in logs.
The file size error messages for max_image_size_kb and
max_attachment_size_kb are shown to the user in the KB
format, regardless of how large the limit is. Since we
are going to support uploading much larger files soon,
this KB-based limit soon becomes unfriendly to the end
user.
For example, if the max attachment size is set to 512000
KB, this is what the user sees:
> Sorry, the file you are trying to upload is too big (maximum
size is 512000KB)
This makes the user do math. In almost all file explorers that
a regular user would be familiar with, the file size is shown
using the largest appropriate unit (e.g. KB, MB, GB).
This commit changes the behaviour to output a humanized file size
instead of the raw KB. For the above example, it would now say:
> Sorry, the file you are trying to upload is too big (maximum
size is 512 MB)
This humanization also handles decimals, e.g. 1536KB = 1.5 MB
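For illustration, a helper along these lines could produce that output (this is not the actual Discourse implementation, which goes through its localized formatting helpers; 1000-based units are used here to match the example above):
```javascript
// Illustrative helper: turn a limit expressed in kilobytes into a
// human-friendly string with at most one decimal place.
function humanizeKilobytes(kb) {
  const units = ["KB", "MB", "GB"];
  let size = kb;
  let unitIndex = 0;
  while (size >= 1000 && unitIndex < units.length - 1) {
    size /= 1000;
    unitIndex++;
  }
  const rounded = Math.round(size * 10) / 10; // keep at most one decimal
  return `${rounded} ${units[unitIndex]}`;
}

humanizeKilobytes(512000); // "512 MB"
humanizeKilobytes(1536);   // "1.5 MB"
```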
This commit also hides a number of options which are not used during Discourse development.
Changes have been tested on both the legacy `/qunit` route and the Ember CLI `/tests` route.
This adds support for `qunit_skip_core`, `qunit_skip_plugins` and `qunit_single_plugin` parameters on the Ember CLI `/tests` route using the `addModuleExcludeMatcher` API. Legacy support is maintained for the `/qunit` route.
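A sketch of how the skip parameters can be wired up under Ember CLI; the import path, parameter parsing, and module-name conventions here are assumptions for illustration:
```javascript
import { addModuleExcludeMatcher } from "ember-cli-test-loader/test-support/index";

const params = new URLSearchParams(window.location.search);
const skipCore = params.get("qunit_skip_core") === "1";
const skipPlugins = params.get("qunit_skip_plugins") === "1";
const singlePlugin = params.get("qunit_single_plugin");

addModuleExcludeMatcher((moduleName) => {
  const isPlugin = moduleName.includes("/plugins/");

  if (skipPlugins && isPlugin) {
    return true; // returning true excludes this module from the run
  }
  if (skipCore && !isPlugin) {
    return true;
  }
  if (singlePlugin && isPlugin && !moduleName.includes(`/plugins/${singlePlugin}/`)) {
    return true;
  }
  return false;
});
```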
".search-menu" matches the parent element of the element that was
previously selected. This is a better choice because it offers some
flexibility over the DOM structure without breaking the keyboard
shortcuts.
Instead of going to the OP of the topic for topic-level bookmarks
(which are bookmarks where for_topic is true) when clicking on the
bookmark in the quick access menu or on the user bookmark list,
this commit takes the user to the last unread post in
the topic instead. This should be generally more useful than landing
on the unchanging OP.
To make this work nicely, I needed to add the last_read_post_number to
the BookmarkQuery based on the TopicUser association. It should not add
too much extra weight to the query, because it is limited to the user
that we are fetching bookmarks for.
Also fixed an issue where the bookmark serializer highest_post_number was
not taking into account whether the user was staff, which is when we
should use highest_staff_post_number instead.
Allows creating a bookmark with the `for_topic` flag introduced in d1d2298a4c set to true. This happens when clicking on the Bookmark button in the topic footer when no other posts are bookmarked. In a later PR, when clicking on these topic-level bookmarks the user will be taken to the last unread post in the topic, not the OP. Only the OP can have a topic level bookmark, and users can also make a post-level bookmark on the OP of the topic.
I had to do some pretty heavy refactors because most of the bookmark code in the JS topics controller was centred around instances of Post JS models, but the topic level bookmark is not centred around a post. Some refactors were just for readability as well.
Also removes some missed reminderType code from the purge in 41e19adb0d
We want plugins to be able to skip doing any work under
certain conditions, and to be able to raise their own errors if
a file being uploaded is completely incompatible with the purpose
of the plugin when it is enabled. For example, the UppyChecksum plugin
is happy to skip hashing large files, but the UppyUploadEncrypt
plugin from discourse-encrypt relies on the file being encrypted
to do anything with the upload, so it is considered a blocking
error if the user uploads a file that is too large.
This improves the base functions available in uppy-plugin-base and
extendable-uploader to handle this, as well as introducing a
HUGE_FILE_THRESHOLD_BYTES variable which represents 100MB in bytes,
matching the ExternalUploadManager::DOWNLOAD_LIMIT on the
server side.
A change to discourse-encrypt that takes advantage of this new
functionality will follow in discourse/discourse-encrypt#141.
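A simplified sketch of the two behaviours; the function shapes are illustrative and not the exact uppy-plugin-base API:
```javascript
const HUGE_FILE_THRESHOLD_BYTES = 100 * 1024 * 1024; // 100MB, mirrors ExternalUploadManager::DOWNLOAD_LIMIT

// UppyChecksum-style behaviour: hashing is optional, so a huge file can
// simply be skipped without failing the upload.
function checksumPreprocessor(file) {
  if (file.size > HUGE_FILE_THRESHOLD_BYTES) {
    return { skipped: true }; // too expensive to hash in the browser
  }
  return { skipped: false }; // checksum would be computed here
}

// UppyUploadEncrypt-style behaviour: the plugin cannot do anything useful
// without processing the file, so a too-large file is a blocking error.
function encryptPreprocessor(file) {
  if (file.size > HUGE_FILE_THRESHOLD_BYTES) {
    throw new Error("This file is too large to be encrypted before upload.");
  }
  return { skipped: false };
}
```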
- do not reduce the opacity of disabled buttons if they are loading
- replace ‘|’ with single quotes, not double quotes
- always start from index 0
- reduce the amount of work by checking the row's length
- apply the quote fix to the fallback
- do not add 1 to the caret position if the index is 0
The algorithm will now do the following (see the sketch below):
- split the selection to retain only the first line
- remove a possible "* " prefix
- check for the first inclusion
- fall back to the first row if nothing is found
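A rough sketch of those steps; the helper name and data shapes are illustrative, not the actual implementation:
```javascript
function findSelectedRowIndex(selection, rows) {
  // 1. Keep only the first line of the selection, 2. strip a possible "* ".
  const firstLine = selection.split("\n")[0].replace(/^\* /, "");

  // 3. Check for the first inclusion, always starting from index 0.
  for (let index = 0; index < rows.length; index++) {
    if (rows[index].includes(firstLine)) {
      return index;
    }
  }

  // 4. Fall back to the first row if nothing is found.
  return 0;
}
```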
We don't actually use the reminder_type for bookmarks anywhere;
we are just storing it. It has no bearing on the UI. It used
to be relevant with the at_desktop bookmark reminders (see
fa572d3a7a).
This commit marks the column as readonly, ignores it, and removes
the index; the column will be dropped in a later PR. Some plugins
partially rely on reminder_type, so some stubs have been left in
place to avoid errors.
This commit sets `tap_failed_tests_only` to `true` in our testem config, so now only the failing tests will show in our GitHub CI Ember test runs, which saves developers from having to hunt through all of the passing tests using GitHub's janky console output scrollback.
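The relevant part of the testem config looks roughly like this (other options elided, values illustrative):
```javascript
// testem.js – only failing tests are echoed in the TAP output.
module.exports = {
  test_page: "tests/index.html?hidepassed", // illustrative, not the exact page
  reporter: "tap",                          // tap_failed_tests_only applies to the TAP reporter
  tap_failed_tests_only: true,
  // ...launchers and other existing options...
};
```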
There was a check for closed code blocks (which had both opening and
closing markups), but it did not work for the case when the text ends
in an open code block.
Administrators can use second factor to confirm granting admin access
without using email. The old method of confirmation via email is still
used as a fallback when second factor is unavailable.
The previous excerpt was a simple truncated raw message. Starting with
this commit, the raw content of the draft is cooked and an excerpt is
extracted from it. The logic for extracting the excerpt mimics the
`ExcerptParser` class, but does not implement all functionality, being
a much simpler implementation.
The two draft controllers have been merged into one and the /draft.json
route has been changed to /drafts.json to be consistent with the other
route names.
This is my second try at this. The first b246a63a59 raised an issue
with the event delegation not working because the topic id changed.
This adds support for delegating events to dynamic keys by passing a
function where a static key would normally be needed. This means that
each timeline will have its own unique state key and events will only
delegate to the proper topic.
Translations are often multi-line. Using a regular `<input>` doesn't allow newlines, so if you try to edit a multiline theme translation, all the line breaks will be removed.
This commit updates the theme translations UI to use `<textarea>`, just like the core translation editing UI.
allowUpload can be false for the composer if there are no
allowed file extensions. This causes the _bindMobileUploadButton
code to fail because the button does not get rendered in the
template if !allowUpload. This commit changes composer-editor
to only bind upload functionality if allowUpload.
We've observed an error where the back button is displayed improperly in
the topic timeline. It's unfortunately been hard to reproduce but we
suspect it's related to leftover state when re-rendering.
This fix optimistically tries to fix the error by introducing the
topic's id to the unique key the widgets use for state. We can deploy
this and keep an eye out for the bug in the future.
This fixes an error when trying to upload a profile
background image for the user card when the
enable_direct_s3_uploads setting was true:
> Failed to execute 'send' on 'XMLHttpRequest': The object's state must be OPENED.
This was fixed in the upstream commit by the uppy devs:
5937bf2127
When a user archives a personal message, they are redirected back to the
inbox and will refresh the list of the topics for the given filter.
Publishing an event to the user results in an incorrect incoming message
because the list of topics has already been refreshed.
This does mean that if a user has two tabs opened, the non-active tab
will not receive the incoming message but at this point we do not think
the technical trade-offs are worth it to support this feature. We
basically have to somehow exclude a client from an incoming message
which is not easy to do.
Follow-up to fc1fd1b416
This abstracts interaction with uppy for uppy plugin classes
into base classes for Preprocessor plugins, so anyone
making these uppy plugins doesn't have to think as much about uppy
under the hood. This also makes the logging and validation
nicer, and provides a more consistent way to emit progress and
completion events.
In a future commit, we will introduce another base class,
`UploadUploaderPlugin`, which will be used to hijack the upload
process and send the upload to a different provider (e.g. for discourse-video).
Short URLs were resolved before diffHTML was loaded and content was
swapped by it, which meant that no URLs were found and the URLs remained
unresolved. This caused image elements to be blank.
* DEV: Updated diffHTML to 1.0.0-beta.20
Watched words of type 'replace' or 'link' replaced the text inside
mentions or hashtags too, which broke these. These types of watched
words must skip any match that has an @ or # before it.
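An illustrative check (not the production regex): skip a watched-word match when the character right before it is "@" or "#", so mentions and hashtags keep working.
```javascript
function shouldReplaceMatch(text, matchIndex) {
  const precedingChar = text.charAt(matchIndex - 1);
  return precedingChar !== "@" && precedingChar !== "#";
}

shouldReplaceMatch("hello @discourse", 7); // false – part of a mention
shouldReplaceMatch("hello discourse", 6);  // true – plain word, replace it
```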
At this point in time, we do not think supporting unread and new when an
admin is looking at another user's messages is worth supporting.
Follow-up to fc1fd1b416
There are a few fixes at play here:
1) We were still not initializing objects to the correct types.
2) If a debounce timed out, it was returning a string instead of an
array which was not appropriately handled.
3) In testing mode we never cancel the search promise for stability.
In order to include the new/unread count in the browse more message
under suggested topics, a couple of technical changes have to be made.
1. `PrivateMessageTopicTrackingState` is now auto-injected, similar to
how it is done for `TopicTrackingState`. This is done so we don't have
to pass the `PrivateMessageTopicTrackingState` object multiple levels
down into the suggested-topics component. While the object is
auto-injected, we only fetch the initial state and start tracking once
the relevant private message routes have been hit and a private
message's suggested topics are loaded. This is done because we do not
want to add the extra overhead of fetching the initial state to all
page loads, but instead wait until the private message routes are hit.
2. Previously, we would stop tracking once the `user-private-messages`
route was deactivated. However, that is not ideal, since navigating out
of the route and back means we send an API call to the server each
time. Since `PrivateMessageTopicTrackingState` is kept in sync cheaply
via MessageBus, we can just continue to track the state even after the
user has navigated away from the relevant routes.
The smoke test has been failing with the error:
```
TypeError: Cannot read properties of undefined (reading 'Core')
```
Since de20c46077
and 9873a942e3 this error has been occurring,
possibly now because Uppy is required by a plugin. Adding uppy.js into
the require list for theme_qunit_vendor.js fixes the issue.
This new interface will be used explicitly to add upload
preprocessors in the form of uppy plugins. These will be
run for each upload in the composer (dependent on the logic
of the plugin itself), before the UppyChecksum plugin is
finally run.
Since discourse-encrypt uses the existing addComposerUploadHandler
API for essentially preprocessing an upload and not uploading it
to a different place, it will be the first plugin to use this interface,
along with the register-media-optimization-upload-processor initializer
in core.
Related https://github.com/discourse/discourse-encrypt/pull/131.
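A sketch of what registering such a preprocessor from an initializer could look like; the `addComposerUploadPreProcessor` method name, the resolver signature, and the plugin import path are assumptions here:
```javascript
import { withPluginApi } from "discourse/lib/plugin-api";
import MyUploadPreProcessor from "../lib/my-upload-pre-processor"; // hypothetical uppy plugin class

export default {
  name: "register-my-upload-pre-processor",

  initialize() {
    withPluginApi("0.8.24", (api) => {
      // Registers an uppy preprocessor that runs for each composer upload,
      // before the UppyChecksum plugin finally runs.
      api.addComposerUploadPreProcessor(MyUploadPreProcessor, () => ({
        enabled: true, // hypothetical options passed to the plugin
      }));
    });
  },
};
```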
Improves the create account modal for screen readers by doing the following:
* Making the `modal-alert` section into an `aria-role="alert"` region and making it show and hide using height instead of display:none so screen readers pick it up. Made a change so the field-related error messages are always shown beneath the field.
* Add `aria-invalid` and `aria-describedby` attributes to each field in the modal, so the screen reader will read out the error hint on error. This necessitated an Ember component extension to allow both the `aria-*` attributes to be bound and to render on `{{input}}`.
* Moved the social login buttons to the right in the HTML structure so they are not read out first.
* Added `aria-label` attributes to the login buttons so they can have different content for screen readers.
* In some cases for modals, the title that should be used for the `aria-labelledby` attribute is within the modal content and not the discourse-modal-title title. This introduces a new titleAriaElementId property to the d-modal component that is then used by the create-account modal to read out the title
------
This is the same as e0d2de73d8 but
fixes the Ember-input-component-extension to use the public
Ember components TextField and TextArea instead of the private
TextSupport so the extension works in both normal Ember and
Ember CLI.
This adds a new property, `pluginId`, which you can pass to `modifyClass`
to prevent the class from being modified over and over again.
This also includes a fix for polls which was leaking state between tests
which this new functionality exposed.
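Typical usage looks like this (the initializer name, target class, and overridden method are illustrative):
```javascript
import { withPluginApi } from "discourse/lib/plugin-api";

export default {
  name: "my-plugin-topic-tweaks",

  initialize() {
    withPluginApi("0.8.31", (api) => {
      api.modifyClass("controller:topic", {
        // pluginId lets modifyClass skip the reopen if this plugin has
        // already modified the class, e.g. across test runs.
        pluginId: "my-plugin",

        someOverriddenAction() {
          // ...plugin behaviour...
        },
      });
    });
  },
};
```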
When using ComposerUpload and/or ComposerUploadUppy, we were
always calling bindMobileUploadButton. However with more composer-like
interfaces being developed, we need this to be optional, as not
everywhere will have a separate mobile upload button to bind to.
Also makes it so the composer extending the ComposerUpload mixins is
responsible for explicitly unbinding the mobile upload button if
it needs to.
* DEV: Use named parameters for dir-span helper
Follow up to: e50a5c0c73
In order to improve code clarity this change introduces named parameters
for the dir-span helper. This is specifically for the new `htmlSafe`
parameter which you can use instead of just passing in a boolean if the
strings you are passing in have already been escaped.
Before: `{{dir-span category.description false}}`
After: `{{dir-span category.description htmlSafe=true}}`
* Set default value for params arg
PresenceChannel aims to be a generic system for allowing the server, and end-users, to track the number and identity of users performing a specific task on the site. For example, it might be used to track who is currently 'replying' to a specific topic, editing a specific wiki post, etc.
A few key pieces of information about the system:
- PresenceChannels are identified by a name of the format `/prefix/blah`, where `prefix` has been configured by some core/plugin implementation, and `blah` can be any string the implementation wants to use.
- Presence is boolean: each user is either present, or not present. If a user has multiple clients 'present' in a channel, they will be deduplicated so that the user is only counted once.
- Developers can configure the existence and configuration of channels 'just in time' using a callback. The result of this is cached for 2 minutes.
- Configuration of a channel can specify permissions in a similar way to MessageBus (public boolean, a list of allowed_user_ids, and a list of allowed_group_ids). A channel can also be placed in 'count_only' mode, where the identity of present users is not revealed to end-users.
- The backend implementation uses redis lua scripts, and is designed to scale well. In the future, hard limits may be introduced on the maximum number of users that can be present in a channel.
- Clients can enter/leave at will. If a client has not marked itself 'present' in the last 60 seconds, they will automatically 'leave' the channel. The JS implementation takes care of this regular check-in.
- On the client-side, PresenceChannel instances can be fetched from the `presence` ember service. Each PresenceChannel can be entered/left/subscribed/unsubscribed, and the service will automatically deduplicate information before interacting with the server.
- When a client joins a PresenceChannel, the JS implementation will automatically make a GET request for the current channel state. To avoid this, the channel state can be serialized into one of your existing endpoints, and then passed to the `subscribe` method on the channel.
- The PresenceChannel JS object is an ember object. The `users` and `count` properties can be used directly in ember templates, and in computed properties.
- It is important to make sure that you `unsubscribe()` and `leave()` any PresenceChannel objects after use
An example implementation may look something like this. On the server:
```ruby
register_presence_channel_prefix("site") do |channel|
  next nil unless channel == "/site/online"
  PresenceChannel::Config.new(public: true)
end
```
And on the client, a component could be implemented like this:
```javascript
import Component from "@ember/component";
import { inject as service } from "@ember/service";
export default Component.extend({
  presence: service(),

  init() {
    this._super(...arguments);
    this.set("presenceChannel", this.presence.getChannel("/site/online"));
  },

  didInsertElement() {
    this.presenceChannel.enter();
    this.presenceChannel.subscribe();
  },

  willDestroyElement() {
    this.presenceChannel.leave();
    this.presenceChannel.unsubscribe();
  },
});
```
With this template:
```handlebars
Online: {{presenceChannel.count}}
<ul>
  {{#each presenceChannel.users as |user|}}
    <li>{{avatar user imageSize="tiny"}} {{user.username}}</li>
  {{/each}}
</ul>
```
Uppy V2 includes the S3 multipart batch presigning change
we contributed in d613b849a6
so we need to upgrade it. This also brings both package.json
files into line and accounts for the renaming of Plugin
to BasePlugin in Uppy.
This has been tested and is working locally for both
regular Ember and Ember CLI, for uploads.json
XHR uploads and for direct S3 uploads (single and multipart).
This mixin needs to be shared between the composer and composer-like
user interfaces. This commit makes the events and the underlying
data model configurable by the component extending the ComposerUploadUppy
mixin.
Also removes two MessageBus unsubscribe calls which were unnecessary.
The generate_presigned_put endpoint for direct external uploads
(such as the one for the uppy-image-uploader) records allowed
S3 metadata values on the uploaded object. We use this to store
the sha1-checksum generated by the UppyChecksum plugin, for later
comparison in ExternalUploadManager.
However, we were not doing this for the create_multipart endpoint,
so the checksum was never captured and compared correctly.
Also includes a fix to make sure UppyChecksum is the last preprocessor to run.
It is important that the UppyChecksum preprocessor is the last one to
be added; the preprocessors are run in order and since other preprocessors
may modify the file (e.g. the UppyMediaOptimization one), we need to
checksum once we are sure the file data has "settled".
The user-topic-list template is also used in other places where we want to improve blank page syndrome, so this PR is a preparation for those changes as well.
- uses tagName=""
- removes user property which is not being used
- extracts utility functions
- better wording for boolean properties
- initializes all properties
- uses @action
- uses optional chaining
- other minor changes
This rolls uppy back to the previous bundle that was used,
which will break multipart functionality (which is not yet
enabled anywhere).
No other upload functionality should be affected by this change,
it will be as if d295a16dab had
not been merged.
Whenever we `subscribe` to something there should be an equivalent
`unsubscribe` and this implements it for `LogsNotice`.
In the future we should make this closer to what Ember expects a Service
to be, but at least it's properly cleaning up after itself now.
See the previous commit d66b258b0e as
well.
If enable_upload_debug_mode is true, we do not want to abort the
direct S3 upload, because that will delete the file on S3 and prevent
further inspection of any errors that have come up.
There are certain design decisions that were made in this commit.
Private messages implement their own version of topic tracking state because there are significant differences between regular and private_message topics. Regular topics have to track categories and tags while private messages do not. It is much easier to design the new topic tracking state if we maintain two different classes, instead of trying to mash these two worlds together.
One MessageBus channel per user and one MessageBus channel per group. This allows each user and each group to have their own channel backlog instead of having one global channel which requires the client to filter away unrelated messages.
This pull request introduces the endpoints required, and the JavaScript functionality in the `ComposerUppyUpload` mixin, for direct S3 multipart uploads. There are four new endpoints in the uploads controller:
* `create-multipart.json` - Creates the multipart upload in S3 along with an `ExternalUploadStub` record, storing information about the file in the same way as `generate-presigned-put.json` does for regular direct S3 uploads
* `batch-presign-multipart-parts.json` - Takes a list of part numbers and the unique identifier for an `ExternalUploadStub` record, and generates the presigned URLs for those parts if the multipart upload still exists and if the user has permission to access that upload
* `complete-multipart.json` - Completes the multipart upload in S3. Needs the full list of part numbers and their associated ETags which are returned when the part is uploaded to the presigned URL above. Only works if the user has permission to access the associated `ExternalUploadStub` record and the multipart upload still exists.
After we confirm the upload is complete in S3, we go through the regular `UploadCreator` flow, the same as `complete-external-upload.json`, and promote the temporary upload S3 into a full `Upload` record, moving it to its final destination.
* `abort-multipart.json` - Aborts the multipart upload on S3 and destroys the `ExternalUploadStub` record if the user has permission to access that upload.
Also added are a few new columns to `ExternalUploadStub`:
* multipart - Whether or not this is a multipart upload
* external_upload_identifier - The "upload ID" for an S3 multipart upload
* filesize - The size of the file when the `create-multipart.json` or `generate-presigned-put.json` is called. This is used for validation.
When the user completes a direct S3 upload, either regular or multipart, we take the `filesize` that was captured when the `ExternalUploadStub` was first created and compare it with the final `Content-Length` size of the file where it is stored in S3. Then, if the two do not match, we throw an error, delete the file on S3, and ban the user from uploading files for N (default 5) minutes. This would only happen if the user uploads a different file than what they first specified, or in the case of multipart uploads uploaded larger chunks than needed. This is done to prevent abuse of S3 storage by bad actors.
Also included in this PR is an update to vendor/uppy.js. This has been built locally from the latest uppy source at d613b849a6. This must be done so that I can get my multipart upload changes into Discourse. When the Uppy team cuts a proper release, we can bump the package.json versions instead.
Steps to reproduce:
1. Go to activity/bookmarks
2. Search for something that isn’t in your bookmarks, so you get no results
3. Navigate away, then click "Bookmarked" on the sidebar, or open the user menu and click the View All Bookmarks button at the bottom of the bookmarks tab; you get the message "You haven't bookmarked anything yet".
This commit fixes the problem. We have a controller with a query parameter `q` that contains the search query, and a property `searchTerm` that is bound to the search box on the page and mirrors the value in `q`. We were using the value from `searchTerm` when querying the server, but Ember controllers are singletons, so the `searchTerm` value persisted between page visits and led to this bug.
To make things work properly, we should be using the value from `q` everywhere, except in the two places where we copy the value from `q` to `searchTerm` and vice versa.
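A simplified sketch of the intended relationship (the `onShow` hook and the surrounding controller are illustrative):
```javascript
import Controller from "@ember/controller";

export default Controller.extend({
  queryParams: ["q"],
  q: null,

  // Bound to the search box; a mirror of `q` used only for display.
  searchTerm: null,

  // Copy q -> searchTerm when the route is entered (hypothetical hook)...
  onShow() {
    this.set("searchTerm", this.q);
  },

  actions: {
    // ...and searchTerm -> q when the user actually searches. Everywhere
    // else (including the request to the server) `q` is the source of
    // truth, so a stale searchTerm from a previous visit cannot leak in.
    search() {
      this.set("q", this.searchTerm);
    },
  },
});
```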
Major changes included:
- better support for screen readers
- trapping focus in modals
- better tabbing order in composer
- alerts on no content found/number of items found
- better autofocus in modals
- mini-tag-chooser is now a multi-select component
- each multi-select-component will now display selection on one row
The plugin API now allows adding small action codes dedicated to groups.
This will be used by the assign plugin when a topic is assigned to or unassigned from a group.
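A sketch of how a plugin might register such a code; the `addGroupPostSmallActionCode` method name is an assumption here, not a verified signature:
```javascript
import { withPluginApi } from "discourse/lib/plugin-api";

export default {
  name: "register-group-assign-action-codes",

  initialize() {
    withPluginApi("0.8.31", (api) => {
      // Hypothetical registration of a group-scoped small action code.
      api.addGroupPostSmallActionCode("assigned_group");
    });
  },
};
```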
When resetting the preprocessor status states, we weren't using
the same default state as when the preprocessor status state is
first initialized with an associated plugin. This commit brings
the two into alignment, fixing a bug where if you cancelled an
upload then tried a new one the "Processing Upload" message would
never change to "Uploading... X", so any subsequent uploads were
uncancellable.
Since the state was not being reset correctly, the properties that
were supposed to be numbers ended up as `undefined`, so when calling
prop-- or prop++, they turned into NaN.
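The fix boils down to deriving both the initial state and the reset state from one shared default, roughly like this (property names are illustrative):
```javascript
// Shared factory so initialization and reset always produce the same shape.
function defaultPreProcessorStatus() {
  return {
    needProcessing: 0,
    activeProcessing: 0,
    completeProcessing: 0,
    allComplete: false,
  };
}

// Used when a preprocessor plugin is first registered...
let status = defaultPreProcessorStatus();

// ...and again when an upload is cancelled. Resetting to `{}` instead left
// the counters `undefined`, so incrementing them produced NaN and the
// "Processing Upload" message never changed to "Uploading... X".
function resetPreProcessorStatus() {
  status = defaultPreProcessorStatus();
}
```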
This change only applies when uppy is calling the media-optimization-worker.
Since the old way of calling the worker via the jQuery file uploader will
be removed soon, there is no point in coming up with some random string
to use in place of the file name for the promise resolvers there; we
can live with this for now.
When we encountered an error with the media-optimization-worker,
we stopped the worker, which meant further messages were not
received when optimizing images in parallel. Stopping the worker
on error is now controlled by an option instead.
Also added more debugging lines to help track down issues.