Previously, accessing the Rails app directly in development mode would give you assets from our 'legacy' Ember asset pipeline. The only way to run with Ember CLI assets was to run ember-cli as a proxy. This was quite limiting when working on things which are bypassed when using the ember-cli proxy (e.g. changes to `application.html.erb`). Also, since `ember-auto-import` introduced chunking, visiting `/theme-qunit` under Ember CLI was failing to include all necessary chunks.
This commit teaches Sprockets about our Ember CLI assets so that they can be used in development mode, and are automatically collected up under `/public/assets` during `assets:precompile`. As a bonus, this allows us to remove all the custom manifest modification from `assets:precompile`.
The key changes are:
- Introduce a shared `EmberCli.enabled?` helper
- When ember-cli is enabled, add ember-cli `/dist/assets` as the top-priority Rails asset directory
- Have ember-cli output a `chunks.json` manifest, and teach `preload_script` to read it and append the correct chunks to their associated `afterFile` (a rough sketch follows this list)
- Remove most custom ember-cli logic from the `assets:precompile` step. Instead, rely on Rails to take care of pulling the 'precompiled' assets into the `public/assets` directory. Move the 'renaming' logic to runtime, so it can be used in development mode as well.
- Remove fingerprinting from `ember-cli-build`, and allow Rails to take care of things
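As a rough sketch of how `preload_script` could consume the manifest: the manifest layout and helper names below are assumptions for illustration, not the exact implementation.
```ruby
require "json"

# Hypothetical helper: look up the chunks that must be appended after a given file.
# The chunks.json structure shown here is assumed for illustration.
def chunks_for(after_file, dist_dir: "app/assets/javascripts/discourse/dist")
  manifest_path = File.join(dist_dir, "assets", "chunks.json")
  return [] unless File.exist?(manifest_path)

  manifest = JSON.parse(File.read(manifest_path))
  entry = manifest.fetch("scripts", []).find { |s| s["afterFile"] == after_file }
  entry ? entry.fetch("chunks", []) : []
end

# preload_script would then emit a script tag for each returned chunk right after `afterFile`.
```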
Long-term, we may want to replace Sprockets with the lighter-weight Propshaft. The changes made in this commit have been made with that long-term goal in mind.
tl;dr: when you visit the Rails app directly, you'll now be served the current Ember CLI assets. To keep these up to date, make sure either `ember serve` or `ember build --watch` is running. If you really want to load the old non-Ember-CLI assets, start the server with `EMBER_CLI_PROD_ASSETS=0`. (The legacy asset pipeline will be removed very soon.)
Sometimes, the 'message' portion of an exception isn't enough to work out what's happening. In these cases, including the exception class name can help with debugging.
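For example (illustrative only; `risky_operation` is a stand-in name):
```ruby
begin
  risky_operation
rescue => e
  # Including e.class makes vague messages like "end of file reached" much easier to trace.
  Rails.logger.warn("Operation failed: #{e.class} - #{e.message}")
end
```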
Also:
* Remove an unused method (#fill_email)
* Replace a method that was used just once (#generate_username) with `SecureRandom.alphanumeric`
* Remove obsolete dev Puma `tmp/restart` file logic
Under some conditions, these varied responses could lead to cache poisoning, hence the 'security' label.
Previously the Rails application would serve JSON data in place of HTML whenever Ember CLI requested an `application.html.erb`-rendered page. This commit removes that logic, and instead parses the HTML out of the standard response. This means that Rails doesn't need to customize its response for Ember CLI.
Discourse automatically sends a private message after a backup or
restore finishes. The private message used to contain the log inline
even when it was very long. A very long log can create issues because
the post will exceed the maximum allowed post length. When that
happens, Discourse will try to create an upload with the logs. If that
fails, it will trim the log and inline it.
Generating the client settings JSON involves sanitizing all string-based
site settings. Our production profile shows this is expensive (~120ms), and one request after
each deploy has to pay this penalty.
The XML parsing of SVGs is done whenever the cache expires or on the
first load after a reboot. In one of our production instances, parsing
ranges from 30ms to 70ms, which is not ideal. Instead, we've decided to
make a small memory trade-off here by memoizing the core SVGs once on
boot to avoid parsing the SVG files during a request.
The memoized hash takes up 57440 bytes, or about 0.057 megabytes.
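A minimal sketch of the memoization, assuming a `core_svgs` helper and the standard svg-icons path (both names are assumptions here):
```ruby
require "nokogiri"

module SvgSprite
  # Parse the bundled SVG sprite files once and keep the documents in memory.
  def self.core_svgs
    @core_svgs ||= Dir["#{Rails.root}/vendor/assets/svg-icons/**/*.svg"].to_h do |path|
      [File.basename(path, ".svg"), Nokogiri::XML(File.read(path))]
    end
  end
end
```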
In production, each Unicorn child process currently loads its own copy of
the default locale into memory on first use. Instead, we should preload it in
the Unicorn master process so that the memory is immediately shared when
forking.
Also, the translations are currently only memoized on first use, which
adds considerable overhead to the first few requests after a fresh
boot.
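A sketch of the warmup, assuming it runs once in the Unicorn master before forking (for example from a preload step referenced by `config/unicorn.conf.rb`):
```ruby
# Touching any translation forces the default locale to be loaded and memoized
# in the master process, so forked workers share that memory via copy-on-write.
I18n.t(:posts)
```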
This commit makes a few changes to improve boot time in development environments. It will have no effect on production boot times.
- Skip the SchemaCache warmup. In development mode, the SchemaCache is refreshed every time there is a code change, so warmup is of limited use.
- Skip warming up PrettyText. This adds ~2s to each web worker's boot time. The vast majority of requests do not use PrettyText, so it is more efficient to defer its warmup until it's needed.
- Skip the intentional 1 second pause during Unicorn worker forking. The comment (which also exists in Unicorn's documentation) suggests this works around a Unix signal handling bug, but I haven't been able to locate any more information. Skipping it in dev will significantly speed up boot. If we start to see issues, we can revert this change.
On my machine, this improves `/bin/unicorn` boot time from >10s to ~4s.
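For the last bullet above, the guard could look something like this (an illustrative sketch, not the exact diff):
```ruby
# config/unicorn.conf.rb (sketch)
before_fork do |server, worker|
  # The pause reportedly works around a Unix signal handling bug; skip it in
  # development, where it only slows down boot.
  sleep 1 unless Rails.env.development?
end
```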
* DEV: Give a nicer error when `--proxy` argument is missing
* DEV: Improve Ember CLI's bootstrap logic
Instead of having Ember CLI know which URLs to proxy or not, have it try
the URL with a special header `HTTP_X_DISCOURSE_EMBER_CLI`. If present,
and Discourse thinks we should bootstrap the application, it will
instead stop rendering and return a HTTP HEAD with a response header
telling Ember CLI to bootstrap.
In other words, any time Rails would otherwise serve up the HTML for the
Ember app, it stops and says "no, you do it."
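A simplified sketch of the server-side check; the response header and helper names are assumptions based on the description above:
```ruby
# Hypothetical before_action in ApplicationController
def bootstrap_ember_cli
  return unless request.headers["HTTP_X_DISCOURSE_EMBER_CLI"].present?

  if should_bootstrap_ember_app? # assumed helper name
    # Stop rendering and tell Ember CLI to bootstrap the app itself.
    response.headers["X-Discourse-Bootstrap-Required"] = "true" # assumed header
    head :ok
  end
end
```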
* DEV: Support asset filters by path using a new options object
Without this, Ember CLI's bootstrap would not get the assets it wants
because the path it was requesting was different from the browser path.
This adds an optional request header to fix that.
So far this is only used by the styleguide.
Followup to 5deda5ef3e
The first argument to `Open3.capture3` can be an environment variable hash. In this case, we need to insert the `timeout` command after the env hash.
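For example (the env values here are made up for illustration):
```ruby
require "open3"

env = { "MAGICK_TEMPORARY_PATH" => "/tmp" } # illustrative env hash
# The env hash must stay first; the `timeout` command is inserted immediately after it.
out, err, status = Open3.capture3(env, "timeout", "30", "convert", "in.png", "out.png")
```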
Previously, certain images could cause `convert` / `identify` to run for an
unreasonable amount of time.
This adds a maximum amount of time these commands can run before forcing
them to stop.
After the search term is parsed for advanced search filters, the term may
become empty. Later, the same term will be passed to Discourse.route_for,
which will raise an ArgumentError.
> URI(nil)
ArgumentError: bad argument (expected URI object or URI string)
- Bump rails_failover for new per-backend callback feature
- If the master backend fails over, make all sites readonly. And vice-versa for fallback
- If a single backend fails over, make that individual site readonly. And vice-versa for fallback
- When a single backend fails, also check connection to the master backend
This reverts commit e3de45359f.
We need to improve our strategy by adding a cache breaker with this change ... some assets on CDNs and clients may have incorrect CORS headers, which can cause breakage.
DEV: Replace instances of Discourse.base_uri with Discourse.base_path
This is clearer because the base_uri is actually just a path prefix. This continues the work started in 555f467.
In the near future, we will be switching to PG headlines to generate the
search blurb. As such, we need to replace audio and video links in the
raw data used for headline generation. This also means that we avoid
replacing links each time we need to generate the blurb.
We have the `# frozen_string_literal: true` comment on all our
files. This means all string literals are frozen. There is no need
to call #freeze on any literals.
For files with `# frozen_string_literal: true`
```
puts %w{a b}[0].frozen?
=> true
puts "hi".frozen?
=> true
puts "a #{1} b".frozen?
=> true
puts ("a " + "b").frozen?
=> false
puts (-("a " + "b")).frozen?
=> true
```
For more details see: https://samsaffron.com/archive/2018/02/16/reducing-string-duplication-in-ruby
* Remove some `.es6` from comments where it does not matter
* Use a post processor for transpilation
This will allow us to eventually use the directory structure, rather than
the file extension, to decide what to transpile.
* FIX: Some errors and clean up in confirm-new-email
It would throw an error if the webauthn element wasn't present.
Also I changed things so that no-module is not explicitly
referenced.
* Remove `no-module`
Instead we allow a magic comment: `// discourse-skip-module` to prevent
the asset pipeline from creating a module.
* DEV: Enable babel transpilation based on directory
If it's in `app/assets/javascripts/discourse`, it will be transpiled
even without the `.es6` extension.
* REFACTOR: Remove Tilt/ES6ModuleTranspiler
On startup (including when starting a Rails console), we manipulate a
collection of plugin files. Writing these files is done in multiple
observable steps, which presents opportunities for race conditions and
causes temporary corruption.
This commit uses the write, fsync and rename trick to atomically
overwrite these files instead, but reads them first to avoid unnecessary
writes.
c457d3bf was a previous attempt to fix the same problem.
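A minimal sketch of the pattern (file paths and naming are illustrative):
```ruby
# Write to a temp file, flush it to disk, then atomically rename over the target.
def atomic_write(path, content)
  return if File.exist?(path) && File.read(path) == content # avoid unnecessary writes

  tmp_path = "#{path}.tmp.#{Process.pid}"
  File.open(tmp_path, "w") do |f|
    f.write(content)
    f.fsync
  end
  File.rename(tmp_path, path) # rename is atomic on POSIX filesystems
end
```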
Previously we had many places in the app that called `hostname` to get
the hostname of the server. This commit replaces that pattern in 2 ways:
1. We cache the result in `Discourse.os_hostname` so it is only ever called once
2. We prefer to use `Socket.gethostname`, which avoids shelling out to a command
This improves performance as we are no longer spawning `hostname` processes throughout
the app's lifetime.
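A sketch of the cached helper (the fallback shown is illustrative):
```ruby
require "socket"

module Discourse
  def self.os_hostname
    @os_hostname ||=
      begin
        Socket.gethostname
      rescue StandardError
        `hostname`.strip # fall back to shelling out only if Socket fails
      end
  end
end
```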
Discourse.cache is a more consistent method to use and offers a clean fallback
if you are skipping Redis.
This is part of a larger change that both optimizes Discourse.cache and omits
use of setex on $redis in favor of consistently using Discourse.cache.
A benchmark does reveal that Rails.cache and Discourse.cache are about 1.25x slower
than redis.setex / get, so a re-implementation will follow prior to porting.
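An illustrative before/after; the key, TTL, and `payload` are made up for the example:
```ruby
# Before: ad-hoc Redis usage
$redis.setex("example_stats", 30.minutes.to_i, payload.to_json)

# After: consistent Discourse.cache usage with a clean fallback when Redis is skipped
Discourse.cache.write("example_stats", payload, expires_in: 30.minutes)
```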
We were having issues in development mode where the JS code had errors due to a bad cache. When starting a server in development mode via bin/unicorn, we now get the git SHA of the Discourse HEAD and the git SHA of each plugin, and store them in a file. If any SHA has changed, we delete tmp/cache to refresh the asset cache.
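A rough sketch of the mechanism (the stamp file name and paths are assumptions):
```ruby
require "fileutils"

core_sha    = `git rev-parse HEAD`.strip
plugin_shas = Dir["plugins/*"].select { |d| Dir.exist?("#{d}/.git") }
                              .map { |d| `git -C #{d} rev-parse HEAD`.strip }
current     = ([core_sha] + plugin_shas).join("\n")

hash_file = "tmp/plugin-hash" # assumed stamp file
if !File.exist?(hash_file) || File.read(hash_file) != current
  FileUtils.rm_rf("tmp/cache") # stale asset cache, rebuild from scratch
  FileUtils.mkdir_p("tmp")
  File.write(hash_file, current)
end
```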
This makes it easy to run multiple commands with the same keyword arguments. The main use is for using `chdir` across multiple commands. The `Dir.chdir` method is not concurrency safe because it switches the working directory of the entire process.
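Hypothetical usage, assuming the helper accepts shared keyword arguments and yields a runner (the exact API may differ):
```ruby
# Run several commands from the same directory without touching the process-wide cwd.
Discourse::Utils.execute_command(chdir: "/var/www/discourse") do |runner|
  runner.exec("git", "fetch", "origin")
  runner.exec("git", "status")
end
```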
This was not causing any known issue, because the system user ID is always the same across all sites. However, we should cache this on a per-site basis to be safe.
If the setting is turned on, then the user will receive information
about the subject: whether it was deleted or requires special access via
a group (only if the group is public). Otherwise, the user will receive
a generic 404 error message. For now, this change affects only the
topics and categories controllers.
This commit also tries to refactor some of the code related to error
handling. To make error pages more consistent (design-wise), the actual
error page will be rendered server-side.
Using popups is becoming increasingly rare. Full page redirects are already used on mobile, and for some providers. This commit removes all logic related to popup authentication, leaving only the full page redirect method.
For more info, see https://meta.discourse.org/t/do-we-need-popups-for-login/127988
We preload to ensure as much memory as possible is reused from the Unicorn master
by the various workers (Sidekiq, Unicorn) via copy-on-write.
This migrates the preloading code into the Discourse module for easier
reuse and adds 3 notable preloading changes:
1. We attempt to localize a string on each site, ensuring we warm up
the i18n system
2. We preload all our templates (compiling .erb to classes)
3. We warm up our search tokenizer, which uses cppjieba, a large
memory consumer; this will only trigger a warmup on CJK sites or sites with
the special site setting enabled.
* DEV: group_list site settings should store IDs instead of group names
* Ship site setting to know when we should migrate group_list settings
* Migrate existing group_list site settings
* Bump migration timestamp and don't set null when migrating is not possible.
And don't load JavaScript assets if the plugin is disabled.
* precompile auto generated plugin js assets
* SPEC: remove spec test functions
* remove plugin js from test_helper
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* DEV: using equality is slightly easier to read than inequality
Co-Authored-By: Régis Hanol <regis@hanol.fr>
* DEV: use `select` method instead of `find_all` for readability
Co-Authored-By: Régis Hanol <regis@hanol.fr>
This can cause unbound CPU usage in some cases, and excessive logging in other cases. This commit moves redis readonly information into the local process, but maintains the DistributedCache for postgres readonly state.
In sidekiq, jobs are run in multiple threads within the same process. `cd` affects the entire process, so can cause unexpected issues in other running jobs.
Sometimes we would like to create a base image without any DB access; this
assists in creating custom base images with custom plugins that already
include `public/assets`.
Following this change set you can run:
```
SPROCKETS_CONCURRENT=1 DONT_PRECOMPILE_CSS=1 SKIP_DB_AND_REDIS=1 RAILS_ENV=production bin/rake assets:precompile
```
Then it is straightforward to create a base image without needing a DB or
Redis.
This reduces the chances of errors where consumers of strings mutate inputs,
and reduces the app's memory usage.
The test suite passes now, but there may be some stuff left, so we will run
a few sites on a branch prior to merging.
If you turn it on now, all existing users default to approved, since they
were previously. Also support approving a user that doesn't have a reviewable
record (it will be created first).
This also includes a refactor to move class method calls to
`DiscourseEvent` into an initializer. Otherwise the load order of
classes makes a difference in the test environment and some settings
might be triggered and others not, randomly.
Previously jobs would fail silently in test mode. Now they will raise the exception and cause the relevant test to fail. This identified a few broken tests, which I will fix in a followup commit
- Plugin developers using OpenID2.0 should migrate to OAuth2 or OIDC. OpenID2.0 APIs will be removed in v2.4.0
- For sites requiring Yahoo login, it can be implemented using the OpenID Connect plugin: https://meta.discourse.org/t/103632
For more information, see https://meta.discourse.org/t/113249
Includes support for flags, reviewable users and queued posts, with REST API
backwards compatibility.
Co-Authored-By: romanrizzi <romanalejandro@gmail.com>
Co-Authored-By: jjaffeux <j.jaffeux@gmail.com>
Previously we relied on the provider name matching the name of the icon. Now icon names are explicitly set. Plugin providers which do not define an icon will get the default "sign-in-alt" icon
tar exits with status 1 when uploads are modified or deleted by a sidekiq job, so we need to treat it like status 0.
According to the documentation it should be safe to ignore status 1 ("Some files differ"):
> If tar was given `--create', `--append' or `--update' option, this exit code means that some files were changed while being archived and so the resulting archive does not contain the exact copy of the file set.
Status 2 ("Fatal error") still results in an exception.