* user_profiles - `location`
* users - `date_of_birth`
* topics - `pinned_at`, `pinned_until`, `pinned_globally`
This also includes changes to correctly import PMs. Currently, PM topics
are skipped because of a check in the `import_users` step which requires
`category_id` to be present.
This commit updates `script/bench.rb` to only support Unicorn as the web
server. We don't intend to run Puma in production anytime soon, so it is
pointless for us to maintain Puma-related code.
This SQL tries to insert as much data as possible into the `user_stats` table, either by calculating stats or by approximating them based on existing data. It also fixes an error in the calculation of `reply_count`, which mistakenly counted all posts, not just replies.
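As a rough sketch of the corrected `reply_count` logic (assuming replies are posts with `post_number > 1`, and using the mini_sql `DB.exec` helper available in Discourse scripts; the exact query in the commit differs):

```ruby
# Sketch only: count replies (post_number > 1) rather than all posts
DB.exec(<<~SQL)
  UPDATE user_stats us
  SET reply_count = p.replies
  FROM (
    SELECT user_id, COUNT(*) AS replies
    FROM posts
    WHERE post_number > 1
    GROUP BY user_id
  ) p
  WHERE us.user_id = p.user_id
SQL
```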
This change also disables some steps in the `import:ensure_consistency` rake task by setting the `SKIP_USER_STATS` env variable. Otherwise, the rake task will overwrite the calculated data in the `user_stats` table with inaccurate data. I'm not changing or removing the logic from the rake task yet because other bulk import scripts seem to depend on it.
* FEATURE: Import into `category_users` table
* FIX: Failed to import `user_options` unless `timezone` was set
* FIX: Prevent reusing original `id` from intermediate DB in `user_fields`
* FEATURE: Order posts by `post_number` if available
* FEATURE: Allow `[mention]` placeholder to reference users by "id" or "name" (username)
* FEATURE: Support `[quote]` placeholders in posts
* FEATURE: Support `[link]` placeholders in posts
* FEATURE: Support all kinds of permalinks and remove support for `old_relative_url`
* PERF: Speed up pre-cooking by removing DB lookups
* Print instructions when the `sqlite3` gem can't be loaded
* Use `display_filename` instead of `filename` if available
* Support uploading for a multisite
Why this change?
In https://www.postgresql.org/docs/current/non-durability.html, it is
recommended to turn off `synchronous_commit` in environments where
durability is not important. The `start_test_db.rb` script is mainly
used in the CI environment where durability is not important at all.
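A minimal sketch of how the script might apply this, assuming it appends to the test cluster's `postgresql.conf` (the `data_dir` variable here is hypothetical):

```ruby
# Sketch: durability is irrelevant for a throwaway CI database,
# so turn off synchronous_commit cluster-wide
File.open("#{data_dir}/postgresql.conf", "a") do |f|
  f.puts "synchronous_commit = off"
end
```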
Previously only Sidekiq was allowed to generate more than one optimized image at the same time per machine. This adds an easy mechanism to allow the same in rake tasks and other tools.
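A hypothetical sketch of what opting in from a rake task might look like (`allow_concurrency` is an assumed name for the new flag, not necessarily the actual API):

```ruby
# Hypothetical flag: permit this process to generate optimized
# images concurrently, as Sidekiq already could
OptimizedImage.allow_concurrency = true
```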
The `regen_ember_5_lockfile` script was actually just duplicating the Ember 3 lockfile without changes 🤦‍♂️. This commit fixes that, and updates the ember-version-enforcement workflow to detect lockfile issues in the future.
Why this change?
Per the docs, `github.job` returns the `job_id`, but strangely it actually
returns the job's name instead of its id.
Per https://github.com/orgs/community/discussions/8945, there is no way
to get the `job_id` from the existing contexts in the actions run.
Therefore, we have to hit GitHub's API to fetch it. Not ideal, but there
is no way around it.
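A minimal sketch of the lookup, assuming the standard "list jobs for a workflow run" REST endpoint and matching on the job name (`repo`, `run_id`, `job_name`, and `token` are placeholders):

```ruby
require "json"
require "net/http"

# GET /repos/{owner}/{repo}/actions/runs/{run_id}/jobs lists the run's
# jobs, each with a numeric `id` and a `name` we can match against
uri = URI("https://api.github.com/repos/#{repo}/actions/runs/#{run_id}/jobs")
request = Net::HTTP::Get.new(uri)
request["Authorization"] = "Bearer #{token}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
job = JSON.parse(response.body)["jobs"].find { |j| j["name"] == job_name }
job_id = job&.fetch("id")
```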
Before this commit, this output is possible:
```
Rewriting all occurrences of STRING1 to STRING2
THIS TASK WILL REWRITE DATA, ARE YOU SURE (type YES)
WILL RUN ON ALL 1 DBS
```
When run from a script, this might lead one to believe that YES was
automatically inserted into STDIN and that the script is continuing.
It turns out this isn't the case, so the obvious expectation is broken.
This commit swaps the order of those last lines to make it clear that
the script is blocked on input.
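After this commit, the prompt is printed last, making it obvious that the task is waiting for input:

```
Rewriting all occurrences of STRING1 to STRING2
WILL RUN ON ALL 1 DBS
THIS TASK WILL REWRITE DATA, ARE YOU SURE (type YES)
```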
Notable changes:
* Imports a lot more tables from core and plugins
* site settings
* uploads with necessary upload references
* groups and group members
* user profiles
* user options
* user fields & values
* muted users
* user notes (plugin)
* user followers (plugin)
* user avatars
* tag groups and tags
* tag users (notification settings for tags / user)
* category permissions
* polls with options and votes
* post votes (plugin)
* solutions (plugin)
* gamification scores (plugin)
* events (plugin)
* badges and badge groupings
* user badges
* optimized images
* topic users (notification settings for topics)
* post custom fields
* permalinks and permalink normalizations
* Creates the `migration_mappings` table, which is used to store mappings for a handful of imported tables
* Detects duplicate group names and renames them
* Pre-cooking for attachments, images and mentions
* Outputs instructions when gems are missing
* Supports importing uploads from a DB generated by `uploads_importer.rb`
* Checks that all required plugins exist and enables them if needed
* A couple of optimizations and additions in `import.rake`
This script preprocesses all uploads within an intermediate DB (the output of converters) and uploads those files to S3. It does the same for optimized images. This speeds up migrations when you have to run them multiple times, because you only have to preprocess and upload the files once.
This script is very hacky and mostly undocumented for now. That will change in the future.
This commit introduces the scaffolding for us to easily switch between Ember 3.28 and Ember 5 on the `main` branch of Discourse. Unfortunately, there is no built-in system to apply this kind of flagging within yarn / ember-cli. There are projects like `ember-try` which are designed for running against multiple versions of a dependency, but they do not allow us to 'lock' dependency/sub-dependency versions, and are therefore unsuitable for our use in production.
Instead, we will be maintaining two root `package.json` files, and two `yarn.lock` files. For ember-3, they remain as-is. For ember5, we use a yarn 'resolution' to override the version for ember-source across the entire yarn workspace.
To allow for easy switching with minimal diff against the repository, `package.json` and `yarn.lock` are symlinks which point to `package-ember3.json` and `yarn-ember3.lock` by default. To switch to Ember 5, we can run `script/switch ember version 5` to update the symlinks to point to `package-ember5.json` and `yarn-ember5.lock` respectively. In production, and when using `bin/ember-cli` for development, the Ember version can also be set using the `EMBER_VERSION=5` environment variable.
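Conceptually, the switch script just repoints the two symlinks; a simplified sketch (not the actual script):

```ruby
# Sketch: repoint package.json and yarn.lock at the requested Ember version
version = ARGV[0] || "3" # "3" or "5"
{ "package.json" => "package-ember#{version}.json",
  "yarn.lock"    => "yarn-ember#{version}.lock" }.each do |link, target|
  File.delete(link) if File.symlink?(link)
  File.symlink(target, link)
end
```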
When making changes to dependencies, these should be made against the default `ember3` versions, and then `script/regen_ember_5_lockfile` should be used to regenerate `yarn-ember5.lock` accordingly. A new 'Ember Version Lockfiles' GitHub workflow will automate this process on Dependabot PRs.
When running a local environment against Ember 5, the two symlink changes will show up as git diffs. To avoid us accidentally committing/pushing that change, another GitHub workflow is introduced which checks the default Ember version and raises an error if it is greater than v3.
Supporting two ember versions simultaneously obviously carries significant overhead, so our aim will be to get themes/plugins updated as quickly as possible, and then drop this flag.
The priority field in an SRV RR indicates a preferential order at which
the underlying targets should be utilised. We need to prefer healthy
services in order of priority, where 0 is highest.
Prior to this commit, we relied on whatever order the
`dnsclient.getresources` method returned. As it turns out, this assumption
is incorrect. The order returned is likely whatever order the system
resolver received DNS responses in, which may not be ordered according
to the spec.
This introduces a ResolvedAddress type which holds the priority value
for SRV targets, or a stand-in priority of zero for A/AAAA RRs. This
type is used as a return value from the underlying name resolution
routines in Name and SRVName.
In this manner, all ordering by priority and resolved time can be
performed directly within the ResolverCache class and calling code can
continue to be none the wiser.
Before sorting, we still ensure that we only consider targets with a
priority within the given threshold as previously implemented.
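In outline, the new type and ordering might look like this (a sketch; the field and variable names are illustrative, not the actual implementation):

```ruby
# A resolved target: SRV records carry their RR priority,
# A/AAAA records get a stand-in priority of zero
ResolvedAddress = Struct.new(:address, :priority, :resolved_at, keyword_init: true)

# Keep only targets within the priority threshold, then prefer
# the lowest priority value (0 = most preferred), breaking ties
# by resolved time
eligible = addresses.select { |a| a.priority <= priority_threshold }
eligible.sort_by { |a| [a.priority, a.resolved_at] }
```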
See t/115911.
These updates significantly improve IDE tooling for imports across the Discourse core codebase, and also for framework packages. The `@types/ember-*` packages are a temporary solution until we get onto Ember 5, which ships its types in the main package.
The previous approach of having jsconfig files in each package directory did work, but once you start adding all the possible interlinks between them, we hit the file count limit of VSCode's tooling (because it counts every file for every jsconfig it's referenced in). Having one file at the root means that a single file can apply to all core packages and plugins.
Long-term, to get the same functionality for all themes/plugins, we may need to look at building/publishing a Discourse types package which can be added to theme/plugin package.json files for development purposes.
Discourse stopped using `PostUpload` in 9db8f00b3d. Since then, these importers have been writing to the table, but any data was totally unused. This commit updates the easy cases to use `UploadReference`, and adds an error to the `discourse_merger` import script, which needs more significant work.
Discourse core now builds and runs with Embroider! This commit adds
the Embroider-based build pipeline (`USE_EMBROIDER=1`) and starts
testing it on CI.
The new pipeline uses Embroider's compat mode + webpack bundler to
build Discourse code, and leaves everything else (admin, wizard,
markdown-it, plugins, etc.) exactly the same, using the existing
Broccoli-based build as external bundles (`<script>` tags) passed
to the build as `extraPublicTrees` (which just means they get
placed in the `/public` folder).
At runtime, these "external" bundles are glued back together with
`loader.js`. Specifically, the external bundles are compiled as
AMD modules (just as they were before) and registered with the
global `loader.js` instance. They expect their `import`s (outside
of whatever is included in the bundle) to be already available in
the `loader.js` runtime registry.
In the classic build, _every_ module gets compiled into AMD and
gets added to the `loader.js` runtime registry. In Embroider,
the goal is to do this as little as possible, to give the bundler
more flexibility to optimize modules, or omit them entirely if it
is confident that the module is unused (i.e. tree-shaking).
Even in the most compatible mode, there are cases where Embroider
is confident enough to omit modules in the runtime `loader.js`
registry (notably, "auto-imported" non-addon NPM packages). So we
have to be mindful of that and manage those dependencies ourselves,
as seen in #22703.
In the longer term, we will look into using modern features (such
as `import()`) to express these inter-dependencies.
This will only be behind a flag for a short period of time while we
perform some final testing. Within the next few weeks, we intend
to enable it by default and remove the flag.
---------
Co-authored-by: David Taylor <david@taylorhq.com>
## What is the context here?
The `docker.rake` Rakefile contains Rake tasks that are meant to be run
in the `discourse/discourse_test:release` Docker image. For example, we
have the `docker:test` Rake task that makes it easier to run the test
suite for a particular Discourse commit.
## Why are we introducing a `docker:test:setup` Rake task?
While we have the `docker:test` Rake task, it is very limited in the
test commands that can be executed. It is very useful for automated
testing but not very useful for running tests in the development
environment. Therefore, we are introducing a `docker:test:setup` Rake
task that can be used to set up the test environment for running tests.
The envisioned example usage is something like this:
```
docker run -d --name=discourse_test --entrypoint=/sbin/boot discourse/discourse_test:release
docker exec -u discourse:discourse discourse_test ruby script/docker_test.rb --no-tests
docker exec -u discourse:discourse discourse_test bundle exec rake docker:test:setup
docker exec -u discourse:discourse discourse_test bundle exec rspec <path to file>
```
This commit adds some system specs to test uploads with
direct-to-S3 single and multipart uploads via uppy. This
is done with minio as a local S3 replacement. We are doing
this to catch regressions when uppy dependencies need to
be upgraded or when we change uppy upload code; before
this, the only way to know whether such changes would
cause regressions was manual testing.
Minio's server lifecycle and the installed binaries are managed
by the https://github.com/discourse/minio_runner gem, though the
binaries are already installed on the discourse_test image we run
GitHub CI from.
These tests will only run in CI unless you specifically set the
`CI=1` or `RUN_S3_SYSTEM_SPECS=1` env vars.
For a history of experimentation here see https://github.com/discourse/discourse/pull/22381
Related PRs:
* https://github.com/discourse/minio_runner/pull/1
* https://github.com/discourse/minio_runner/pull/2
* https://github.com/discourse/minio_runner/pull/3
This is a very simple import script that currently imports only the following content:
* Users
* Messages as Discourse topics/posts
* Attachments
Each channel can be mapped to a category and tags. The script uses regular expressions to convert formatted messages ("rich text") into the Markdown used by Discourse. In the future, we could convert the `blocks` attribute of each message into Markdown instead of applying regular expressions to the `text` attribute.
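For illustration, the regex-based conversions are along these lines (a sketch; the exact patterns in the script differ):

```ruby
# Sketch: map common chat "rich text" markup to Discourse Markdown
def to_markdown(text)
  text
    .gsub(/\*(.+?)\*/, "**\\1**")  # *bold*   -> **bold**
    .gsub(/_(.+?)_/, "*\\1*")      # _italic_ -> *italic*
    .gsub(/~(.+?)~/, "~~\\1~~")    # ~strike~ -> ~~strike~~
end
```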
Various migration scripts define a `normalize_raw` method to do custom processing of post contents before storing them in `Post#raw` and other fields.
These methods normally do not handle nil inputs, but nil is a relatively common occurrence in data dumps.
Since this method is called from various points in a migration script, the experience as it stands is that the script fails multiple times at different points, forcing you to fix the data or apply logic hacks and restart every time.
This PR generalizes handling of nil input by returning a `<missing>` string.
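The generalized guard is essentially this (sketch):

```ruby
def normalize_raw(raw)
  return "<missing>" if raw.nil?
  # ... existing per-script normalization ...
end
```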
Pros:
* no more messy repeated crashes + restarts
* consistency

Cons:
* it might hide data issues

OTOH, we can't print a warning in that method because it would flood the console, since it's called from inside loops.
* FIX: zendesk import script: support nil inputs in normalize_raw
* FIX: return '<missing>' instead of empty string; do it for all methods