The `posts` relation on `Topic` is not ordered. Using `Topic.posts.first`
is basically the same as asking for a random post: it depends on DB
order. This breaks on topic merge and split, for example.
Additionally, a huge problem is that it forces ActiveRecord down
a slow path. `Topic.posts.first` is extremely slow on giant topics: since
the relation has no default ordering, AR appears to materialize the entire
set before doing `first`.
This commit also illustrates the importance of testing: initially I only
fixed the second instance of the problem in `post_validator.rb`, but testing
revealed that the problem was repeated at the top of the file.
Longer term we should consider a larger change of giving the posts relation
a default order so people do not fall into this trap anymore.
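A minimal sketch of the immediate fix and the longer-term idea; ordering
by `post_number` is an assumption about the right column:
```ruby
# Explicit ordering makes `first` deterministic and lets the DB stop early:
first_post = topic.posts.order(:post_number).first

# The longer-term idea: give the relation a default order.
class Topic < ActiveRecord::Base
  has_many :posts, -> { order(:post_number) }
end
```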
- Themes can supply translation files in a format like `/locales/{locale}.yml`. These files should be valid YAML, with a single top level key equal to the locale being defined (see the sketch after this list). For now these can only be defined using the `discourse_theme` CLI, importing a `.tar.gz`, or from a git repository.
- Fallback is handled on a global level (if the locale is not defined in the theme), as well as on individual keys (if some keys are missing from the selected interface language).
- Administrators can override individual keys on a per-theme basis in the /admin/customize/themes user interface.
- Theme developers should access defined translations using the new theme prefix variables:
JavaScript: `I18n.t(themePrefix("my_translation_key"))`
Handlebars: `{{theme-i18n "my_translation_key"}}` or `{{i18n (theme-prefix "my_translation_key")}}`
- For backwards compatibility, theme developers can check for the presence of the `themePrefix` variable in JavaScript
- As part of this, the old `{{themeSetting.setting_name}}` syntax is deprecated in favour of `{{theme-setting "setting_name"}}`
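For illustration, a theme locale file might look like this; the keys below
are hypothetical, and the only requirement described above is the single
top-level locale key:
```yaml
# locales/en.yml (hypothetical keys)
en:
  my_translation_key: "Hello from the theme"
  composer:
    placeholder: "Write something"
```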
This commit introduces an ultra low priority queue for post rebakes. This
way rebakes can never interfere with regular sidekiq processing for cases
where we perform a large scale rebake.
Additionally it allows `Post.rebake_old` to be run with `rate_limiter: false`
to avoid triggering the limiter when rebaking. This is handy for cases
where you want to just force the full rebake and not wait for it to trickle in.
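A sketch of forcing a full rebake from the console; the limit argument is
an assumption about the signature of `rebake_old`:
```ruby
# Rebake up to 100,000 old posts right away, skipping the rate limiter.
Post.rebake_old(100_000, rate_limiter: false)
```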
This corrects 2 issues:
First is a regression from d7c08e21: for some reason `dependent: :delete_all`
respects default scopes, whereas `dependent: :destroy` bypasses them.
Secondly, we were keeping orphan user actions around on user destroy; this
ensures we remove all the user actions, not only the ones originated by
the user.
So for example: if I like a post by user A, we create a user action recording
that I did it, but once user A was deleted we were not removing the action,
leaving an orphan action in the database.
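A hedged illustration of the `delete_all` vs `destroy` difference described
above; the default scope here is hypothetical:
```ruby
class UserAction < ActiveRecord::Base
  default_scope { where(hidden: false) } # hypothetical scope
end

class User < ActiveRecord::Base
  # :delete_all issues a DELETE through the relation, so the default
  # scope applies and the hidden rows survive as orphans.
  has_many :user_actions, dependent: :delete_all

  # :destroy loads and destroys each record, bypassing the default scope.
  # has_many :user_actions, dependent: :destroy
end
```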
We use the `id` of the upload to calculate a `depth` partition in the
filename. This test would fail if your database's id sequences were
higher, because the depth it was looking for was hard-coded to 1.
The solution was to not save the records (which is faster anyway) and
specify the `id` of the upload to make the hash deterministic.
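A sketch of the resulting spec pattern; the factory name and usage are
assumptions:
```ruby
# Build without saving and pin the id, so the computed depth and hash
# never depend on the database's current sequence values.
upload = Fabricate.build(:upload, id: 1)
```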
SiteSettingExtension triggers the message bus, which re-establishes a
DB connection in `SiteSettingExtension#process_message`. That happens
concurrently, so a test that requires a connection to the DB will
fail while the reconnection is happening.
This makes more sense than having the guardian take an accessor.
The logic belongs in the Serializer, where the JSON is calculated.
Also removed some of the DRYness in the spec. It's fewer lines
and makes it easier to test the option on the serializer.
This commit introduces a new helper to enable transactional fixtures
when testing multisite. Without it, this would show up as tests that passed
the first time then failed the second time due to stale data being left over.
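A hypothetical shape for such a helper; the name and multisite API usage
are assumptions:
```ruby
# Run a block against the named multisite connection inside a
# transaction that is always rolled back, so no data leaks between runs.
def with_multisite_db(name)
  RailsMultisite::ConnectionManagement.with_connection(name) do
    ActiveRecord::Base.transaction(joinable: false) do
      yield
      raise ActiveRecord::Rollback
    end
  end
end
```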
Previously we depended on non-Sidekiq-specific mocking, which is not the
official way of testing Sidekiq; this made these tests very fragile.
The new testing is more robust and complete.
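A sketch of the official style via `sidekiq/testing`; the job class here
is hypothetical:
```ruby
require "sidekiq/testing"

Sidekiq::Testing.fake! do
  # Jobs are pushed to an in-memory array instead of Redis.
  Jobs::ProcessPost.perform_async("post_id" => 1)
  expect(Jobs::ProcessPost.jobs.size).to eq(1)
end
```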
This allows us to run regular rebakes without starving the normal queue.
It additionally adds the ability to specify the queue with `Jobs.enqueue`,
so we can queue a job at lower priority using the `queue` arg.
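Hypothetical usage; the job and queue names are assumptions based on the
description above:
```ruby
# Enqueue a rebake on a low priority queue so it never starves
# regular Sidekiq processing.
Jobs.enqueue(:process_post, post_id: post.id, queue: "ultra_low")
```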
Previously we killed caching on old avatars because we kept serving blanks;
this meant we would serve many more avatar requests after a version change.
This change ensures old avatars do not cause a flood of requests on the
server.
Before this patch, a high trust level user could flag something,
have an action be taken, and skip the flag queue entirely.
Now, if a TL3/TL4 user causes an action, the flag will skip the minimum
visibility check and allow staff to review it.
This is a possible solution for https://meta.discourse.org/t/user-api-keys-specification/48536/19
This allows user-api-key requests to not require a redirect URL.
Instead, the encrypted payload will just be displayed after creation
(it can be copy-pasted into an env var for a CLI, for example).
Also: show instructions when creating a user-api-key without a redirect.
This adds a view to show instructions when requesting a user-api-key
without a redirect, using an ERB template and a JSON format.
Also adds an i18n key, `user_api_key.instructions`, to server.en.yml.
We have a periodic job that regularly rebakes old posts. This is
used to trickle in updates to cooked markdown. The problem is that each rebake
can issue multiple background jobs (post process and pull hotlinked images).
Previously we had no per-cluster limit, so a cluster running hundreds of sites
could flood the Sidekiq queue with rebake-related jobs.
The new system introduces a hard limit of 300 rebakes per 15 minutes across a
cluster to ensure Sidekiq is not dominated by this.
We also reduced `rebake_old_posts_count` to 80, which is a safer default.
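A sketch of a cluster-wide cap built on Discourse's `RateLimiter`; the key
name is an assumption:
```ruby
# Allow at most 300 rebakes per 15 minutes across the whole cluster.
limiter = RateLimiter.new(nil, "periodic_rebakes", 300, 15.minutes)
limiter.performed! # raises RateLimiter::LimitExceeded once the cap is hit
```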
This reverts commit 993f847a2c.
There is an edge case where the link click redirect fails when the URL has a trailing slash. Need to figure out a better fix for this.
Previously we had no idea what algorithm generated thumbnails, this starts tracking the version.
We also bumped up the version to force all optimized images to be regenerated. This is important because we recently introduced pngquant, which results in much smaller images.
This feature ensures optimized images run through pngquant, which yields extreme savings for resized images. Effectively the only impact is that the color palette on small resized images is reduced to 256 colors.
To ensure safety we only apply this optimisation to images smaller than 500k.
This commit also makes a bunch of image specs less fragile.
Previously, if an upload was missing width and height we would calculate
them on first use, BUT we (me) forgot to save the result to the database.
This was particularly bad on the home page because old category images
are missing dimensions.
This allows fidelity in controlling excerpt (text that shows up when you pin a topic or link to it externally):
```
I am some text
[excerpt]
This is some **custom** markdown that should be the excerpt
[/excerpt]
More text
```
The previous solution relied on DIVs; unfortunately DIVs, by design, do not
play well with mixed markdown unless you have a preceding newline, e.g.:
```
<div class='hello'>

this will be treated properly as markdown
</div>
```
This extra newline is not desirable.
I am also considering adding
```
[div class=excerpt]
[/div]
```
This would offer lots of flexibility to themes and plugins that do not want the extra annoying newline.
As per the documentation for KEYS:
```
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout.
```
Instead, use SCAN:
```
Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.
```
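A sketch of the incremental alternative using redis-rb; the key pattern is
illustrative:
```ruby
# scan_each walks the keyspace in small batches instead of blocking
# the server the way KEYS does.
redis.scan_each(match: "some-prefix:*") { |key| redis.del(key) }
```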
This generates a 10x10 PNG thumbnail for each lightboxed image.
If image lazy loading is enabled (via the IntersectionObserver API) then
we'll load the low-res version when offscreen. As the image scrolls
into view we'll swap it for the high-res version.
We use a WeakMap to track the old image attributes. It uses much less
memory than storing them as `data-*` attributes and swapping them
back and forth all the time.
This validation makes sure that the s3_upload_bucket and the
s3_backup_bucket have different values. The backup bucket is
allowed to be a subfolder of the upload bucket. The other way
around is forbidden because the backup system searches by
prefix and would return all files stored within the backup
bucket and its subfolders.
* Dashboard doesn't time out anymore when Amazon S3 is used for backups
* Storage stats are now a proper report with the same caching rules
* Changing the backup_location, s3_backup_bucket or creating and deleting backups removes the report from the cache
* It shows the number of backups and the backup location
* It shows the used space for the correct backup location instead of always showing used space on local storage
* It shows the date of the last backup as relative date
`SiteSerializer#is_readonly` is cached for an anonymous user so we have
to clear the cache when disabling readonly mode. Otherwise, the site may
appear to be in readonly mode for an extended period of time.
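A sketch of the fix; the constant, channel, and cache-clearing method names
are assumptions:
```ruby
def self.disable_readonly_mode
  $redis.del(READONLY_MODE_KEY)
  # Expire the cached anonymous site JSON so is_readonly is recomputed.
  Site.clear_anon_cache!
  MessageBus.publish(readonly_channel, false)
end
```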
Historically, due to https://meta.discourse.org/t/why-is-discourse-so-slow-on-android/8823,
we halved the page sizes of both the home page and topic pages on Android.
This was done on the server side and, as a side effect, caused page sizes
to mismatch between Android and non-Android clients.
Unfortunately, about a year ago Googlebot started pretending it is Android;
this caused Google to index pages as Android would see them, so it saw
double the number of pages in the index compared to what exists on desktop.
This in turn caused double the amount of indexing work and a large number
of broken links on long topics.
This fix removes all the special behavior, which is no longer needed due to
other performance work in Discourse, including raw handlebars on the home
page and virtual DOM on topic pages.
I tested that we do not need this on a Blu Advance 5.0, which has a
1.3 GHz MediaTek MT6580 and retails for around $50 USD.
If we decide long term that we want any hacks like this we will shift them
to the client side. It can just hold data in memory without rendering.
Previously the 'reconnect' process was a bit magic: IF you were already logged into Discourse and followed the auth flow, your account would be reconnected and you would be 'logged in again'.
Now, we explicitly check for a reconnect=true parameter when the flow is started, store it in the session, and then only follow the reconnect logic if that variable is present. Setting this parameter also skips the 'logged in again' step, which means reconnect now works with 2fa enabled.
Some URLs in browsers are non-compliant and contain two `#` characters. This
commit adds special handling for this edge case by auto-encoding any
fragments containing `#`.
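A sketch of the idea in Ruby (illustrative, not the exact patch):
```ruby
url = "https://example.com/page#one#two"
base, _, fragment = url.partition("#")
# Encode any further '#' inside the fragment so the URL parses cleanly.
encoded = "#{base}##{fragment.gsub("#", "%23")}"
# => "https://example.com/page#one%23two"
```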
Do not send an activation email to users invited via email. They
already confirmed their email address by clicking the invite link.
Users invited via link will need to confirm their email address before
they can log in.
The wizard searches for:
* a topic with the "is_welcome_topic" custom field
* a topic with the correct slug for the current default locale
* a topic with the correct slug for the English locale
* the oldest globally pinned topic
It gives up if it doesn't find any of the above.
This was an indentation mistake introduced in 44eba0b. Pretty understandable, considering we are indented 8 levels deep in this method. Will follow up with a refactor to improve this.
Previously the push notification code path was not tested for notification
collapsing. This happens if you get multiple replies to a topic you are
watching.
Previously we would notify on small actions if they were whispers;
this inconsistently led to all sorts of problems, including:
- collapsed "N replies" after assign
- empty push notifications

The new behavior adds an API to explicitly send push notifications
when needed: `create_notification_alert`.
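Hypothetical usage of the new API; the receiver and keyword arguments are
assumptions:
```ruby
# Explicitly push an alert for a notification that should reach devices.
PostAlerter.create_notification_alert(
  user: user,
  post: post,
  notification_type: Notification.types[:replied]
)
```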
Previously users could control the excerpt with `<span class='excerpt'>`
in Markdown; this is somewhat limiting for plugins that need to define an
excerpt across a whole section. This adds support for DIV as well, as
shown below.
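Illustrative markup a plugin could now emit:
```
<div class='excerpt'>
Multiple paragraphs that should
form the excerpt.
</div>
```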
UserStat has some special logic to keep adding time read if repeated calls
are made at intervals of less than 100 seconds. This happens regularly
when we update read timings on a topic.

We only need to cache this key in Redis for 100 seconds; however, previously
we would keep it forever, 1 key per user. This had the potential of bloating
Redis with a very large number of keys for no-longer-active users.
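A sketch of the fix; the key name and value are assumptions:
```ruby
# Give the key a TTL matching the 100 second window instead of
# persisting it forever.
$redis.setex("user-stat-timings:#{user.id}", 100, Time.now.to_i)
```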
Also added some helpful functionality for plugin developers:
- Raises a RuntimeError if the auth provider has been registered too late
- Logs use of deprecated parameters
* FEATURE: allow plugins and themes to extend the default CSP
For plugins:
```
extend_content_security_policy(
script_src: ['https://domain.com/script.js', 'https://your-cdn.com/'],
style_src: ['https://domain.com/style.css']
)
```
For themes and components:
```
extend_content_security_policy:
type: list
default: "script_src:https://domain.com/|style_src:https://domain.com"
```
* clear CSP base url before each test, since we have a test that stubs
`Rails.env.development?` to true
* Only allow extending directives that core includes, for now
Changes to functionality:
- Removed syncing of user metadata including gender, location etc.
These are no longer available to standard Facebook applications.
- Removed the remote 'revoke' functionality. No other providers have
it, and it does not appear to be standard practice in other apps.
- The 'facebook_no_email' event is no longer logged. The system can
cope fine with a missing email address.
Data is migrated to the new user_associated_accounts table.
facebook_user_infos can be dropped once we are confident the data has
been migrated successfully.
A generic implementation of Auth::Authenticator which stores data in the
new UserAssociatedAccount model. This should help significantly reduce the
logic duplicated across different auth providers.
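A hypothetical provider built on the generic authenticator; the class names
below are assumptions:
```ruby
# Most provider-specific authenticators should now shrink to roughly this.
class Auth::FacebookAuthenticator < Auth::ManagedAuthenticator
  def name
    "facebook"
  end
end
```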
After you visit a page in Rails, an INFO line is logged; depending on
timing, this could land in the string or not.
This changes the level to WARN, which avoids the issue.
Previously, the site setting was only effective on the client side of
things: once its limit had been reached, no further oneboxes were
rendered. This commit changes it such that the site setting is respected
on both the client and server side. The first N oneboxes are rendered and,
once the limit has been reached, subsequent oneboxes are not.
* Add missing icons to set
* Revert FA5 revert
This reverts commit 42572ff
* use new SVG syntax in locales
* Noscript page changes (remove login button, center "powered by" footer text)
* Cast wider net for SVG icons in settings
- include any _icon setting for SVG registry (offers better support for plugin settings)
- let themes store multiple pipe-delimited icons in a setting
- also replaces broken onebox image icon with SVG reference in cooked post processor
* interpolate icons in locales
* Fix composer whisper icon alignment
* Add support for stacked icons
* SECURITY: enforce hostname to match discourse hostname
This ensures that the hostname Rails uses for various helpers always matches
the Discourse hostname
* load SVG sprite with pre-initializers
* FIX: enable caching on SVG sprites
* PERF: use JSONP for SVG sprites so they are served from CDN
This avoids needing to deal with CORS for loading of the SVG
Note: we added the svg- prefix to the filename so we can quickly tell in
dev tools what the file is
* Add missing SVG sprite JSONP script to CSP
* Upgrade to FA 5.5.0
* Add support for all FA4.7 icons
- adds complete frontend and backend for renamed FA4.7 icons
- improves performance of SvgSprite.bundle and SvgSprite.all_icons
* Fix group avatar flair preview
- adds an endpoint at /svg-sprites/search/:keyword
- adds a frontend AJAX call that pulls the icon into the avatar flair preview even when it is not in the subset
* Remove FA 4.7 font files
We were looking up each mention one by one without any form of caching,
which results in a problem somewhat similar to an N+1. When we have to do
a lot of DB lookups, it also increases the time spent in the V8 context,
which may eventually lead to a timeout. The change here makes mention
lookups a single DB query per post, executed outside of the V8 context.
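A sketch of the batched lookup; the exact query shape is an assumption:
```ruby
# One query for all mentioned usernames instead of one query each.
mentions = %w[alice bob carol]
valid = User.where(username_lower: mentions.map(&:downcase))
            .pluck(:username_lower)
```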
Previously, in some cases we would enqueue logging of invalid post numbers.
The impact was that we would miss logging an incoming link and would leak
an error.
This also adjusts the algorithm to expect:
- a 30% saving for JPEG conversion, AND
- a minimum of 75K bytes saved

The reasoning for increasing the saving requirements is that a PNG may have
been uploaded unoptimized; a 30% saving on a PNG is very possible.
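A sketch of the new acceptance check; names are assumptions:
```ruby
# Only keep a PNG-to-JPEG conversion when both thresholds are met.
def keep_jpeg?(original_bytes, converted_bytes)
  saved = original_bytes - converted_bytes
  saved >= 75_000 && saved >= original_bytes * 0.3
end
```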