Historically, due to https://meta.discourse.org/t/why-is-discourse-so-slow-on-android/8823,
we halved the page sizes of both the home page and topic pages on Android.
This was done on the server side and, as a side effect, caused page sizes
to differ between Android and non-Android clients.
Unfortunately, about a year ago Googlebot started pretending it is Android,
which caused Google to index pages as an Android client would see them. As a
result, it saw double the number of pages in the index compared to what exists
on desktop. This in turn caused double the amount of indexing work and a large
number of broken links on long topics.
This fix removes all of the special behavior, which is no longer needed due to
other performance work in Discourse, including raw Handlebars on the home page
and a virtual DOM on topic pages.
I tested that we do not need this on a Blu Advance 5.0, which has a 1.3 GHz
MediaTek MT6580 and retails for around $50 USD.
If we decide long term that we want any hacks like this, we will shift them
to the client side, which can simply hold data in memory without rendering it.
This feature is used for deferred loading of images and, in the future, for post cloaking.
This gives us a polyfill so we can safely use the feature in problem browsers.
The polyfill supports "polling", but it does not appear we need it yet.
If we discover anything odd here, consider setting poll interval per:
https://github.com/w3c/IntersectionObserver/tree/master/polyfill
```
var io = new IntersectionObserver(callback);
io.POLL_INTERVAL = 100; // Time in milliseconds.
```
We are keeping the mutation observer because we often mutate the DOM.
Previously, the 'reconnect' process was a bit magic: if you were already logged into Discourse and followed the auth flow, your account would be reconnected and you would be 'logged in again'.
Now, we explicitly check for a reconnect=true parameter when the flow is started, store it in the session, and then only follow the reconnect logic if that variable is present. Setting this parameter also skips the 'logged in again' step, which means reconnect now works with 2FA enabled.
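A rough sketch of the flow (the session key and helper below are illustrative assumptions, not the actual implementation):
```
# Sketch only: :auth_reconnect and connect_account are illustrative names.
# When the auth flow starts, remember that this is a reconnect request:
session[:auth_reconnect] = true if params[:reconnect] == "true"

# When the provider calls back, only run the reconnect logic if the flag
# was stored, and skip the 'logged in again' step:
if session.delete(:auth_reconnect) && current_user
  connect_account(auth_result, current_user) # hypothetical helper
end
```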
Some URLs in browsers are non-compliant and contain two `#` characters. This commit adds
special handling for this edge case by automatically encoding any `#` inside the fragment.
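For illustration, a sketch of the encoding (not the actual helper used):
```
# Illustrative only: keep the first "#" as the fragment separator and
# percent-encode any further "#" characters inside the fragment.
def encode_extra_hashes(url)
  base, fragment = url.split("#", 2)
  fragment ? "#{base}##{fragment.gsub("#", "%23")}" : url
end

encode_extra_hashes("https://example.com/page#section#sub")
# => "https://example.com/page#section%23sub"
```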
The wizard searches for:
* a topic with the "is_welcome_topic" custom field
* a topic with the correct slug for the current default locale
* a topic with the correct slug for the English locale
* the oldest globally pinned topic
It gives up if it doesn't find any of the above.
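Roughly, the fallback order looks like this (a sketch only; welcome_topic_slug is a hypothetical helper, not the actual code):
```
# Illustrative sketch of the lookup order described above.
def find_welcome_topic
  topic_id = TopicCustomField
    .where(name: "is_welcome_topic", value: "true")
    .pluck(:topic_id)
    .first
  return Topic.find_by(id: topic_id) if topic_id

  Topic.find_by(slug: welcome_topic_slug(SiteSetting.default_locale)) ||
    Topic.find_by(slug: welcome_topic_slug(:en)) ||
    Topic.where(pinned_globally: true).order(:created_at).first
end
```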
Also added some helpful functionality for plugin developers:
- Raises RuntimeException if the auth provider has been registered too late
- Logs use of deprecated parameters
* FEATURE: allow plugins and themes to extend the default CSP
For plugins:
```
extend_content_security_policy(
script_src: ['https://domain.com/script.js', 'https://your-cdn.com/'],
style_src: ['https://domain.com/style.css']
)
```
For themes and components:
```
extend_content_security_policy:
type: list
default: "script_src:https://domain.com/|style_src:https://domain.com"
```
* clear CSP base url before each test
we have a test that stubs `Rails.env.development?` to true
* Only allow extending directives that core includes, for now
Changes to functionality:
- Removed syncing of user metadata, including gender, location, etc.
These are no longer available to standard Facebook applications.
- Removed the remote 'revoke' functionality. No other providers have
it, and it does not appear to be standard practice in other apps.
- The 'facebook_no_email' event is no longer logged. The system can
cope fine with a missing email address.
Data is migrated to the new user_associated_accounts table.
facebook_user_infos can be dropped once we are confident the data has
been migrated successfully.
A generic implementation of Auth::Authenticator which stores data in the
new UserAssociatedAccount model. This should help significantly reduce the duplicated
logic across different auth providers.
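As a hedged example, a provider built on the generic implementation might look roughly like this (the parent class name and site setting below are assumptions):
```
# Illustrative sketch: Auth::ManagedAuthenticator and the setting name are
# assumptions. Account data and tokens are persisted in UserAssociatedAccount
# by the generic implementation, so the provider no longer duplicates that logic.
class ExampleOAuthAuthenticator < Auth::ManagedAuthenticator
  def name
    "example_oauth"
  end

  def enabled?
    SiteSetting.example_oauth_enabled
  end
end
```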
Previously, the site setting was only effective on the client side of
things. Once the limit set by the site setting was reached, none of the
oneboxes were rendered. This commit changes it such that the site setting
is respected on both the client and the server side: the first N oneboxes
are rendered, and once the limit has been reached, subsequent oneboxes are
not rendered.
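Conceptually, the server side now does something like this (a sketch; the helper and setting names are assumptions):
```
# Sketch only: render the first N oneboxes and leave later links untouched.
rendered = 0
onebox_links.each do |link|
  if rendered < SiteSetting.max_oneboxes_per_post # assumed setting name
    replace_with_onebox(link) # hypothetical helper
    rendered += 1
  end
end
```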
* Add missing icons to set
* Revert FA5 revert
This reverts commit 42572ff
* use new SVG syntax in locales
* Noscript page changes (remove login button, center "powered by" footer text)
* Cast wider net for SVG icons in settings
- include any _icon setting for SVG registry (offers better support for plugin settings)
- let themes store multiple pipe-delimited icons in a setting
- also replaces broken onebox image icon with SVG reference in cooked post processor
* interpolate icons in locales
* Fix composer whisper icon alignment
* Add support for stacked icons
* SECURITY: enforce hostname to match discourse hostname
This ensures that the hostname Rails uses for various helpers always matches
the Discourse hostname.
* load SVG sprite with pre-initializers
* FIX: enable caching on SVG sprites
* PERF: use JSONP for SVG sprites so they are served from the CDN
This avoids needing to deal with CORS when loading the SVG.
Note: the svg- prefix was added to the filename so we can quickly tell in
dev tools what the file is.
* Add missing SVG sprite JSONP script to CSP
* Upgrade to FA 5.5.0
* Add support for all FA4.7 icons
- adds complete frontend and backend for renamed FA4.7 icons
- improves performance of SvgSprite.bundle and SvgSprite.all_icons
* Fix group avatar flair preview
- adds an endpoint at /svg-sprites/search/:keyword
- adds frontend ajax call that pulls icon in avatar flair preview even when it is not in subset
* Remove FA 4.7 font files
We were looking up each mention one by one without any form of caching, which results
in a problem somewhat similar to an N+1 query. When we have to do a lot of DB
lookups, it also increases the time spent in the V8 context, which may
eventually lead to a timeout. The change here makes it such that mention lookups only do a single
DB query per post, and that query happens outside of the V8 context.
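Conceptually, the lookup now looks something like this (a sketch, not the actual implementation):
```
# Illustrative sketch: collect all mentioned usernames in one pass and resolve
# them with a single query, outside of the V8 (cooking) context.
usernames = post.raw.scan(/@([\w\-.]+)/).flatten.map(&:downcase).uniq
valid_mentions = User.where(username_lower: usernames).pluck(:username)
# valid_mentions is then handed to the cooking step as a plain lookup table
```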
Previously, in some cases we would queue logging of invalid post numbers.
The impact was that we would miss logging an incoming link and would leak
an error.
This also adjusts the algorithm to expect:
- a 30% saving for JPEG conversion
AND
- a minimum of 75K bytes saved
The reasoning for increasing the saving requirements is that a PNG may have been
uploaded unoptimized, so a 30% saving on a PNG is very possible.
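In other words, a conversion is only kept when both conditions hold, roughly:
```
# Sketch of the acceptance rule described above (variable names illustrative).
saved_bytes = original_size - converted_size
keep_conversion = saved_bytes >= 75_000 &&
                  converted_size <= original_size * 0.70
```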
This removes a monkey patch we no longer need, since our containers require
Ruby 2.5.2 or later for all Discourse installs.
If you are looking to deploy on 2.5.1, which is highly discouraged, you
will need to figure out how to apply this diff.
* Prioritizes non-image uploads
* Does one remap per upload instead of 3 remaps previously
* Every 100 uploads migrated, do 2 remaps, which fixes broken URLs
* Exclude email_logs table from remap
* Cuts number of queries from 273 to 89
* Add some specs
* For a table with 500 posts, local benchmarks show a runtime
reduction from 0.046929135 to 0.032694705.