# discourse/app/controllers/application_controller.rb
# frozen_string_literal: true
require "current_user"
class ApplicationController < ActionController::Base
include CurrentUser
include CanonicalURL::ControllerExtensions
include JsonError
include GlobalPath
include Hijack
include ReadOnlyMixin
include VaryHeader
attr_reader :theme_id
serialization_scope :guardian
protect_from_forgery
# By default Rails lets the request through with a blank session;
# we are being more pedantic here, nulling the session / current_user
# and then raising a CSRF exception
def handle_unverified_request
# NOTE: API key is secret, having it invalidates the need for a CSRF token
unless is_api? || is_user_api?
super
clear_current_user
render plain: "[\"BAD CSRF\"]", status: 403
end
end
# Crawlers whose user agent matches a value in the
# `slow_down_crawler_user_agents` site setting are rate limited to one
# request every `slow_down_crawler_rate` seconds; further requests in that
# window receive a 429 (this replaced the old `Crawl-delay` directive in
# /robots.txt, which many crawlers ignore).
before_action :rate_limit_crawlers
before_action :check_readonly_mode
before_action :handle_theme
before_action :set_current_user_for_logs
before_action :set_mp_snapshot_fields
before_action :clear_notifications
around_action :with_resolved_locale
before_action :set_mobile_view
before_action :block_if_readonly_mode
before_action :authorize_mini_profiler
before_action :redirect_to_login_if_required
before_action :block_if_requires_login
before_action :preload_json
before_action :check_xhr
after_action :add_readonly_header
after_action :perform_refresh_session
after_action :dont_cache_page
after_action :conditionally_allow_site_embedding
after_action :ensure_vary_header
after_action :add_noindex_header,
if: -> { is_feed_request? || !SiteSetting.allow_index_in_robots_txt }
after_action :add_noindex_header_to_non_canonical, if: :spa_boot_request?
around_action :link_preload, if: -> { spa_boot_request? && GlobalSetting.preload_link_header }
HONEYPOT_KEY ||= "HONEYPOT_KEY"
CHALLENGE_KEY ||= "CHALLENGE_KEY"
layout :set_layout
def has_escaped_fragment?
SiteSetting.enable_escaped_fragments? && params.key?("_escaped_fragment_")
end
def show_browser_update?
@show_browser_update ||= CrawlerDetection.show_browser_update?(request.user_agent)
end
helper_method :show_browser_update?
def use_crawler_layout?
@use_crawler_layout ||=
request.user_agent && (request.media_type.blank? || request.media_type.include?("html")) &&
!%w[json rss].include?(params[:format]) &&
(
has_escaped_fragment? || params.key?("print") || show_browser_update? ||
CrawlerDetection.crawler?(request.user_agent, request.headers["HTTP_VIA"])
)
end
def perform_refresh_session
refresh_session(current_user) unless @readonly_mode
end
def immutable_for(duration)
response.cache_control[:max_age] = duration.to_i
response.cache_control[:public] = true
response.cache_control[:extras] = ["immutable"]
end
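# A minimal usage sketch (the action name and TTL are illustrative, not
# part of this file):
#
#   def show_asset
#     immutable_for 1.year
#     # => Cache-Control: max-age=31556952, public, immutable
#     render plain: "ok"
#   end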
def dont_cache_page
if !response.headers["Cache-Control"] && response.cache_control.blank?
response.cache_control[:no_cache] = true
response.cache_control[:extras] = ["no-store"]
end
response.headers["Discourse-No-Onebox"] = "1" if SiteSetting.login_required
end
def conditionally_allow_site_embedding
response.headers.delete("X-Frame-Options") if SiteSetting.allow_embedding_site_in_an_iframe
end
def ember_cli_required?
Rails.env.development? && ENV["ALLOW_EMBER_CLI_PROXY_BYPASS"] != "1" &&
request.headers["X-Discourse-Ember-CLI"] != "true"
end
def application_layout
ember_cli_required? ? "ember_cli" : "application"
end
def set_layout
case request.headers["Discourse-Render"]
when "desktop"
return application_layout
when "crawler"
return "crawler"
end
use_crawler_layout? ? "crawler" : application_layout
end
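# A sketch of forcing a layout from a client via the Discourse-Render
# header (request-spec style; illustrative, not part of this file):
#
#   get "/latest", headers: { "Discourse-Render" => "crawler" }
#   # => rendered with the "crawler" layout regardless of user agent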
class RenderEmpty < StandardError
end
class PluginDisabled < StandardError
end
rescue_from RenderEmpty do
with_resolved_locale { render "default/empty" }
end
rescue_from ArgumentError do |e|
if e.message == "string contains null byte"
raise Discourse::InvalidParameters, e.message
else
raise e
end
end
rescue_from PG::ReadOnlySqlTransaction do |e|
Discourse.received_postgres_readonly!
Rails.logger.error("#{e.class} #{e.message}: #{e.backtrace.join("\n")}")
rescue_with_handler(Discourse::ReadOnly.new) || raise
end
rescue_from ActionController::ParameterMissing do |e|
render_json_error e.message, status: 400
end
rescue_from Discourse::SiteSettingMissing do |e|
render_json_error I18n.t("site_setting_missing", name: e.message), status: 500
end
rescue_from ActionController::RoutingError, PluginDisabled do
rescue_discourse_actions(:not_found, 404)
end
# Handles requests for giant IDs that throw pg exceptions
rescue_from ActiveModel::RangeError do |e|
if e.message =~ /ActiveModel::Type::Integer/
rescue_discourse_actions(:not_found, 404)
else
raise e
end
end
rescue_from ActiveRecord::RecordInvalid do |e|
if request.format && request.format.json?
render_json_error e, type: :record_invalid, status: 422
else
raise e
end
end
rescue_from ActiveRecord::StatementInvalid do |e|
Discourse.reset_active_record_cache_if_needed(e)
raise e
end
# If they hit the rate limiter
rescue_from RateLimiter::LimitExceeded do |e|
retry_time_in_seconds = e&.available_in
response_headers = { "Retry-After": retry_time_in_seconds.to_s }
response_headers["Discourse-Rate-Limit-Error-Code"] = e.error_code if e&.error_code
with_resolved_locale do
render_json_error(
e.description,
type: :rate_limit,
status: 429,
extras: {
wait_seconds: retry_time_in_seconds,
time_left: e&.time_left,
},
headers: response_headers,
)
end
end
rescue_from Discourse::NotLoggedIn do |e|
if (request.format && request.format.json?) || request.xhr? || !request.get?
rescue_discourse_actions(:not_logged_in, 403, include_ember: true)
else
rescue_discourse_actions(:not_found, 404)
end
end
rescue_from Discourse::InvalidParameters do |e|
opts = { custom_message: "invalid_params", custom_message_params: { message: e.message } }
if (request.format && request.format.json?) || request.xhr? || !request.get?
rescue_discourse_actions(:invalid_parameters, 400, opts.merge(include_ember: true))
else
rescue_discourse_actions(:not_found, 400, opts)
end
end
rescue_from Discourse::NotFound do |e|
rescue_discourse_actions(
:not_found,
e.status,
check_permalinks: e.check_permalinks,
original_path: e.original_path,
custom_message: e.custom_message,
)
end
rescue_from Discourse::InvalidAccess do |e|
cookies.delete(e.opts[:delete_cookie]) if e.opts[:delete_cookie].present?
rescue_discourse_actions(
:invalid_access,
403,
include_ember: true,
custom_message: e.custom_message,
custom_message_params: e.custom_message_params,
group: e.group,
)
end
rescue_from Discourse::ReadOnly do
unless response_body
respond_to do |format|
format.json do
render_json_error I18n.t("read_only_mode_enabled"), type: :read_only, status: 503
end
format.html { render status: 503, layout: "no_ember", template: "exceptions/read_only" }
end
end
end
# Centralized 2FA: actions that require second-factor confirmation raise
# SecondFactor::AuthManager::SecondFactorRequired and are redirected to a
# dedicated 2FA page (see lib/second_factor/auth_manager.rb).
rescue_from SecondFactor::AuthManager::SecondFactorRequired do |e|
if request.xhr?
render json: { second_factor_challenge_nonce: e.nonce }, status: 403
else
redirect_to session_2fa_path(nonce: e.nonce)
end
end
rescue_from SecondFactor::BadChallenge do |e|
render json: { error: I18n.t(e.error_translation_key) }, status: e.status_code
end
def redirect_with_client_support(url, options = {})
if request.xhr?
response.headers["Discourse-Xhr-Redirect"] = "true"
render plain: url
else
redirect_to url, options
end
end
def rescue_discourse_actions(type, status_code, opts = nil)
opts ||= {}
show_json_errors =
(request.format && request.format.json?) || (request.xhr?) ||
((params[:external_id] || "").ends_with? ".json")
if type == :not_found && opts[:check_permalinks]
url = opts[:original_path] || request.fullpath
permalink = Permalink.find_by_url(url)
# there are some cases where we have a permalink but no target URL
# because the category / topic was deleted
if permalink.present? && permalink.target_url
# permalink present, redirect to that URL
redirect_with_client_support permalink.target_url,
status: :moved_permanently,
allow_other_host: true
return
end
end
message = title = nil
with_resolved_locale(check_current_user: false) do
if opts[:custom_message]
title = message = I18n.t(opts[:custom_message], opts[:custom_message_params] || {})
else
message = I18n.t(type)
if status_code == 403
title = I18n.t("page_forbidden.title")
else
title = I18n.t("page_not_found.title")
end
end
end
error_page_opts = { title: title, status: status_code, group: opts[:group] }
if show_json_errors
opts = { type: type, status: status_code }
with_resolved_locale(check_current_user: false) do
# Include error in HTML format for topics#show.
if (request.params[:controller] == "topics" && request.params[:action] == "show") ||
(
request.params[:controller] == "categories" &&
request.params[:action] == "find_by_slug"
)
opts[:extras] = {
title: I18n.t("page_not_found.page_title"),
html: build_not_found_page(error_page_opts),
group: error_page_opts[:group],
}
end
end
render_json_error message, opts
else
begin
# 404 pages won't have the session and theme_keys without these:
current_user
handle_theme
rescue Discourse::InvalidAccess
return render plain: message, status: status_code
end
with_resolved_locale do
error_page_opts[:layout] = (opts[:include_ember] && @preloaded) ? "application" : "no_ember"
render html: build_not_found_page(error_page_opts)
end
end
end
# If a controller requires a plugin, it will raise an exception if that plugin is
# disabled. This allows plugins to be disabled programmatically.
def self.requires_plugin(plugin_name)
before_action do
raise PluginDisabled.new if Discourse.disabled_plugin_names.include?(plugin_name)
end
end
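# Usage sketch in a plugin controller (the class and plugin name are
# hypothetical):
#
#   class MyPlugin::WidgetsController < ApplicationController
#     requires_plugin "my-plugin"
#   end
#
# Requests 404 (via the PluginDisabled rescue above) while the plugin is
# disabled.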
def set_current_user_for_logs
if current_user
Logster.add_to_env(request.env, "username", current_user.username)
response.headers["X-Discourse-Username"] = current_user.username
end
response.headers["X-Discourse-Route"] = "#{controller_name}/#{action_name}"
end
def set_mp_snapshot_fields
if defined?(Rack::MiniProfiler)
Rack::MiniProfiler.add_snapshot_custom_field("Application version", Discourse.git_version)
if Rack::MiniProfiler.snapshots_transporter?
Rack::MiniProfiler.add_snapshot_custom_field("Site", Discourse.current_hostname)
end
end
end
def clear_notifications
if current_user && !@readonly_mode
cookie_notifications = cookies["cn"]
notifications = request.headers["Discourse-Clear-Notifications"]
if cookie_notifications
if notifications.present?
notifications += ",#{cookie_notifications}"
else
notifications = cookie_notifications
end
end
if notifications.present?
notification_ids = notifications.split(",").map(&:to_i)
Notification.read(current_user, notification_ids)
current_user.reload
current_user.publish_notifications_state
cookie_args = {}
cookie_args[:path] = Discourse.base_path if Discourse.base_path.present?
cookies.delete("cn", cookie_args)
end
end
end
def with_resolved_locale(check_current_user: true)
if check_current_user &&
(
user =
begin
current_user
rescue StandardError
nil
end
)
locale = user.effective_locale
else
locale = Discourse.anonymous_locale(request)
locale ||= SiteSetting.default_locale
end
locale = SiteSettings::DefaultsProvider::DEFAULT_LOCALE if !I18n.locale_available?(locale)
I18n.ensure_all_loaded!
I18n.with_locale(locale) { yield }
end
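# Besides wrapping every action (see the around_action above), this can be
# called directly with a block; a sketch:
#
#   with_resolved_locale { render "default/empty" }  # renders in the resolved locale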
def store_preloaded(key, json)
@preloaded ||= {}
# I dislike that there is a gsub as opposed to a gsub!
# but we cannot be mucking with user input. I wonder if there is a way
# to inject this safety deeper in the library or even in AM serializer
@preloaded[key] = json.gsub("</", "<\\/")
end
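# A sketch of the escaping above (the key and JSON are illustrative):
#
#   store_preloaded("site", '{"html":"</script>"}')
#   # stored as {"html":"<\/script>"} so the payload can never terminate
#   # the surrounding <script> tag it is embedded in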
# If we are rendering HTML, preload the session data
def preload_json
# We don't preload JSON on xhr or JSON request
return if request.xhr? || request.format.json?
# if we are POSTing, it makes no sense to preload
return if request.method != "GET"
# TODO should not be invoked on redirection so this should be further deferred
preload_anonymous_data
if current_user
current_user.sync_notification_channel_position
preload_current_user_data
end
end
def set_mobile_view
session[:mobile_view] = params[:mobile_view] if params.has_key?(:mobile_view)
end
NO_THEMES = "no_themes"
NO_PLUGINS = "no_plugins"
NO_UNOFFICIAL_PLUGINS = "no_unofficial_plugins"
SAFE_MODE = "safe_mode"
LEGACY_NO_THEMES = "no_custom"
LEGACY_NO_UNOFFICIAL_PLUGINS = "only_official"
def resolve_safe_mode
return unless guardian.can_enable_safe_mode?
safe_mode = params[SAFE_MODE]
if safe_mode&.is_a?(String)
safe_mode = safe_mode.split(",")
request.env[NO_THEMES] = safe_mode.include?(NO_THEMES) || safe_mode.include?(LEGACY_NO_THEMES)
request.env[NO_PLUGINS] = safe_mode.include?(NO_PLUGINS)
request.env[NO_UNOFFICIAL_PLUGINS] = safe_mode.include?(NO_UNOFFICIAL_PLUGINS) ||
safe_mode.include?(LEGACY_NO_UNOFFICIAL_PLUGINS)
end
end
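# Safe mode is driven by the query string; an illustrative URL:
#
#   /latest?safe_mode=no_themes,no_unofficial_plugins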
def handle_theme
return if request.format == "js"
resolve_safe_mode
return if request.env[NO_THEMES]
theme_id = nil
if (preview_theme_id = request[:preview_theme_id]&.to_i) &&
guardian.allow_themes?([preview_theme_id], include_preview: true)
theme_id = preview_theme_id
end
user_option = current_user&.user_option
if theme_id.blank?
ids, seq = cookies[:theme_ids]&.split("|")
id = ids&.split(",")&.map(&:to_i)&.first
if id.present? && seq && seq.to_i == user_option&.theme_key_seq.to_i
theme_id = id if guardian.allow_themes?([id])
end
end
if theme_id.blank?
ids = user_option&.theme_ids || []
theme_id = ids.first if guardian.allow_themes?(ids)
end
if theme_id.blank? && SiteSetting.default_theme_id != -1 &&
guardian.allow_themes?([SiteSetting.default_theme_id])
theme_id = SiteSetting.default_theme_id
end
@theme_id = request.env[:resolved_theme_id] = theme_id
end
def guardian
# sometimes we log a user in partway through a request, so we should throw
# away the cached guardian instance when we do that
if (@guardian&.user).blank? && current_user.present?
@guardian = Guardian.new(current_user, request)
end
@guardian ||= Guardian.new(current_user, request)
end
def current_homepage
current_user&.user_option&.homepage || SiteSetting.anonymous_homepage
end
def serialize_data(obj, serializer, opts = nil)
# If it's an array, apply the serializer as an each_serializer to the elements
serializer_opts = { scope: guardian }.merge!(opts || {})
if obj.respond_to?(:to_ary)
serializer_opts[:each_serializer] = serializer
ActiveModel::ArraySerializer.new(obj.to_ary, serializer_opts).as_json
else
serializer.new(obj, serializer_opts).as_json
end
end
# This is odd, but it seems that in Rails `render json: obj` is about
# 20% slower than calling MultiJson.dump ourselves. I'm not sure why
# Rails doesn't call MultiJson.dump when you pass it json: obj but
# it seems we don't need whatever Rails is doing.
def render_serialized(obj, serializer, opts = nil)
render_json_dump(serialize_data(obj, serializer, opts), opts)
end
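# Usage sketches (`topic` and TopicSerializer are illustrative; any
# ActiveModel serializer works):
#
#   render_serialized(topic, TopicSerializer)            # single object
#   render_serialized(Topic.limit(10), TopicSerializer)  # array => each_serializer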
def render_json_dump(obj, opts = nil)
opts ||= {}
if opts[:rest_serializer]
obj["__rest_serializer"] = "1"
opts.each { |k, v| obj[k] = v if k.to_s.start_with?("refresh_") }
obj["extras"] = opts[:extras] if opts[:extras]
obj["meta"] = opts[:meta] if opts[:meta]
end
render json: MultiJson.dump(obj), status: opts[:status] || 200
end
def can_cache_content?
current_user.blank? && cookies[:authentication_data].blank?
end
# Our custom cache method
def discourse_expires_in(time_length)
return unless can_cache_content?
Middleware::AnonymousCache.anon_cache(request.env, time_length)
end
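# Usage sketch (the TTL is illustrative): caches the response for anonymous
# visitors via the anonymous cache middleware, but only when nothing
# user-specific is in play (see can_cache_content? above):
#
#   discourse_expires_in 1.minute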
def fetch_user_from_params(opts = nil, eager_load = [])
opts ||= {}
user =
if params[:username]
username_lower = params[:username].downcase.chomp(".json")
if current_user && current_user.username_lower == username_lower
current_user
else
find_opts = { username_lower: username_lower }
find_opts[:active] = true unless opts[:include_inactive] || current_user.try(:staff?)
result = User
(result = result.includes(*eager_load)) if !eager_load.empty?
result.find_by(find_opts)
end
elsif params[:external_id]
external_id = params[:external_id].chomp(".json")
if provider_name = params[:external_provider]
raise Discourse::InvalidAccess unless guardian.is_admin? # external_id might be something sensitive
provider = Discourse.enabled_authenticators.find { |a| a.name == provider_name }
raise Discourse::NotFound if !provider&.is_managed? # Only managed authenticators use UserAssociatedAccount
UserAssociatedAccount.find_by(
provider_name: provider_name,
provider_uid: external_id,
)&.user
else
SingleSignOnRecord.find_by(external_id: external_id).try(:user)
end
end
raise Discourse::NotFound if user.blank?
guardian.ensure_can_see!(user)
user
end
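# Usage sketch inside an action (the action and serializer are
# illustrative):
#
#   def show
#     user = fetch_user_from_params   # looks up params[:username] or params[:external_id]
#     render_serialized(user, UserSerializer)
#   end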
def post_ids_including_replies
post_ids = params[:post_ids].map(&:to_i)
post_ids |= PostReply.where(post_id: params[:reply_post_ids]).pluck(:reply_post_id) if params[
:reply_post_ids
]
post_ids
end
def no_cookies
# do your best to ensure response has no cookies
# longer term we may want to push this into middleware
headers.delete "Set-Cookie"
request.session_options[:skip] = true
end
def secure_session
SecureSession.new(session["secure_session_id"] ||= SecureRandom.hex)
end
def handle_permalink(path)
permalink = Permalink.find_by_url(path)
if permalink && permalink.target_url
redirect_to permalink.target_url, status: :moved_permanently
end
end
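# Usage sketch (the path is illustrative): issues a 301 when a Permalink
# row maps the old path to a target URL, otherwise does nothing:
#
#   handle_permalink("/old-forum/some-topic")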
def rate_limit_second_factor!(user)
return if params[:second_factor_token].blank?
RateLimiter.new(nil, "second-factor-min-#{request.remote_ip}", 6, 1.minute).performed!
RateLimiter.new(nil, "second-factor-min-#{user.username}", 6, 1.minute).performed! if user
end
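# Both limiters above allow 6 attempts per minute: one keyed by IP and,
# when the user is known, one keyed by username, so rotating IPs does not
# bypass the per-account limit.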
private
def preload_anonymous_data
store_preloaded("site", Site.json_for(guardian))
store_preloaded("siteSettings", SiteSetting.client_settings_json)
store_preloaded("customHTML", custom_html_json)
store_preloaded("banner", banner_json)
store_preloaded("customEmoji", custom_emoji)
store_preloaded("isReadOnly", @readonly_mode.to_s)
store_preloaded("isStaffWritesOnly", @staff_writes_only_mode.to_s)
store_preloaded("activatedThemes", activated_themes_json)
end
def preload_current_user_data
store_preloaded(
"currentUser",
MultiJson.dump(
CurrentUserSerializer.new(
current_user,
scope: guardian,
root: false,
enable_sidebar_param: params[:enable_sidebar],
),
),
)
report = TopicTrackingState.report(current_user)
serializer =
ActiveModel::ArraySerializer.new(
report,
each_serializer: TopicTrackingStateSerializer,
scope: guardian,
)
store_preloaded("topicTrackingStates", MultiJson.dump(serializer))
end
def custom_html_json
target = view_context.mobile_view? ? :mobile : :desktop
data =
if @theme_id.present?
{
top: Theme.lookup_field(@theme_id, target, "after_header"),
footer: Theme.lookup_field(@theme_id, target, "footer"),
}
else
{}
end
data.merge! DiscoursePluginRegistry.custom_html if DiscoursePluginRegistry.custom_html
DiscoursePluginRegistry.html_builders.each do |name, _|
if name.start_with?("client:")
data[name.sub(/^client:/, "")] = DiscoursePluginRegistry.build_html(name, self)
end
end
MultiJson.dump(data)
end
def self.banner_json_cache
@banner_json_cache ||= DistributedCache.new("banner_json")
end
def banner_json
json = ApplicationController.banner_json_cache["json"]
return "{}" if !current_user && SiteSetting.login_required?
unless json
topic = Topic.where(archetype: Archetype.banner).first
banner = topic.present? ? topic.banner : {}
ApplicationController.banner_json_cache["json"] = json = MultiJson.dump(banner)
end
json
end
def custom_emoji
serializer = ActiveModel::ArraySerializer.new(Emoji.custom, each_serializer: EmojiSerializer)
MultiJson.dump(serializer)
end
# Render action for a JSON error.
#
# obj - a translated string, an ActiveRecord model, or an array of translated strings
# opts:
# type - a machine-readable description of the error
# status - HTTP status code to return
# headers - extra headers for the response
def render_json_error(obj, opts = {})
opts = { status: opts } if opts.is_a?(Integer)
opts.fetch(:headers, {}).each { |name, value| headers[name.to_s] = value }
render(
json: MultiJson.dump(create_errors_json(obj, opts)),
status: opts[:status] || status_code(obj),
)
end
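# Usage sketches (messages are illustrative):
#
#   render_json_error("too fast", 429)  # integer shorthand for status:
#   render_json_error(user)             # model => 422 with validation errors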
def status_code(obj)
return 403 if obj.try(:forbidden)
return 404 if obj.try(:not_found)
422
end
def success_json
{ success: "OK" }
end
def failed_json
{ failed: "FAILED" }
end
def json_result(obj, opts = {})
if yield(obj)
json = success_json
# If we were given a serializer, add the class to the json that comes back
if opts[:serializer].present?
json[obj.class.name.underscore] = opts[:serializer].new(
obj,
scope: guardian,
).serializable_hash
2018-06-07 13:28:18 +08:00
end
render json: MultiJson.dump(json)
else
error_obj = nil
if opts[:additional_errors]
error_target =
opts[:additional_errors].find do |o|
target = obj.public_send(o)
target && target.errors.present?
end
error_obj = obj.public_send(error_target) if error_target
end
render_json_error(error_obj || obj)
end
end
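# Usage sketch (hypothetical update action; the permitted param is
# illustrative):
#
#   def update
#     user = fetch_user_from_params
#     json_result(user, serializer: UserSerializer) do |u|
#       u.update(name: params.require(:name))   # truthy => success JSON
#     end
#   end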
def mini_profiler_enabled?
defined?(Rack::MiniProfiler) && (guardian.is_developer? || Rails.env.development?)
end
def authorize_mini_profiler
return unless mini_profiler_enabled?
Rack::MiniProfiler.authorize_request
end
def check_xhr
# bypass the XHR check on PUT / POST / DELETE provided an API key is present; otherwise calling the API is annoying
return if !request.get? && (is_api? || is_user_api?)
unless ((request.format && request.format.json?) || request.xhr?)
raise ApplicationController::RenderEmpty.new
end
end
def apply_cdn_headers
if Discourse.is_cdn_request?(request.env, request.method)
Discourse.apply_cdn_headers(response.headers)
end
end
def self.requires_login(arg = {})
@requires_login_arg = arg
end
def self.requires_login_arg
@requires_login_arg
end
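# Usage sketch (hypothetical controller):
#
#   class DraftsController < ApplicationController
#     requires_login except: [:index]
#   end
#
# block_if_requires_login (run as a before_action above) then calls
# ensure_logged_in for every other action.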
def block_if_requires_login
if arg = self.class.requires_login_arg
check =
if except = arg[:except]
!except.include?(action_name.to_sym)
elsif only = arg[:only]
only.include?(action_name.to_sym)
else
true
end
ensure_logged_in if check
end
end
def ensure_logged_in
raise Discourse::NotLoggedIn.new unless current_user.present?
end
def ensure_staff
raise Discourse::InvalidAccess.new unless current_user && current_user.staff?
end
def ensure_admin
raise Discourse::InvalidAccess.new unless current_user && current_user.admin?
end
def ensure_wizard_enabled
raise Discourse::InvalidAccess.new unless SiteSetting.wizard_enabled?
end
def destination_url
request.original_url unless request.original_url =~ /uploads/
end
def redirect_to_login
dont_cache_page
if SiteSetting.auth_immediately && SiteSetting.enable_discourse_connect?
# save original URL in a session so we can redirect after login
session[:destination_url] = destination_url
redirect_to path("/session/sso")
elsif SiteSetting.auth_immediately && !SiteSetting.enable_local_logins &&
Discourse.enabled_authenticators.length == 1 && !cookies[:authentication_data]
# Only one authentication provider, direct straight to it.
# If authentication_data is present, then we are halfway though registration. Don't redirect offsite
cookies[:destination_url] = destination_url
redirect_to path("/auth/#{Discourse.enabled_authenticators.first.name}")
else
# save original URL in a cookie (javascript redirects after login in this case)
cookies[:destination_url] = destination_url
redirect_to path("/login")
end
end
def redirect_to_login_if_required
return if request.format.json? && is_api?
# Used by clients authenticated via user API.
# Redirects to provided URL scheme if
# - request uses a valid public key and auth_redirect scheme
# - one_time_password scope is allowed
if !current_user && params.has_key?(:user_api_public_key) && params.has_key?(:auth_redirect)
begin
OpenSSL::PKey::RSA.new(params[:user_api_public_key])
rescue OpenSSL::PKey::RSAError
return render plain: I18n.t("user_api_key.invalid_public_key")
end
if UserApiKey.invalid_auth_redirect?(params[:auth_redirect])
return render plain: I18n.t("user_api_key.invalid_auth_redirect")
end
if UserApiKey.allowed_scopes.superset?(Set.new(["one_time_password"]))
redirect_to("#{params[:auth_redirect]}?otp=true", allow_other_host: true)
return
end
end
if !current_user && SiteSetting.login_required?
flash.keep
if (request.format && request.format.json?) || request.xhr? || !request.get?
ensure_logged_in
else
redirect_to_login
end
return
end
return if !current_user
return if !should_enforce_2fa?
redirect_path = path("/u/#{current_user.encoded_username}/preferences/second-factor")
if !request.fullpath.start_with?(redirect_path)
redirect_to path(redirect_path)
nil
end
end
def should_enforce_2fa?
disqualified_from_2fa_enforcement = request.format.json? || is_api? || current_user.anonymous?
enforcing_2fa =
(
(SiteSetting.enforce_second_factor == "staff" && current_user.staff?) ||
SiteSetting.enforce_second_factor == "all"
)
!disqualified_from_2fa_enforcement && enforcing_2fa &&
!current_user.has_any_second_factor_methods_enabled?
end
def build_not_found_page(opts = {})
if SiteSetting.bootstrap_error_pages?
preload_json
opts[:layout] = "application" if opts[:layout] == "no_ember"
end
@current_user =
begin
current_user
rescue StandardError
nil
end
if !SiteSetting.login_required? || @current_user
key = "page_not_found_topics:#{I18n.locale}"
@topics_partial =
Discourse
.cache
.fetch(key, expires_in: 10.minutes) do
category_topic_ids = Category.pluck(:topic_id).compact
@top_viewed =
TopicQuery
.new(nil, except_topic_ids: category_topic_ids)
.list_top_for("monthly")
.topics
.first(10)
@recent = Topic.includes(:category).where.not(id: category_topic_ids).recent(10)
render_to_string partial: "/exceptions/not_found_topics", formats: [:html]
end
.html_safe
end
@container_class = "wrap not-found-container"
@page_title = I18n.t("page_not_found.page_title")
@title = opts[:title] || I18n.t("page_not_found.title")
@group = opts[:group]
@hide_search = true if SiteSetting.login_required
params[:slug] = params[:slug].first if params[:slug].kind_of?(Array)
params[:id] = params[:id].first if params[:id].kind_of?(Array)
@slug = (params[:slug].presence || params[:id].presence || "").to_s.tr("-", " ")
render_to_string status: opts[:status],
layout: opts[:layout],
formats: [:html],
template: "/exceptions/not_found"
end
def is_asset_path
request.env["DISCOURSE_IS_ASSET_PATH"] = 1
end
def is_feed_request?
request.format.atom? || request.format.rss?
end
def add_noindex_header
if request.get? && !response.headers["X-Robots-Tag"]
if SiteSetting.allow_index_in_robots_txt
response.headers["X-Robots-Tag"] = "noindex"
else
response.headers["X-Robots-Tag"] = "noindex, nofollow"
end
end
end
def add_noindex_header_to_non_canonical
canonical = (@canonical_url || @default_canonical)
if canonical.present? && canonical != request.url &&
!SiteSetting.allow_indexing_non_canonical_urls
response.headers["X-Robots-Tag"] ||= "noindex"
end
end

  protected

def honeypot_value
secure_session[HONEYPOT_KEY] ||= SecureRandom.hex
end

  def challenge_value
secure_session[CHALLENGE_KEY] ||= SecureRandom.hex
end

  def render_post_json(post, add_raw: true)
    post_serializer = PostSerializer.new(post, scope: guardian, root: false)
    post_serializer.add_raw = add_raw

    counts = PostAction.counts_for([post], current_user)
    if counts && (counts = counts[post.id])
      post_serializer.post_actions = counts
    end
render_json_dump(post_serializer)
end
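
  # Illustration only; the surrounding action and post lookup are assumed:
  #
  #   def show
  #     post = Post.find(params[:id])
  #     render_post_json(post, add_raw: false) # serialize without raw markdown
  #   end
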
# returns an array of integers given a param key
# returns nil if key is not found
def param_to_integer_list(key, delimiter = ",")
case params[key]
when String
params[key].split(delimiter).map(&:to_i)
when Array
params[key].map(&:to_i)
end
end
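
  # Worked examples (param values assumed):
  #
  #   params[:post_ids] = "12,34,56"
  #   param_to_integer_list(:post_ids)  # => [12, 34, 56]
  #
  #   params[:post_ids] = %w[12 34]
  #   param_to_integer_list(:post_ids)  # => [12, 34]
  #
  #   param_to_integer_list(:absent)    # => nil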

  def activated_themes_json
id = @theme_id
return "{}" if id.blank?
ids = Theme.transform_ids(id)
Theme.where(id: ids).pluck(:id, :name).to_h.to_json
end
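
  # Shape of the JSON returned above, with hypothetical theme ids and names:
  #
  #   activated_themes_json  # => "{\"1\":\"Light\",\"7\":\"Dark Sidebar\"}"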

  def rate_limit_crawlers
return if current_user.present?
return if SiteSetting.slow_down_crawler_user_agents.blank?
user_agent = request.user_agent&.downcase
return if user_agent.blank?
SiteSetting
.slow_down_crawler_user_agents
.downcase
.split("|")
.each do |crawler|
if user_agent.include?(crawler)
key = "#{crawler}_crawler_rate_limit"
limiter =
RateLimiter.new(nil, key, 1, SiteSetting.slow_down_crawler_rate, error_code: key)
limiter.performed!
break
end
end
end
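
  # Behavioral sketch under assumed setting values:
  #
  #   SiteSetting.slow_down_crawler_user_agents = "bingbot|mj12bot"
  #   SiteSetting.slow_down_crawler_rate = 60
  #
  # An anonymous request whose User-Agent contains "bingbot" then passes the
  # rate limiter once per 60 seconds; further matches in that window get a 429.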

  def run_second_factor!(action_class, action_data = nil)
action = action_class.new(guardian, request, action_data)
    manager = SecondFactor::AuthManager.new(guardian, action)
yield(manager) if block_given?
result = manager.run!(request, params, secure_session)

    if !result.no_second_factors_enabled? && !result.second_factor_auth_completed? &&
!result.second_factor_auth_skipped?
# should never happen, but I want to know if somehow it does! (osama)
raise "2fa process ended up in a bad state!"
end
result
end
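
  # Hedged usage sketch; `SecondFactor::Actions::GrantAdmin` is a real action
  # class in core, but treat the surrounding controller code as illustrative:
  #
  #   def grant_admin
  #     result = run_second_factor!(SecondFactor::Actions::GrantAdmin)
  #     render json: success_json if result.second_factor_auth_completed?
  #   end
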
def link_preload
@links_to_preload = []
yield
response.headers["Link"] = @links_to_preload.join(", ") if !@links_to_preload.empty?
end
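
  # Illustration only; the block body and preloaded asset are hypothetical:
  #
  #   link_preload do
  #     @links_to_preload << "</assets/app.css>; rel=preload; as=style"
  #     render :show
  #   end
  #
  # After the block returns, the collected entries are joined into a single
  # `Link` response header.
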
def spa_boot_request?
request.get? && !(request.format && request.format.json?) && !request.xhr?
end

  def fetch_limit_from_params(params: self.params, default:, max:)
raise "default limit cannot be greater than max limit" if default.present? && default > max
if params.has_key?(:limit)
limit =
begin
Integer(params[:limit])
rescue ArgumentError
raise Discourse::InvalidParameters.new(:limit)
end
raise Discourse::InvalidParameters.new(:limit) if limit < 0 || limit > max
limit
else
default
end
end
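
  # Worked examples (values assumed):
  #
  #   params[:limit] = "25"
  #   fetch_limit_from_params(default: 20, max: 100)  # => 25
  #
  #   params.delete(:limit)
  #   fetch_limit_from_params(default: 20, max: 100)  # => 20
  #
  #   params[:limit] = "500"
  #   fetch_limit_from_params(default: 20, max: 100)  # raises Discourse::InvalidParameters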
end