Over the years we have accrued many spelling mistakes in the code base.
This PR attempts to fix spelling mistakes and typos in all areas of the code that are extremely safe to change:
- comments
- test descriptions
- other low risk areas
Adds a new slow mode for topics that are heating up. Users will have to wait for a period of time before being able to post again.
We store this interval inside the topics table and track the last time a user posted using the last_posted_at datetime in the TopicUser relation.
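A minimal sketch of how such a check could look (the column and method names below are assumptions for illustration, not necessarily the real implementation):

```ruby
# Hypothetical check: block posting if the topic's slow mode interval
# has not yet elapsed since this user's last post in the topic.
def slow_mode_violation?(topic, user)
  interval = topic.slow_mode_seconds.to_i
  return false if interval == 0

  last_posted_at = TopicUser.find_by(topic_id: topic.id, user_id: user.id)&.last_posted_at
  last_posted_at.present? && last_posted_at > interval.seconds.ago
end
```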
* The bookmarked column sometimes gets out of sync with what is in the Bookmark table. The BookmarkManager now handles updating this column.
* Add a migration to fix rows where the bookmarked column is incorrectly marked false even though a Bookmark record exists.
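A hedged sketch of what such a data fix could look like (it assumes bookmarks reference posts and that the column lives on topic_users; the real migration may differ):

```ruby
# Hypothetical migration: set bookmarked back to true wherever a matching
# Bookmark record exists for that user and topic.
class FixIncorrectlyUnsetBookmarkedColumn < ActiveRecord::Migration[6.0]
  def up
    execute <<~SQL
      UPDATE topic_users tu
      SET bookmarked = true
      FROM bookmarks b
      JOIN posts p ON p.id = b.post_id
      WHERE p.topic_id = tu.topic_id
        AND b.user_id = tu.user_id
        AND tu.bookmarked = false
    SQL
  end

  def down
    raise ActiveRecord::IrreversibleMigration
  end
end
```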
Doing .pluck(:column).first is a very common pattern in Discourse, and in
most cases a limit clause isn't being added. Instead of adding a limit
clause to all these call sites, this commit adds two new methods to
ActiveRecord::Relation:
- pluck_first, equivalent to limit(1).pluck(*columns).first
- pluck_first!, which, like other finder methods, raises an exception when no record is found
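A minimal sketch of how such an extension could be defined, based on the equivalence described above (illustrative only; the actual patch may differ):

```ruby
# Hypothetical monkey patch on ActiveRecord::Relation.
class ActiveRecord::Relation
  def pluck_first(*columns)
    limit(1).pluck(*columns).first
  end

  def pluck_first!(*columns)
    items = limit(1).pluck(*columns)
    raise ActiveRecord::RecordNotFound if items.empty?
    items.first
  end
end

# Usage: User.where(admin: true).pluck_first(:id)
```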
Zeitwerk simplifies working with dependencies in dev and makes reloading class chains easier.
We no longer need to use Rails "require_dependency" anywhere and instead can just use standard
Ruby patterns to require files.
This is a far-reaching change and we expect some follow-ups here.
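As a standalone illustration of the convention Zeitwerk relies on (a hedged sketch; the paths and constant names here are hypothetical):

```ruby
require "zeitwerk"

loader = Zeitwerk::Loader.new
loader.push_dir("#{__dir__}/lib")  # lib/foo/bar_baz.rb is expected to define Foo::BarBaz
loader.setup

# Constants are resolved lazily on first reference, so there is no need
# for Rails' require_dependency hints anywhere in the code.
Foo::BarBaz.new
```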
This reduces the chance of errors where consumers of strings mutate inputs,
and reduces the app's memory usage.
The test suite passes now, but there may be some remaining issues, so we will run
a few sites on a branch prior to merging.
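Assuming the change freezes string literals (an inference from the description above), the class of error it prevents looks roughly like this:

```ruby
# frozen_string_literal: true

PREFIX = "topic-"

def append_slug!(str)
  str << "slug"            # mutates the receiver in place
end

append_slug!(PREFIX.dup)   # fine: the caller passed a copy
append_slug!(PREFIX)       # raises FrozenError instead of silently corrupting PREFIX
```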
* FIX: don't allow inviting more than the `max_allowed_message_recipients` setting allows (see the sketch after this list)
* add specs for guardian
* user preferences for auto-track shouldn't apply to PMs (a PM is automatically watched on visit)
Exclude PMs from "Automatically track topics I enter..." and "When I post in a topic, set that topic to..." user preferences
* groups take only 1 slot in a PM
* just return if the topic is a PM
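A rough sketch of the recipient-limit check from the first item (association and method names are assumptions; the actual guardian logic may differ):

```ruby
# Hypothetical guardian-style check: reject invites that would push a
# personal message past the max_allowed_message_recipients setting.
# A group counts as a single slot, as noted above.
def can_invite_to_pm?(topic, new_recipients_count)
  return true unless topic.private_message?

  current = topic.topic_allowed_users.count + topic.topic_allowed_groups.count
  current + new_recipients_count <= SiteSetting.max_allowed_message_recipients
end
```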
Introduce new patterns for direct SQL that are safe and fast.
MiniSql is not prone to the memory bloat that can happen with direct PG usage.
It also has an extremely fast materializer and a very convenient API:
- DB.exec(sql, *params) => runs sql, returns row count
- DB.query(sql, *params) => runs sql, returns usable objects (not a hash)
- DB.query_hash(sql, *params) => runs sql, returns an array of hashes
- DB.query_single(sql, *params) => runs sql, returns a flat one-dimensional array
- DB.build(sql) => returns a SQL builder
See more at: https://github.com/discourse/mini_sql
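For example (illustrative queries; the table and column names are just examples, and the builder usage follows the mini_sql README):

```ruby
user_id, topic_id = 1, 1   # example values

# Scalar / flat results
topic_count = DB.query_single("SELECT COUNT(*) FROM topics WHERE user_id = ?", user_id).first

# Materialized objects (not hashes): columns become methods on each row
DB.query("SELECT id, title FROM topics ORDER BY created_at DESC LIMIT 5").each do |t|
  puts "#{t.id}: #{t.title}"
end

# Mutations return the affected row count
updated = DB.exec("UPDATE topics SET views = views + 1 WHERE id = ?", topic_id)

# The builder composes a query from optional fragments
builder = DB.build("SELECT id FROM topics /*where*/ /*limit*/")
builder.where("archetype = ?", "regular").limit(10)
ids = builder.query_single
```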
Figuring out which topics a user has unread becomes a very expensive
operation over time.
Users can easily accumulate tens of thousands of tracking state rows
(one for every topic they ever visit).
When working out what a user has unread we need to join
the tracking state records to the topics table. This can very quickly
lead to cases where you need to scan through the entire topics table.
This commit optimises it so we always keep track of the "first" date
a user has unread topics. Then we can easily filter out all earlier
topics from the join.
We use pg functions instead of nested queries here to assist the
planner.
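Conceptually, the pruning looks like the sketch below (column names and the exact join shape are assumptions for illustration; the real queries use pg functions as noted):

```ruby
# Simplified idea: skip topics older than the user's first unread date so
# the join never scans the whole topics table. (first_unread_at is a
# hypothetical name here.)
def unread_topic_ids(user_id, first_unread_at)
  sql = <<~SQL
    SELECT t.id
    FROM topics t
    JOIN topic_users tu ON tu.topic_id = t.id AND tu.user_id = :user_id
    WHERE t.updated_at >= :first_unread_at
      AND tu.last_read_post_number < t.highest_post_number
  SQL

  DB.query_single(sql, user_id: user_id, first_unread_at: first_unread_at)
end
```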
- Regular users are not notified of whispers
- Regular users no longer have "stuck" topics in unread
- Additional tracking for staff highest post number
- Remove a bunch of unused columns in topics table
- present watched tags on the user prefs page
- automatically watch or unwatch old topics based on watch status
New watching and tracking logic takes care of handling old topics
(either with or without read state)
When you watch a topic you now watch it historically.
Also removes confusing warnings shown to the user.
We cap new and unread at 2/5 of SiteSetting.max_tracked_new_unread.
This dynamic capping is applied under 2 conditions:
1. New capping is applied once every 15 minutes in the periodical job; this effectively ensures that usually even super active sites are capped at 200 new items
2. Unread capping is applied if a user hits max_tracked_new_unread, meaning if new + unread == 500, we defer a job that runs within 15 minutes that will cap the user at 200 unread
This logic ensures that in the worst case a user gets "bad" numbers for 15 minutes and then the system goes ahead and fixes itself up.
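Worked through with the numbers above (the setting name is from the description; 500 is its example value):

```ruby
max_tracked = SiteSetting.max_tracked_new_unread  # e.g. 500
cap = max_tracked * 2 / 5                         # => 200

# 1. periodical job (every 15 minutes): trim each user's "new" topics down to `cap`
# 2. deferred job: once new + unread reaches max_tracked, trim unread down to `cap`
```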
- add missing index to user actions for fast retrieval by post
- add missing indexes to users for fast retrieval of staff
- only refresh topic_users liked/bookmarked cache for affected users
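A hypothetical sketch of the kind of index migration described in the first two items (the column choices and index options here are assumptions, not the real migration):

```ruby
class AddTrackingAndStaffIndexes < ActiveRecord::Migration[6.0]
  def change
    # fast retrieval of user actions by post
    add_index :user_actions, :target_post_id
    # fast retrieval of staff via a partial index
    add_index :users, :id, where: "admin OR moderator", name: "idx_users_staff"
  end
end
```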