Introduce new patterns for direct sql that are safe and fast.
MiniSql is not prone to memory bloat that can happen with direct PG usage.
It also has an extremely fast materializer and a very convenient API:
- DB.exec(sql, *params) => runs sql, returns the row count
- DB.query(sql, *params) => runs sql, returns usable objects (not hashes)
- DB.query_hash(sql, *params) => runs sql, returns an array of hashes
- DB.query_single(sql, *params) => runs sql, returns a flat one dimensional array
- DB.build(sql) => returns a sql builder
See more at: https://github.com/discourse/mini_sql
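A minimal usage sketch of those helpers (table and column names below
are illustrative, and the /*where*/ placeholder convention in the
builder is mini_sql's):

    topic_id = 1  # example values for the sketch
    user_id  = 1

    # exec: runs sql, returns the affected row count
    count = DB.exec("UPDATE topics SET views = views + 1 WHERE id = ?", topic_id)

    # query: returns materialized objects with a reader method per column
    DB.query("SELECT id, title FROM topics WHERE user_id = ?", user_id).each do |t|
      puts "#{t.id}: #{t.title}"
    end

    # query_single: flat array, handy for single-column selects
    ids = DB.query_single("SELECT id FROM topics WHERE user_id = ?", user_id)

    # build: compose sql incrementally, then run it
    builder = DB.build("SELECT id FROM topics /*where*/ /*limit*/")
    builder.where("user_id = ?", user_id)
    builder.limit(10)
    ids = builder.query_single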
Figuring out which topics a user has unread becomes a very expensive
operation over time. Users can easily accumulate tens of thousands of
tracking state rows (one for every topic they ever visit). To work out
what a user has unread we need to join the tracking state records to
the topic table, which can very quickly degrade into scanning the
entire topic table.
This commit optimises the lookup so we always keep track of the "first"
date a user has unread topics. We can then easily filter all earlier
topics out of the join.
We use PG functions instead of nested queries here to assist the query
planner.
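Roughly, the idea looks like the sketch below. The first_unread_at
column and the first_unread_topic_for function are assumed names for
illustration; the source only says we track the "first" unread date and
use a PG function for the lookup:

    # Migration sketch: persist the earliest date at which a user can
    # have unread topics, and expose it through a SQL function so the
    # planner can use it to bound the join.
    class AddFirstUnreadTracking < ActiveRecord::Migration[5.2]
      def up
        # assumed column; backfill/default handling omitted
        add_column :user_stats, :first_unread_at, :datetime

        execute <<~SQL
          CREATE OR REPLACE FUNCTION first_unread_topic_for(user_id integer)
          RETURNS timestamp AS $$
            SELECT COALESCE(first_unread_at, 'epoch'::timestamp)
            FROM user_stats
            WHERE user_stats.user_id = $1
          $$ LANGUAGE SQL STABLE
        SQL
      end

      def down
        execute "DROP FUNCTION IF EXISTS first_unread_topic_for(integer)"
        remove_column :user_stats, :first_unread_at
      end
    end

The unread query can then prune the join up front with a predicate
like topics.updated_at >= first_unread_topic_for(:user_id), instead of
joining every tracking state row a user has ever accumulated.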
- Regular users are not notified of whispers
- Regular users no longer have "stuck" topics in unread
- Additional tracking for staff highest post number
- Remove a bunch of unused columns in the topics table
- present watched tags on the user preferences page
- automatically watch or unwatch old topics based on watch status
The new watching and tracking logic takes care of handling old topics
(either with or without read state). When you watch a topic you now
watch it historically. Also removes confusing warnings shown to the
user.
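A hypothetical sketch of the historical part (watch_category_historically
is an invented name; TopicUser.change and TopicUser.notification_levels
follow the existing TopicUser helpers):

    # When a user starts watching, apply the notification level to the
    # existing (old) topics as well, not only to topics created later.
    def watch_category_historically(user, category)
      watching = TopicUser.notification_levels[:watching]

      Topic.where(category_id: category.id).pluck(:id).each do |topic_id|
        TopicUser.change(user.id, topic_id, notification_level: watching)
      end
    end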
We cap new and unread at 2/5ths of SiteSetting.max_tracked_new_unread.
This dynamic capping is applied under 2 conditions:
1. New capping is applied once every 15 minutes in the periodic job;
this effectively ensures that even super active sites usually stay
capped at 200 new items.
2. Unread capping is applied if a user hits max_tracked_new_unread:
if new + unread == 500, we defer a job that runs within 15 minutes
and caps the user at 200 unread.
This logic ensures that in the worst case a user sees "bad" numbers for
15 minutes before the system fixes itself up. (2/5ths of 500 is exactly
the 200 referenced above.)
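A minimal sketch of the cap and the deferred fix-up; only
SiteSetting.max_tracked_new_unread comes from the source, the method
and job names here are assumptions:

    # 2/5ths of the tracked maximum: with the 500 above this is 200
    def new_and_unread_cap
      (SiteSetting.max_tracked_new_unread * 2) / 5
    end

    # Hypothetical trigger: once combined new + unread hits the maximum,
    # defer a job (run within 15 minutes) that trims unread to the cap.
    def maybe_cap!(user, new_count, unread_count)
      if new_count + unread_count >= SiteSetting.max_tracked_new_unread
        Jobs.enqueue_in(15.minutes, :cap_unread, user_id: user.id, cap: new_and_unread_cap)
      end
    end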
- add missing index to user actions for fast retrieval by post
- add missing indexes to users for fast retrieval of staff
- only refresh topic_users liked/bookmarked cache for affected users
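A migration sketch for those indexes (column and index names are
assumptions based on the descriptions above):

    class AddMissingTrackingIndexes < ActiveRecord::Migration[5.2]
      def change
        # fast retrieval of user actions by post
        add_index :user_actions, :target_post_id

        # partial index so staff lookups avoid scanning the users table
        add_index :users, :id,
          where: "admin OR moderator",
          name: "idx_users_admin_or_moderator"
      end
    end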
Now that post numbers are monotonically increasing we should not need this job
Things should just self-correct as users browse along.
Corrected the job so it no longer resets dismissed posts, in case we
end up needing it.