This commit brings back some reports that were hidden or changed
by commit 14b436923c when
the site setting `use_legacy_pageviews` is false.
* Unhide the old “Consolidated Pageviews” report and rename it
to “Legacy Consolidated Pageviews”
* Add a legacy_page_view_total_reqs report called “Legacy Pageviews”,
which calculates pageviews in the same way the old page_view_total_reqs
report did.
This will allow admins who have switched over to _not_ use legacy
pageviews to better compare the old pageview stats with the new
browser-detection-based ones.
Constants should only ever be assigned once. The logical OR assignment
of a constant is a relic from before we used zeitwerk for
autoloading, when bugs could cause a file to be loaded twice, resulting in
constant redefinition warnings.
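For illustration, the before/after looks roughly like this (class and constant names are made up):

```ruby
# Before: `||=` guarded against the file being loaded twice under the old
# autoloading setup, which would otherwise produce "already initialized
# constant" warnings.
class PageviewReport
  SOURCES ||= %w[anon logged_in crawler].freeze
end

# After: with zeitwerk each file is loaded exactly once, so a plain single
# assignment is correct and makes the intent explicit.
class PageviewReport
  SOURCES = %w[anon logged_in crawler].freeze
end
```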
### UI changes
All of the UI changes described are gated behind the `use_legacy_pageviews`
site setting.
This commit changes the admin dashboard pageviews report to
use the "Consolidated Pageviews with Browser Detection" report
introduced in 2f2da72747 with
the following changes:
* The report name is changed to "Site traffic"
* The pageview count on the dashboard is now calculated using only the new method
* The old "Consolidated Pageviews" report is renamed as "Consolidated Legacy Pageviews"
* By default "known crawlers" and "other" sources of pageviews are hidden on the report
When `use_legacy_pageviews` is `true`, we do not show or allow running
the "Site traffic" report for admins. When `use_legacy_pageviews` is `false`,
we do not show or allow running the following legacy reports:
* consolidated_page_views
* consolidated_page_views_browser_detection
* page_view_anon_reqs
* page_view_logged_in_reqs
### Historical data changes
Also part of this change: since we introduced our new "Consolidated
Pageviews with Browser Detection" report, some admins have been confused by either:
* The lack of data before a certain date, since that data didn’t exist before
we started collecting it
* Comparing this report with the current "Consolidated Pageviews" report,
which rolls up "Other Pageviews" into "Anonymous Browser" and so it
appears inaccurate
All pageview data in the new report before the date when the _first_
anonymous or logged-in browser pageview was recorded is now hidden.
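A rough sketch of how such a cutoff can be applied when building the report (model, column, and constant names here are illustrative, not the actual implementation):

```ruby
# Illustrative only: find the first date with any browser-detection pageview
# and drop report rows that predate it, since those dates have no browser data.
cutoff =
  ApplicationRequest
    .where(req_type: BROWSER_PAGEVIEW_REQ_TYPES) # assumed list of browser req types
    .minimum(:date)

report.data.select! { |row| row[:x] >= cutoff } if cutoff
```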
Allow admins to create a custom flag which requires an additional message.
I decided to rename the old `custom_flag` to `require_message`, as it is more descriptive.
Adds a report to show the top 100 most viewed topics in a date range,
combining logged in and anonymous views. Can be filtered by category.
This is a followup to 527f02e99f
and d1191b7f5f. We are also going to
be able to see this data in a new topic map, but this admin report
helps to see an overview across the forum for a date range.
Followup 94fe31e5b3,
change the color of the "Known Crawler" bar on the
new "Consolidated Pageviews with Browser Detection (Experimental)"
report to be purple, like it was on the original
"Consolidated Pageviews" report to allow for easier
visual comparison.
Also moves the report colors to named keys in a hash
for easier reference, rather than having to look up the
index in the array all the time.
In the "Consolidated Pageviews with Browser Detection (Experimental)"
report we used the same color for "Known Crawler" and "Other pageviews"
which makes the report confusing to look at, this commit makes them
different.
Our 'page_view_crawler' / 'page_view_anon' metrics are based purely on the User Agent sent by clients. This means that 'badly behaved' bots which are imitating real user agents are counted towards 'anon' page views.
This commit introduces a new method of tracking visitors. When an initial HTML request is made, we assume it is a 'non-browser' request (i.e. a bot). Then, once the JS application has booted, we notify the server to count it as a 'browser' request. This reliance on a JavaScript-capable browser matches up more closely to dedicated analytics systems like Google Analytics.
Existing data collection and graphs are unchanged. Data collected via the new technique is available in a new 'experimental' report.
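Conceptually, the new counting flow looks something like this (names and structure are illustrative, not the actual middleware or routes):

```ruby
# Illustrative sketch of the two-step counting described above.
module BrowserPageviewTracking
  # Step 1: an initial HTML request is assumed to come from a non-browser
  # client (i.e. a bot) and is counted as such.
  def self.count_initial_html_request
    ApplicationRequest.increment!(:page_view_crawler) # assumed counter name
  end

  # Step 2: once the JS application has booted, it notifies the server and
  # the visit is counted as a real browser pageview instead.
  def self.count_browser_boot
    ApplicationRequest.increment!(:page_view_anon_browser) # assumed counter name
  end
end
```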
We were incorrectly using `return` in a block, which was causing exceptions at runtime. These exceptions were not causing many issues, as they occur inside a defer block.
While working on writing a test for this specific case, I noticed that our `upsert_custom_fields` function was using Rails' `update_all`, which does not update the `updated_at` timestamp. This commit also fixes that and adds a test for it.
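The timestamp part of the fix amounts to setting `updated_at` explicitly in the bulk update, since `update_all` writes straight to the database and skips Rails' automatic timestamping. Roughly (association and column names are illustrative):

```ruby
# `update_all` bypasses callbacks and automatic timestamping, so the
# timestamp has to be included in the update itself.
record._custom_fields
  .where(name: field_name)
  .update_all(value: new_value, updated_at: Time.zone.now)
```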
Removes duplication from LimitedEdit when determining who can edit
posts, and also removes the old trust level setting check
since it's no longer necessary.
Also makes it so staff can always edit, since can_edit_post?
already has a staff escape hatch.
Affects the following settings:
* whispers_allowed_groups
* anonymous_posting_allowed_groups
* personal_message_enabled_groups
* shared_drafts_allowed_groups
* here_mention_allowed_groups
* uploaded_avatars_allowed_groups
* ignore_allowed_groups
This turns off `client: true` for these group-based settings,
because there is no guarantee that the current user gets all
their group memberships serialized to the client. Better to check
server-side first.
The `deprecate_column` helper would change its behavior based on the current `Discourse::VERSION`. This means that 'finalizing' a stable release introduces a previously untested behavior change.
Much better to keep it as a deprecation until manual action is taken to introduce the breaking change.
When updating the position of a category, the server correctly updates the position in the database, but the response sent back to the client still contains the old position, causing it to "flip back" in the UI when saving. Only reloading the page will reveal the new, correct value.
The Positionable concern correctly positions the record and updates the database, but we don't assign the new position to the already instantiated model.
This change just assigns self.position after the database update. 😎
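In other words, something along these lines in the Positionable concern (simplified sketch, not the exact code):

```ruby
# Simplified sketch: the reordering writes the new position with SQL, so the
# in-memory attribute has to be assigned explicitly for the serialized
# response to reflect it.
def move_to(target_position)
  # ... existing logic that shifts sibling rows and writes the new position ...
  self.position = target_position # keep the already-instantiated model in sync
end
```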
We're changing the implementation of trust levels to use groups. Part of this is to have site settings that reference trust levels use groups instead. It converts the min_trust_to_edit_post site setting to edit_post_allowed_groups.
The old implementation will co-exist for a short period while I update any references in plugins and themes.
Operate on one key at a time, to make it clearer what's going on.
This also fixes a bug where array integer fields would get re-written
even when there wasn't a change.
Array custom fields use separate rows for each value, but whenever we
update an array, we always destroy the existing rows and create new
ones. Therefore, there's no benefit over using the json type.
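A simplified sketch of the per-key approach (association and column names follow the usual custom fields layout, but this is not the exact code):

```ruby
# Simplified sketch: handle one custom field key at a time, and skip the
# destroy/recreate cycle when the stored values already match. Stored values
# are strings, so integer arrays are normalised before comparing; this
# mismatch is what previously caused unchanged integer arrays to be rewritten.
fields.each do |name, value|
  new_values = Array(value).map(&:to_s)
  existing = _custom_fields.where(name: name).order(:id).pluck(:value)
  next if existing == new_values # nothing changed, nothing to rewrite

  _custom_fields.where(name: name).destroy_all
  new_values.each { |v| _custom_fields.create!(name: name, value: v) }
end
```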
This is part 1 of 3, split up of PR #23529. This PR refactors the
webauthn code to support passkey authentication/registration.
Passkeys aren't used yet; that is coming in PRs 2 and 3.
Co-authored-by: Alan Guo Xiang Tan <gxtan1990@gmail.com>
They're both constant per-instance values, so there is no need to store them
in the session. This also makes the code a bit more readable by moving
the `session_challenge_key` method up to the `DiscourseWebauthn` module.
Currently when we decide we're going to drop a column in the future we just mark it with a TODO comment and add it to ignored_columns. This makes it instantly unavailable, and we mostly forget about the TODO in the end. 😬
This change adds a HasDeprecatedColumns concern which offers a little bit more flexibility. We can still simulate the old behaviour by setting drop_from to the current version, but we can also set it to a future version, causing it to raise a deprecation warning until then if used.
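Usage looks roughly like this (column names and version numbers are made up for the example):

```ruby
# Hypothetical usage of the new concern.
class Topic < ActiveRecord::Base
  include HasDeprecatedColumns

  # Simulates the old behaviour: with drop_from set to the current version,
  # the column is unavailable right away.
  deprecate_column :image_url, drop_from: "3.2.0"

  # With a future version, the column keeps working but raises a deprecation
  # warning when used, until Discourse::VERSION reaches 3.4.0.
  deprecate_column :bookmark_count, drop_from: "3.4.0"
end
```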
This commit removes any logic in the app and in specs around
enable_experimental_hashtag_autocomplete and deletes some
old category hashtag code that is no longer necessary.
It also adds a `slug_ref` category instance method, which
generates a reference like `parent:child` for a category,
with an optional depth, and is used by hashtags. Also refactors
PostRevisor, which was using CategoryHashtagDataSource directly,
which is a no-no.
Deletes the old hashtag markdown rule as well.
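For the `slug_ref` helper mentioned above, a minimal sketch of the idea (simplified, not the exact implementation):

```ruby
# Minimal sketch: build a "parent:child" style reference for a category,
# walking up at most `depth` ancestors.
def slug_ref(depth: 1)
  slugs = [slug]
  ancestor = parent_category
  depth.times do
    break if ancestor.nil?
    slugs.unshift(ancestor.slug)
    ancestor = ancestor.parent_category
  end
  slugs.join(":")
end
```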
* FIX Add 'Ignored' flags to Moderator Activity report
The Moderator Activity query didn’t include the number of deferred flags in the Flags Reviewed totals. As this number is designed to reflect how many flags a moderator has seen, reviewed, and made a judgement on, the Ignored ones should also be included.
Co-authored-by: Jarek Radosz <jradosz@gmail.com>
Adding a filter without a type parameter has been deprecated for the last three years, and was marked for removal in 2.9.0.
During this time we have had a few deprecation warnings in logs coming from Reports::Bookmarks.
The fallback was to set the type to the name of the filter. This change just passes the type (same as name) explicitly instead, and removes the deprecation fallback.
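In practice the call sites just become explicit about the type (illustrative sketch based on the description above, not an exact excerpt):

```ruby
# Before: relied on the deprecated fallback that used the filter's name as its type.
report.add_filter("category", default: category_id)

# After: the type (same value as the name) is passed explicitly.
report.add_filter("category", type: "category", default: category_id)
```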
This patch sets some limits on custom fields:
- an entity can’t have more than 100 custom fields defined on it
- a custom field can’t hold a value longer than 10,000,000 characters
The current implementation of custom fields is relatively complex and
does an upsert in SQL at some point, thus preventing us from simply adding an
`ActiveRecord` validation on the custom field model without having to
rewrite part of the existing logic.
That’s one of the reasons this patch implements validations in the
`HasCustomFields` module, adding them to the model that includes it.
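Conceptually, the validations gained by the including model look like this (limits from the description above; implementation details simplified and only the validation portion shown):

```ruby
# Simplified sketch of the validations added to any model that includes
# the custom fields concern.
module HasCustomFields
  extend ActiveSupport::Concern

  MAX_CUSTOM_FIELDS = 100
  MAX_CUSTOM_FIELD_LENGTH = 10_000_000

  included { validate :custom_fields_within_limits }

  private

  def custom_fields_within_limits
    if custom_fields.size > MAX_CUSTOM_FIELDS
      errors.add(:base, "can't have more than #{MAX_CUSTOM_FIELDS} custom fields")
    end

    if custom_fields.values.any? { |v| v.to_s.length > MAX_CUSTOM_FIELD_LENGTH }
      errors.add(:base, "custom field values can't exceed #{MAX_CUSTOM_FIELD_LENGTH} characters")
    end
  end
end
```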
This amends it so our cached-counting-reliant specs run in synchronize mode.
When running async, there are situations where data is left over in the table
after a transactional test. This means that repeat runs of the test suite
fail.
Previously, Discourse's password hashing was hard-coded to a specific algorithm and parameters. Any changes to the algorithm or parameters would essentially invalidate all existing user passwords.
This commit introduces a new `password_algorithm` column on the `users` table. This persists the algorithm/parameters which were used to generate the hash for a given user. All existing rows in the users table are assumed to be using Discourse's current algorithm/parameters. With this data stored per-user in the database, we'll be able to keep existing passwords working while adjusting the algorithm/parameters for newly hashed passwords.
Passwords which were hashed with an old algorithm will be automatically re-hashed with the new algorithm when the user next logs in.
Values in the `password_algorithm` column are based on the PHC string format (https://github.com/P-H-C/phc-string-format/blob/master/phc-sf-spec.md). Discourse's existing algorithm is described by the string `$pbkdf2-sha256$i=64000,l=32$`
To introduce a new algorithm and start using it, make sure it's implemented in the `PasswordHasher` library, then update `User::TARGET_PASSWORD_ALGORITHM`.
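For reference, the PHC-style value breaks down into an algorithm id plus its parameters; a quick sketch of pulling Discourse's current string apart (the parsing code is illustrative only):

```ruby
# Illustrative parsing of the PHC-style string stored in `password_algorithm`.
algorithm = "$pbkdf2-sha256$i=64000,l=32$"

_, id, params = algorithm.split("$")
parsed = params.split(",").to_h { |pair| pair.split("=") }

id               # => "pbkdf2-sha256"  (PBKDF2 with SHA-256)
parsed["i"].to_i # => 64000            (iterations)
parsed["l"].to_i # => 32               (hash length in bytes)
```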
The algorithm failed to find the correct category by slug when there are multiple sub-sub-categories with the same child-category name and the first child doesn't have the correct grandchild.
So, searching for "child / grandchild" worked in the following case, where it found (3):
- (1) parent 1
- (2) child
- (3) grandchild
- (4) parent 2
- (5) child
- (6) grandchild
But it failed to find the grandchild in the following case:
- (1) parent 1
- (2) child
- (4) parent 2
- (5) child
- (6) grandchild
And this also fixes a flaky spec by forcing categories to always be ordered by `parent_category_id` and `id`.
This makes it possible to partly revert 60990aab55
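A simplified sketch of the corrected lookup: instead of stopping at the first category whose slug matches the child segment, every matching candidate is checked for the requested grandchild (not the actual query):

```ruby
# Simplified sketch: resolve a "child/grandchild" slug path by considering
# every category whose slug matches the child segment, not just the first one.
def find_by_slug_path(child_slug, grandchild_slug)
  Category
    .where(slug: child_slug)
    .order(:parent_category_id, :id) # deterministic ordering, as noted above
    .each do |candidate|
      match = Category.find_by(parent_category_id: candidate.id, slug: grandchild_slug)
      return match if match
    end

  nil
end
```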
This is just cleaning up a TODO I had to add more specs
to this controller -- there are more thorough tests on the
actual HashtagService class and the type-specific hashtag
classes.