For some reason, despite the iframe also including a
```
<meta name="robots" content="noindex">
```
tag, Google is still indexing the embed/comment URLs. This causes links like http://\<site>/embed/comments\?topic_id\=6366 to be indexed instead of the topic.
This commit adds the `noindex` directive explicitly in the header as well.
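If "header" here means the HTTP response headers, a minimal sketch of such a directive in a Rails controller could look like the following (the controller name and hook are assumptions, not the actual change):
```ruby
# Hypothetical sketch: send an X-Robots-Tag header so crawlers skip these responses.
class EmbedController < ApplicationController
  before_action :block_indexing

  private

  def block_indexing
    response.headers["X-Robots-Tag"] = "noindex"
  end
end
```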
Introduced back in 2022 in
e3d495850d,
our new, more specific message-id format for inbound and
outbound emails has now been in use for a very long time,
so we can remove support for the old formats:
`topic/:topic_id/:post_id.:random@:host`
`topic/:topic_id@:host`
`topic/:topic_id.:random@:host`
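For reference, a hedged sketch of patterns matching those legacy formats (illustrative only, not the exact code that was removed):
```ruby
# Patterns for the legacy message-id formats listed above.
OLD_MESSAGE_ID_PATTERNS = [
  %r{\Atopic/(\d+)/(\d+)\.(\w+)@(.+)\z}, # topic/:topic_id/:post_id.:random@:host
  %r{\Atopic/(\d+)@(.+)\z},              # topic/:topic_id@:host
  %r{\Atopic/(\d+)\.(\w+)@(.+)\z},       # topic/:topic_id.:random@:host
]

def legacy_message_id?(message_id)
  OLD_MESSAGE_ID_PATTERNS.any? { |pattern| pattern.match?(message_id) }
end
```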
In https://github.com/discourse/discourse/pull/26506, service objects were moved to Core.
However, `ServiceRunner` should be moved as well, mostly so CI can run effortlessly without loading plugins.
We are seeing a weird resolution error on GitHub Actions with the
following backtrace:
```
Failure/Error:
visit File.join(
GlobalSetting.relative_url_root || "",
"/session/#{user.encoded_username}/become.json?redirect=false",
)
Socket::ResolutionError:
getaddrinfo: Temporary failure in name resolution
```
Switch to using `127.0.0.1` instead of forcing name resolution.
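A minimal sketch of the idea, assuming the host is configured through Capybara (the actual change in the test suite may differ):
```ruby
require "capybara"

# Point the test server and app host at the loopback IP so no DNS lookup is
# needed (assumed configuration, not the actual change).
Capybara.server_host = "127.0.0.1"
Capybara.app_host = "http://127.0.0.1"
```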
In 958437e7dd we ensured that the email summaries are properly sent based on `digest_attempted_at` for people who barely/never visit the forum.
This fixed the "frequency" of the email summaries but introduced a bug where, for some users, the digest would be sent even though there was nothing new since the last one.
The logic we used to compute the threshold date for the content to be included in the digest was:
```ruby
@since = opts[:since] || user.last_seen_at || user.user_stat&.digest_attempted_at || 1.month.ago
```
It was working as expected for users who have never been seen, but for users who have connected at least once we would use their `last_seen_at` date as the threshold date for the content to be sent in a summary 😬
This fix changes the logic to use the most recent date amongst `last_seen_at`, `digest_attempted_at` and `1.month.ago`, so it correctly handles the cases where
- user has never been seen nor emailed a summary
- user has been seen recently but hasn't been sent a summary in a while
- user has been sent a summary recently but hasn't been seen in a while.
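A hedged sketch of the corrected threshold computation, building on the snippet above (the actual code may differ slightly):
```ruby
# Use the most recent of the three dates as the content threshold.
@since =
  opts[:since] ||
    [user.last_seen_at, user.user_stat&.digest_attempted_at, 1.month.ago].compact.max
```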
This commit moves the logic for crawler rate limits out of the application controller and into the request tracker middleware. The reason for this move is to apply rate limits to all crawler requests instead of just the requests that make it to the application controller. Some requests are served early from the middleware stack without reaching the Rails app for performance reasons (e.g. `AnonymousCache`), which results in crawlers getting 200 responses even though they've reached their limits and should be getting 429 responses.
Internal topic: t/128810.
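As an illustration only (not the actual request tracker code), a crawler rate limit enforced at the Rack middleware level might look roughly like this:
```ruby
require "rack"

# Illustrative sketch: reject over-limit crawler requests before they can be
# served by AnonymousCache or the Rails app.
class CrawlerRateLimiter
  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)

    if crawler?(request) && limit_exceeded?(request)
      return [429, { "Retry-After" => "60" }, ["Too Many Requests"]]
    end

    @app.call(env)
  end

  private

  def crawler?(request)
    request.user_agent.to_s.match?(/bot|crawler|spider/i)
  end

  def limit_exceeded?(request)
    # e.g. increment and check a counter keyed by user agent; omitted in this sketch.
    false
  end
end
```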
This commit splits out the updating of `TopicUser#last_read_post_number` in
`TopicUser.ensure_consistency!` into a new
`TopicUser.update_last_read_post_number` method, which
`PostTiming.pretend_read` will now call instead. Previously,
`PostTiming.pretend_read` called `TopicUser.ensure_consistency!`, which in
turn calls `TopicUser.update_post_action_cache`, but that is
unnecessary for `PostTiming.pretend_read` since it does not
affect the `TopicUser#liked` or `TopicUser#bookmarked` columns which
`TopicUser.update_post_action_cache` updates. As the query in
`TopicUser.update_post_action_cache` can be expensive, we should avoid
calling it when it isn't necessary.
One such scenario where it is unnecessary is when we are closing a
topic.
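A hedged outline of the split (method names from the description above; signatures and bodies are illustrative, not the real implementation):
```ruby
# Hypothetical outline; real signatures and SQL differ.
class TopicUser
  def self.ensure_consistency!(topic_id = nil)
    update_post_action_cache(topic_id: topic_id)     # expensive; refreshes liked/bookmarked
    update_last_read_post_number(topic_id: topic_id) # newly extracted method
  end

  def self.update_last_read_post_number(topic_id: nil)
    # recalculates TopicUser#last_read_post_number only
  end

  def self.update_post_action_cache(topic_id: nil)
    # refreshes TopicUser#liked and TopicUser#bookmarked
  end
end

class PostTiming
  # pretend_read no longer needs the expensive post action cache refresh
  def self.pretend_read(topic_id)
    TopicUser.update_last_read_post_number(topic_id: topic_id)
  end
end
```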
This gives us daily fidelity of topic view stats.
A new table stores a row per topic viewed per day, tracking
anonymous and logged-in views.
We also have a new endpoint, `/t/ID/views-stats.json`, to get the statistics for the topic.
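A hypothetical sketch of what such a table could look like (table and column names are assumptions, not the actual schema):
```ruby
# Hypothetical migration: one row per topic per day, with separate counters
# for anonymous and logged-in views.
class CreateTopicViewStats < ActiveRecord::Migration[7.0]
  def change
    create_table :topic_view_stats do |t|
      t.integer :topic_id, null: false
      t.date :viewed_at, null: false
      t.integer :anonymous_views, null: false, default: 0
      t.integer :logged_in_views, null: false, default: 0
    end

    add_index :topic_view_stats, %i[topic_id viewed_at], unique: true
  end
end
```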
This commit adds a `DISCOURSE_DUMP_BACKTRACES_ON_UNICORN_WORKER_TIMEOUT`
environment variable that allows us to dump the backtraces of all threads of
a Unicorn worker 2 seconds before it times out. In development,
backtraces are dumped to `STDOUT`, and in production they are dumped to
`unicorn.stdout.log`.
We want to dump all the backtraces to make it easier to identify the
cause of a Unicorn worker timing out.
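The core of dumping every thread's backtrace can be as simple as the following sketch (the actual hook into Unicorn's timeout handling is more involved):
```ruby
# Print a backtrace for every thread in the current process.
Thread.list.each do |thread|
  $stdout.puts "Backtrace for #{thread.inspect}:"
  $stdout.puts (thread.backtrace || ["<no backtrace available>"]).join("\n")
  $stdout.puts
end
```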
Prior to this fix we had two pieces of logic to detect whether a user is active or not:
- idle codepath on the frontend
- online user ids on the backend
The frontend solution is not very reliable, and both solutions are just trying to be too smart, leaving a lot of people questioning why they sometimes receive a notification and sometimes don't. This commit removes all this logic and replaces it with much simpler rules:
- you won't receive notifications for the channel you are currently watching
- we won't play a sound more than once every 3 seconds
The `onNotification` signature is:
```
onNotification(data, siteSettings, user, appEvents)
```
And we were not passing `appEvents`.
Also, explicitly inject `currentUser` into the `chat-notification-manager` service.
When some text is selected inside a post, we offer the ability to "fast edit" the selected text without opening the composer.
However, there are certain cases where this doesn't work quite as expected, because some text in the "cooked" version of the post isn't literally present in the "raw" version of the post.
This ensures that whenever someone selects text within
- a quote
- a onebox
- an encrypted message
- a "cooked" date
we directly show the composer instead of showing the fast edit modal and then leaving the user with an invisible error.
Internal ref. t/128400
* FIX: When creating new message via URL do not redirect
If a user clicks on the `/new-message` route from inside the instance, we
redirect them to the `/latest` page, which is only intended when the
user is coming from an external site. This commit checks for this
condition and only redirects when the user is coming from an external source.
This also makes the behavior consistent with the `new-topic` route.
Internal topic reference: `/t/-/129523/`
* user_profiles - `location`
* users - `date_of_birth`
* topics - `pinned_at`, `pinned_until`, `pinned_globally`
This also includes changes to correctly import PMs. Currently, PM topics
are skipped because of a check in the `import_users` step which requires `category_id`
to be present.
We consider that you should always receive a notification sound when someone speaks directly with you in chat.
This commit also refactors the way we play audio in chat to make it simpler, and throttles it to at most once every 3 seconds.
We also added a safeguard to ensure we won't play sounds for old messages; this can happen when the message bus is catching up on the backlog (e.g. in an inactive tab).
When users click a link that points to an existing group chat, we should reopen that chat instead of creating a new group chat so users can more easily continue ongoing conversations.
This commit updates `S3Inventory#files` to ignore S3 inventory files
whose `last_modified` timestamp is not at least 2 days newer than the
`BackupMetadata.last_restore_date` timestamp.
This check was previously only in `Jobs::EnsureS3UploadsExistence`, but
`S3Inventory` can also be used via Rake tasks, so this protection needs
to be in `S3Inventory` and not in the scheduled job.
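A hedged sketch of the kind of filtering this describes (the method name and data shape are assumptions; assumes a Rails/ActiveSupport context):
```ruby
# Keep only inventory files generated at least 2 days after the last restore.
def usable_inventory_files(inventory_files)
  restore_date = BackupMetadata.last_restore_date

  inventory_files.select do |file|
    restore_date.nil? || file[:last_modified] >= restore_date + 2.days
  end
end
```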