mirror of https://github.com/fish-shell/fish-shell.git
Commit 8e17d29e04
This PR is aimed at improving how job ids are assigned. Previously, a job id was consumed by functions (and thus aliases). Since functions are commonly used as command wrappers, this results in awkward job id assignments. For example, a user who, like me, has just made the jump from vim to neovim might create the following alias:

```
alias vim=nvim
```

Before this change, if the user ran `vim` after setting up this alias, backgrounded it (^Z), and ran `jobs`, the output might be:

```
Job Group State   Command
2   60267 stopped nvim $argv
```

If the user then opened another vim (nvim) session, backgrounded it, and ran `jobs` again, they might see:

```
Job Group State   Command
4   70542 stopped nvim $argv
2   60267 stopped nvim $argv
```

These job ids feel unnatural, especially when transitioning away from e.g. bash, where job ids are incremented sequentially (and aliases/functions don't consume a job id). See #6053 for more details.

As @ridiculousfish pointed out in https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400, we want to elide a job's job id if it corresponds to a single function in the foreground. This translates to the following prerequisites:

- The job must correspond to a single process (i.e. the job continuation must be empty).
- The job must be in the foreground (i.e. `&` wasn't appended).
- The job's single process must resolve to a function invocation.

If all of these conditions hold, we mark the job as "internal" and remove it from consideration whenever any infrastructure interacts with jobs / job ids.

I saw two paths to implement these requirements:

- At job-creation time, calculate whether the job is "internal", track such ids in a separate list, and introduce a new flag so that e.g. `jobs` doesn't list internal jobs. I started implementing this route but quickly realized I was computing information that `populate_job_process` would compute later anyway (e.g. "is this job a single process" and "is this job's statement a function"). It also added some weird complexity to the job system: after the change there were two job id lists AND an additional flag to take into account.
- Once a function is about to be executed, release the current job's job id if the prerequisites are satisfied (which by that point have been fully computed). I opted for this solution since it seems cleaner.

In this implementation, "releasing a job id" means both calling `release_job_id` and setting the internal job_id member variable to -1. The former lets subsequent child jobs reuse that same job id (so the situation described above doesn't occur), and the latter ensures that no other job / job id infrastructure interacts with these jobs, because valid jobs have positive job ids. The second operation makes job_id non-const, which leads to the code changes outside of `exec.c`: a codemod from `job_t::job_id` to `job_t::job_id()` and moving the old member variable to a non-const private `job_t::job_id_`.

Note: it's very possible I missed something and setting the job id to -1 will break some other infrastructure; please let me know if so! I tried to run `make/ninja lint`, but a bunch of unrelated issues appeared (e.g. `fatal error: 'config.h' file not found`). I did successfully run clang-format (`git clang-format -f`) and the tests, though.

This PR closes #6053.
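To make the id-reuse effect concrete, here is a minimal, self-contained sketch of a job-id pool in which releasing an id lets the next job claim it. This is not fish code and none of the names (`job_id_pool_t`, `acquire`, `release`) come from this PR; it only illustrates why calling `release_job_id` for a foreground function job keeps the ids that `jobs` reports low and sequential.

```
#include <cstdio>
#include <set>

// Toy job-id pool (illustrative only, not fish's implementation): the smallest
// id not currently in use is handed out, so a released id becomes reusable.
class job_id_pool_t {
   public:
    int acquire() {
        int id = 1;
        while (in_use_.count(id)) id++;
        in_use_.insert(id);
        return id;
    }
    void release(int id) { in_use_.erase(id); }

   private:
    std::set<int> in_use_;
};

int main() {
    job_id_pool_t pool;
    int wrapper = pool.acquire();  // the `vim` function wrapper would get id 1
    pool.release(wrapper);         // prerequisites hold, so its id is released
    int child = pool.acquire();    // the nvim job now reuses id 1 instead of id 2
    std::printf("child job id: %d\n", child);  // prints 1
    return 0;
}
```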
110 lines · 3.8 KiB · C++
// Implementation of the disown builtin.
#include "config.h"  // IWYU pragma: keep

#include "builtin_disown.h"

#include <cerrno>
#include <csignal>
#include <set>

#include "builtin.h"
#include "common.h"
#include "fallback.h"  // IWYU pragma: keep
#include "io.h"
#include "parser.h"
#include "proc.h"
#include "wutil.h"  // IWYU pragma: keep

/// Helper for builtin_disown.
static int disown_job(const wchar_t *cmd, parser_t &parser, io_streams_t &streams, job_t *j) {
    if (j == nullptr) {
        streams.err.append_format(_(L"%ls: Unknown job\n"), cmd);
        builtin_print_error_trailer(parser, streams.err, cmd);
        return STATUS_INVALID_ARGS;
    }

    // Stopped jobs would remain suspended once disowned, so send them SIGCONT and tell the user.
    if (j->is_stopped()) {
        killpg(j->pgid, SIGCONT);
        const wchar_t *fmt =
            _(L"%ls: job %d ('%ls') was stopped and has been signalled to continue.\n");
        streams.err.append_format(fmt, cmd, j->job_id(), j->command_wcstr());
    }

    // We cannot directly remove the job from the jobs() list as `disown` might be called
    // within the context of a subjob, which would cause the parent job to crash in exec_job().
    // Instead, we set a flag and the parser removes the job from the jobs list later.
    j->mut_flags().disown_requested = true;
    add_disowned_pgid(j->pgid);

    return STATUS_CMD_OK;
}

/// Builtin for removing jobs from the job list.
int builtin_disown(parser_t &parser, io_streams_t &streams, wchar_t **argv) {
    const wchar_t *cmd = argv[0];
    int argc = builtin_count_args(argv);
    help_only_cmd_opts_t opts;

    int optind;
    int retval = parse_help_only_cmd_opts(opts, &optind, argc, argv, parser, streams);
    if (retval != STATUS_CMD_OK) return retval;

    if (opts.print_help) {
        builtin_print_help(parser, streams, cmd);
        return STATUS_CMD_OK;
    }

    if (argv[1] == nullptr) {
        // Select last constructed job (ie first job in the job queue) that is possible to disown.
        // Stopped jobs can be disowned (they will be continued).
        // Foreground jobs can be disowned.
        // Even jobs that aren't under job control can be disowned!
        job_t *job = nullptr;
        for (const auto &j : parser.jobs()) {
            if (j->is_constructed() && (!j->is_completed())) {
                job = j.get();
                break;
            }
        }

        if (job) {
            retval = disown_job(cmd, parser, streams, job);
        } else {
            streams.err.append_format(_(L"%ls: There are no suitable jobs\n"), cmd);
            retval = STATUS_CMD_ERROR;
        }
    } else {
        std::set<job_t *> jobs;

        // If one argument is not a valid pid (i.e. integer >= 0), fail without disowning anything,
        // but still print errors for all of them.
        // Non-existent jobs aren't an error, but information about them is useful.
        // Multiple PIDs may refer to the same job; include the job only once by using a set.
        for (int i = 1; argv[i]; i++) {
            int pid = fish_wcstoi(argv[i]);
            if (errno || pid < 0) {
                streams.err.append_format(_(L"%ls: '%ls' is not a valid job specifier\n"), cmd,
                                          argv[i]);
                retval = STATUS_INVALID_ARGS;
            } else {
                if (job_t *j = parser.job_get_from_pid(pid)) {
                    jobs.insert(j);
                } else {
                    streams.err.append_format(_(L"%ls: Could not find job '%d'\n"), cmd, pid);
                }
            }
        }
        if (retval != STATUS_CMD_OK) {
            return retval;
        }

        // Disown all target jobs
        for (const auto &j : jobs) {
            retval |= disown_job(cmd, parser, streams, j);
        }
    }

    return retval;
}