When executing a buffered block or builtin, the usual approach is to
execute, collect output in a string, and then write that string to
stdout, or wherever the redirections say. Similarly for stderr.
If we get no output, we can elide the write entirely, which means
skipping the background thread. In that case we just mark the process
as finished immediately.
We do this in multiple locations, which is confusing. Factor them all
together into a new function, run_internal_process_or_short_circuit.
This reduces the syscall count for `fish -c exit` from 651 to 566.
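A minimal sketch of the short-circuit, assuming simplified types; the
real fish signatures differ:
```
#include <string>
#include <utility>

struct process_t {
    bool completed = false;
};

// Stub for the buffered path, which spawns a background output thread.
static void run_internal_process(process_t *p, std::string outdata,
                                 std::string errdata) {
    (void)p;
    (void)outdata;
    (void)errdata;
    // ... write outdata/errdata from a background thread (elided) ...
}

// If both streams are empty there is nothing to write, so skip the
// background thread and mark the process finished immediately.
static void run_internal_process_or_short_circuit(process_t *p,
                                                  std::string outdata,
                                                  std::string errdata) {
    if (outdata.empty() && errdata.empty()) {
        p->completed = true;
    } else {
        run_internal_process(p, std::move(outdata), std::move(errdata));
    }
}
```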
We don't attempt to *cache* the pgrp or anything; we just call
getpgrp() once when we're about to execute the job, to see whether we
are in the foreground and to assign it to the job, instead of once to
check the foreground and once to give it to the job.
Caching it with a simple `static` would get the count down to 480, but
it's possible for fish to have its pgroup changed.
Store the entire function declaration, not just its job list.
This allows us to extract the body of the function complete with any
leading comments and indents.
Fixes #5285
In particular, this allows `true && time true` and `true; and time true`,
as well as both `time not true` and `not time true` (like bash).
time is valid only as a job _prefix_, so `true | time true` could call
`/bin/time` (same as in bash).
See discussion in #6442
Extend commit 8e17d29e04 to block processes, for example
`begin; stuff; end`, as well as if/while blocks.
Note there's an existing optimization where we do not create a job for a
block if it has no redirections.
This PR is aimed at improving how job ids are assigned. In particular,
prior to this commit, a job id would be consumed by functions (and
thus aliases). Since it's common to use functions as command wrappers,
this results in awkward job id assignments.
For example, if the user is like me and has just made the jump from
vim to neovim, then they might create the following alias:
```
alias vim=nvim
```
Prior to this commit, if the user ran `vim` after setting up this
alias, backgrounded it (^Z), and ran `jobs`, then the output might be:
```
Job  Group  State    Command
2    60267  stopped  nvim $argv
```
If the user subsequently opened another vim (nvim) session, backgrounded
and ran jobs then they might see what follows:
```
Job  Group  State    Command
4    70542  stopped  nvim $argv
2    60267  stopped  nvim $argv
```
These job ids feel unnatural, especially when transitioning away from
e.g. bash, where job ids are sequentially incremented (and
aliases/functions don't consume a job id).
See #6053 for more details.
As @ridiculousfish pointed out in
https://github.com/fish-shell/fish-shell/issues/6053#issuecomment-559899400,
we want to elide a job's job id if it corresponds to a single function in the
foreground. This translates to the following prerequisites:
- A job must correspond to a single process (i.e. the job continuation
must be empty)
- A job must be in the foreground (i.e. `&` wasn't appended)
- The job's single process must resolve to a function invocation
If all of these conditions are true then we should mark a job as
"internal" and somehow remove it from consideration when any
infrastructure tries to interact with jobs / job ids.
I saw two paths to implement these requirements:
- At the time of job creation, calculate whether or not a job is
  "internal" and use a separate list of job ids to track their ids.
  Additionally, introduce a new flag denoting that a job is internal so
  that e.g. `jobs` doesn't list internal jobs.
  - I started implementing this route but quickly realized I was
    computing the same information that would be computed later on
    (e.g. "is this job a single process" and "is this job's statement a
    function"). Specifically, I was computing data that
    populate_job_process would end up computing later anyway.
    Additionally, this added some weird complexities to the job system
    (after the change there were two job id lists AND an additional
    flag that had to be taken into consideration).
- Once a function is about to be executed, we release the current
  job's job id if the prerequisites are satisfied (which at this point
  have been fully computed).
  - I opted for this solution since it seems cleaner. In this
    implementation, "releasing a job id" is done both by calling
    `release_job_id` and by setting the internal job_id member variable
    to -1. The former operation allows subsequent child jobs to reuse
    that same job id (so e.g. the situation described in Motivation
    doesn't occur), and the latter ensures that no other job / job id
    infrastructure will interact with these jobs, because valid jobs
    have positive job ids. The second operation causes job_id to become
    non-const, which leads to the list of code changes outside of
    `exec.c` (i.e. a codemod from `job_t::job_id` -> `job_t::job_id()`
    and moving the old member variable to a non-const private
    `job_t::job_id_`; see the sketch below).
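A rough sketch of the shape of that change; names follow the
description above, but the actual fish types differ:
```
using job_id_t = int;

// Returns the id to the free pool so child jobs can reuse it (stub).
static void release_job_id(job_id_t jid) { (void)jid; }

class job_t {
   public:
    job_id_t job_id() const { return job_id_; }

    // Called when this job is a single foreground function invocation.
    void mark_internal() {
        release_job_id(job_id_);  // let subsequent jobs reuse this id
        job_id_ = -1;  // negative id: skipped by `jobs` and id lookups
    }

   private:
    job_id_t job_id_ = -1;  // formerly a const public `job_t::job_id`
};
```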
Note: It's very possible I missed something and setting the job id to
-1 will break some other infrastructure; please let me know if so!
I tried to run `make/ninja lint`, but a bunch of unrelated issues
appeared (e.g. `fatal error: 'config.h' file not found`). I did
successfully run clang-format (`git clang-format -f`) and the tests,
though.
This PR closes #6053.
It looks like the last status already contains the signal that
cancelled execution.
Also make `fish -c something` always return the last exit status of
"something", instead of a hardcoded 127 if it exited or was signalled.
Fixes #6444
This was previously required so that, if there was a redirection to a
file, we would fork a process to create the file even if there was no
output. For example `echo -n >/tmp/file.txt` would have to create
file.txt even though it would be empty.
However, we now open the file before fork, so we no longer need
special logic around this.
user_supplied was used to distinguish IO redirections which were
explicit from those that came about through "transmogrification." But
transmogrification is no more. Remove the flag.
Previously, if the user control-C'd out of a process, we would set a
bogus exit status in the process, but this was difficult to observe
because we would be cancelling anyway. Now set it properly.
parser_t::eval indicates whether there was a parse error. It can be
easily confused with the status of the execution. Use a real type to
make it more clear.
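A hypothetical sketch of such a result type; the actual fish type may
be shaped differently:
```
#include <string>

// A named result cannot be mistaken for the executed command's status.
enum class eval_result_t {
    ok,           // the source parsed and was executed
    parse_error,  // the source failed to parse; nothing was executed
};

// was: int eval(const std::string &src);
eval_result_t eval(const std::string &src);
```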
This adds a test for the obscure case where an fd is redirected to
itself. This is tricky because the dup2 will not clear the CLOEXEC bit.
So do it manually; also, posix_spawn can't be used in this case.
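A sketch of that edge case, assuming a hypothetical apply_redirect
helper; per POSIX, dup2(fd, fd) is a no-op that leaves FD_CLOEXEC set:
```
#include <fcntl.h>
#include <unistd.h>

// Apply a redirection of src_fd onto target_fd in the child.
static void apply_redirect(int src_fd, int target_fd) {
    if (src_fd == target_fd) {
        // dup2 would be a no-op here and would NOT clear FD_CLOEXEC,
        // so clear the flag by hand.
        int flags = fcntl(target_fd, F_GETFD);
        if (flags >= 0 && (flags & FD_CLOEXEC)) {
            fcntl(target_fd, F_SETFD, flags & ~FD_CLOEXEC);
        }
    } else {
        dup2(src_fd, target_fd);
    }
}
```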
Prior to this fix, a file redirection was turned into an io_file_t.
This is annoying because everywhere we want to apply the redirection,
we might fail due to open() failing. Switch to opening the file at the
point we resolve the redirection spec. This will simplify a lot of
code.
Prior to this change, a process, after being constructed by
parse_execution but before being executed, was given a list of
io_data_t redirections. The problem is that redirections have a
sensitive ownership policy because they hold onto fds. This made it
rather hard to reason about fd lifetime.
Change these to redirection_spec_t. This is a textual description of a
redirection after expansion. It does not represent an open file, so
its lifetime is no longer important.
This enables files to be held only on the stack, so they are no longer
owned by a process of indeterminate lifetime.
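An illustrative shape for such a spec; field names here are guesses
based on the description, not the real fish declaration:
```
#include <string>

enum class redirection_mode_t { overwrite, append, input, fd };

// A textual description of a redirection after expansion. It holds no
// fd, so its lifetime is trivial; open() happens when it is resolved.
struct redirection_spec_t {
    int fd;                   // the fd being redirected, e.g. 1 for '>'
    redirection_mode_t mode;  // how to open (or dup) the target
    std::string target;       // expanded target: a path or an fd number
};
```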
fish has to ensure that the pipes it creates do not conflict with any
explicit fds named in redirections. Switch this code to using
autoclose_fd_t to make the ownership logic more explicit, and also
introduce fd_set_t to reduce the dependence on io_chain_t.
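A minimal sketch of an RAII fd wrapper in the spirit of
autoclose_fd_t, making ownership explicit:
```
#include <unistd.h>

class autoclose_fd_t {
   public:
    explicit autoclose_fd_t(int fd = -1) : fd_(fd) {}
    autoclose_fd_t(autoclose_fd_t &&rhs) : fd_(rhs.fd_) { rhs.fd_ = -1; }
    autoclose_fd_t(const autoclose_fd_t &) = delete;  // sole owner
    int fd() const { return fd_; }
    void close() {
        if (fd_ >= 0) ::close(fd_);
        fd_ = -1;
    }
    ~autoclose_fd_t() { close(); }

   private:
    int fd_;
};
```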
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
    begin
        echo a
    end < file.txt
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes #6397
Currently a job needs to know three things about its "parents":
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in somewhat separate awkward ways. Collapse them
into a single new type job_lineage_t.
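A rough sketch of the consolidated type; the field types are
simplified stand-ins (fish uses its own io_chain_t and friends):
```
#include <sys/types.h>

#include <memory>
#include <vector>

struct job_lineage_t {
    // 1. fds from redirections on the enclosing block or function.
    std::vector<int> block_io;
    // 2. pgid of the parent job, or -1 if there is none.
    pid_t parent_pgid = -1;
    // 3. whether the parent job has been fully constructed.
    std::shared_ptr<bool> parent_constructed;
};
```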
In preparation for concurrent execution, invert the control of
function and block execution. Allow a process to return an
std::function that performs the execution. This can be run on either
the main thread or a background thread (eventually).
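Illustrative only: the process yields a closure that performs the
execution, and the caller chooses when and where to run it:
```
#include <functional>

int main() {
    // The "performer": evaluating the function/block body is deferred
    // into a closure instead of happening inline.
    std::function<int()> performer = []() -> int {
        // ... evaluate the function/block body here ...
        return 0;  // the process status
    };
    // Today this runs on the main thread; eventually it could be
    // handed to a background thread.
    return performer();
}
```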
This adds initial support for statements with prefixed variable
assignments. Statements like this are supported:
    a=1 b=$a echo $b # outputs 1
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right-hand side may be any valid string token, like a command
substitution or a brace expansion.
Since `a=* foo` is equivalent to `begin; set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior; e.g. the command
below can safely be used to check whether a directory is empty:
    x=/nothing/{,.}* test (count $x) -eq 0
Generic file completion is done after the equal sign, so for example
pressing tab after something like `HOME=/` completes files in the
root directory.
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(but the git completion does not use the variable as of now).
The variable assignment is highlighted like an argument.
Closes #6048
This adds support for `fish_trace`, a new variable intended to serve
the same purpose as `set -x` in bash. Setting this variable to
anything non-empty causes execution to be traced. In the future we may
give more specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events
are also not traced because that would be noisy; autoloading, however,
is.
Fixes #3427
Soon we will have more complicated logic around whether to call
tcsetpgrp. Prepare to centralize that logic by passing in the new
terminal owner pgrp, instead of having child_setup_process make the
decision.
When executing a job, if the first process is fish internal, then have
fish claim the job's pgroup.
The idea here is that the terminal must be owned by a pgroup containing
the process reading from the terminal. If the first process is fish
internal (a function or builtin) then the pgroup must contain the fish
process.
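A sketch of handing over the terminal, assuming a hypothetical helper
name; the pgid passed must be the pgroup containing the reader:
```
#include <stdio.h>
#include <unistd.h>

// Give the terminal on stdin to the given process group. If the job's
// first process is fish-internal, pgid must be fish's own pgroup.
static void give_terminal_to(pid_t pgid) {
    if (tcsetpgrp(STDIN_FILENO, pgid) == -1) {
        perror("tcsetpgrp");  // e.g. EPERM or ENOTTY
    }
}
```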
This is a bit of a workaround for the behavior where the first process
that executes in a job becomes the process group leader. If there's a
deferred process, then we will execute processes out of order, so the
pgroup can be wrong. Fix this by setting the process group leader
explicitly to fish when necessary.
Fixes #5855
25afc9b377 made this unnecessary by
having child processes wait for a signal after fork(), but this change
was later reverted. If we artificially slow down fish (e.g. with a sleep)
after the fork call, we see commands getting backgrounded by mistake.
Put back the tcsetpgrp() call.
Prior to this fix, a function_block stored a process_t, which was only used
when printing backtraces. Switch this to an array of arguments, and make
various other cleanups around null terminated argument arrays.
This runs build_tools/style.fish, which runs clang-format on C++, fish_indent on fish and (new) black on python.
If anything is wrong with the formatting, we should fix the tools, but automated formatting is worth it.
Prior to this change, fish used a global flag to decide if we should check
for changes to universal variables. This flag was then checked at arbitrary
locations, potentially triggering variable updates and event handlers for
those updates; this was very hard to reason about.
Switch to triggering a universal variable update at a fixed location,
after running an external command. The common case is that the variable
file has not changed, which we can identify with just a stat() call, so
this is pretty cheap.
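A sketch of the cheap check, assuming illustrative names; compare the
file's stat identity against the last-seen values and only re-read on
a mismatch:
```
#include <sys/stat.h>

// Returns true if the uvar file may have changed since *last was
// recorded, updating *last along the way. Errors count as "changed".
static bool uvars_file_maybe_changed(const char *path, struct stat *last) {
    struct stat buf = {};
    if (stat(path, &buf) != 0) return true;
    bool changed = buf.st_mtime != last->st_mtime ||
                   buf.st_ino != last->st_ino || buf.st_size != last->st_size;
    *last = buf;
    return changed;
}
```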
This reverts commit cdce8511a1.
This change was unsafe. The prior version (now restored) took the lock and
then copied the data. By returning a reference, the caller holds a
reference to data outside of the lock.
This function isn't worth optimizing. Hardly any functions use this
facility, and for those that do, they typically just capture one or two
variables.
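A schematic illustration of why the reference was unsafe, with
made-up names (not fish's actual code):
```
#include <map>
#include <mutex>
#include <string>

static std::mutex vars_lock;
static std::map<std::string, std::string> inherit_vars;

// Safe (restored behavior): the copy is made while the lock is held.
std::map<std::string, std::string> get_inherit_vars_copy() {
    std::lock_guard<std::mutex> guard(vars_lock);
    return inherit_vars;
}

// Unsafe (reverted behavior): the lock is released on return, but the
// caller still holds a reference into the data the lock protected.
const std::map<std::string, std::string> &get_inherit_vars_ref() {
    std::lock_guard<std::mutex> guard(vars_lock);
    return inherit_vars;
}
```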
* Convert `function_get_inherit_vars()` to return a reference to the
(possibly) existing map, rather than a copy;
* Preallocate and reuse a static (read-only) map for the (very) common
case of no inherited vars;
* Pass references to the inherit vars map around thereafter, never
triggering the map copy (or even move) constructor.
NB: If it turns out the reference is unsafe, we can switch the inherit vars
to be a shared_ptr and return that instead.