In preparation for using wait handles in --on-process-exit events, factor
wait handles into their own wait handle store. Also switch them to
per-process instead of per-job, which is a simplification.
This is preparing to address the problem where fish cannot wait on a
reaped job, because it only looks at the active job list. Introduce the
idea of a "wait handle," which is a thing that `wait` can use to check if
a job is finished. A job may produce its wait handle on demand, and
parser_t will save the wait handle from wait-able jobs at the point they
are reaped.
This change merely introduces the idea; the next change makes builtin_wait
start using it.
This goes to a separate file because that makes option parsing easier
and allows profiling both startup and the actual run at the same time.
The "normal" profile now contains only the profile data of the actual
run, which is much more useful - you can now profile a function by
running
fish -C 'source /path/to/thing' --profile /tmp/thefunction.prof -c 'thefunction'
and won't need to filter out extraneous information.
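For example, assuming the startup profile is requested with the new
--profile-startup flag, both profiles can now be collected in one invocation:

fish --profile-startup /tmp/startup.prof --profile /tmp/run.prof -C 'source /path/to/thing' -c 'thefunction'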
Detect recursive calls to builtin complete and the internal completion in
the same place.
In 0a0149cc2 (Prevent infinite recursion when completion wraps variable assignment)
we stopped printing an error when completing certain aliases like:
alias vim "A=B vim"
But we also gave no completions.
We could make this case work, but I think that trying to salvage situations
like this one is way too complex. Instead, let the user know by printing an
error. Not sure if the style of the error fits.
We could add some heuristic to alias to not add --wraps in some cyclic cases.
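For reference, a rough sketch of what the alias above expands to (description
omitted), which is where the --wraps cycle comes from:

function vim --wraps 'A=B vim'
    A=B vim $argv
end

Completing `vim` follows --wraps to 'A=B vim'; skipping the variable assignment
leads right back to completing `vim`.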
This can easily lead to an infinite loop, if a variable handler
triggers a repaint and the variable is set in the prompt, e.g. some of
the git variables.
A simple way to reproduce:
function fish_mode_prompt
    commandline -f repaint
end
Repainting executes the mode prompt, which triggers a repaint, which
triggers the mode prompt, ....
So we just set a flag and check it.
Fixes #7324.
Prior to this fix, the `exit` command would set a global variable in the
reader, which parse_execution would check. However in concurrent mode you
may have multiple scripts being sourced at once, and 'exit' should only
apply to the current script.
Switch to using a variable in the parser's libdata instead.
This can be used to determine whether the previous command produced a real
status, or just carried over the status from the command before it.
Backgrounded commands and variable assignments will not increment
status_generation; all other commands will.
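A sketch of the effect, assuming the counter is readable from scripts as
$status_generation; the exact numbers are illustrative:

echo $status_generation    # prints, say, 14 (the echo itself then bumps it to 15)
true                       # a normal command: bumps it to 16
echo $status_generation    # prints 16 (and bumps to 17)
sleep 1 &                  # backgrounded: produces no status, no bump
echo $status_generation    # prints 17 - only the previous echo advanced it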
This is a set of miscellaneous cleanup for profiling.
An errant newline, introduced with the new ast, has been removed from 'if'
statement output.
Switch from storing unique_ptr to a deque, which allocates less.
Collapse "parse" and "exec" times into just a single value "duration". The
"parse" time no longer makes sense, as we now parse ahead of time.
These are events that have been queued but not yet fired. There's no
reason to modify the events after creating them. Mark them as const
to ensure that doesn't happen.
When fish receives a "cancellation inducing" signal (SIGINT in particular)
it has to unwind execution - for example while loops or whatever else is
executing. There are two ways this may come about:
1. The fish process received the signal
2. A child process received the signal
An example of the second case is:
some_command | some_function
Here `some_command` is the tty owner and so will receive control-C, but
then fish has to cancel function execution.
Prior to this change, these were handled uniformly: both would just set a
cancellation signal inside the parser. However in the future we will have
multiple parsers and it may not be obvious which one to set the flag in.
So instead distinguish these cases: if a process receives SIGINT we mark
the signal in its job group, and if fish receives it we set a global
variable.
Initially I wanted to pick a different name to avoid confusion with
process groups, but really job trees *are* process groups. So name them
to reflect that fact.
Also rename "placeholder" to "internal" which is clearer.
Job trees come in two flavors: "placeholders" for jobs which are only fish
functions, and non-placeholders which need to track a pgid. This adds
logic to allow a job to decide whether its parent's job tree is appropriate,
and to allocate a new tree if not.
Give string expansion an (optional) parent pgroup. This is threaded all
the way into eval(). This ensures that in a mixed pipeline like:
cmd | begin ; something (cmd2) ; end
that cmd2 and cmd have the same pgroup.
Add a test to ensure that command substitutions inherit pgroups
properly.
Fixes #6624
f8ba0ac5bf introduced a bug where INT handlers would themselves be
cancelled, due to the signal. Defer processing handlers until the
parser is ready to execute more fish script.
Fixes the interactive case of #6649.
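For instance, a minimal handler like this (the function name is just a
placeholder) should now run to completion rather than being cancelled by the
very signal it observes:

function notify_int --on-signal SIGINT
    echo 'caught SIGINT'
end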
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
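A minimal psub-style sketch of the pattern (all names here are hypothetical):
inside a command substitution, register a cleanup handler against the calling
job.

function mytemp
    set -l file (mktemp)
    # fires once, when the job containing this command substitution exits
    function _mytemp_cleanup --on-job-exit caller --inherit-variable file
        command rm -f $file
        functions -e _mytemp_cleanup
    end
    echo $file
end

grep foo (mytemp)    # _mytemp_cleanup runs after this job exits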
Prior to this fix, fish was rather inconsistent in when $status gets set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
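A quick interactive sketch of the difference:

echo "$foo["     # the expansion fails
echo $status     # now reliably non-zero; previously it was left untouched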
This commit recognizes an existing pattern: many operations need some
combination of a set of variables, a way to detect cancellation, and
sometimes a parser. For example, tab completion needs a parser to execute
custom completions and the variable set, and should cancel on SIGINT. Background
autosuggestions don't need a parser, but they do need the variables and
should cancel if the user types something new. Etc.
This introduces a new triple operation_context_t that wraps these concepts
up. This simplifies many method signatures and argument passing.
Previously, the block stack was a true stack. However in most cases, you
want to traverse the stack from the topmost frame down. This is awkward
to do with range-based for loops.
Switch it to pushing new blocks to the front of the block list.
This simplifies some traversals.
parser_t::eval indicates whether there was a parse error. It can be
easily confused with the status of the execution. Use a real type to
make it more clear.
Currently a job needs to know three things about its "parents:"
1. Any IO redirections for the block or function containing this job
2. The pgid for the parent job
3. Whether the parent job has been fully constructed (to defer self-disown)
These are all tracked in somewhat separate awkward ways. Collapse them
into a single new type job_lineage_t.
This adds initial support for statements with prefixed variable assignments.
Statements like this are supported:
a=1 b=$a echo $b # outputs 1
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right hand side may be any valid string token, like a command
substitution, or a brace expansion.
Since `a=* foo` is equivalent to `begin set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior, e.g. below command
can safely be used to check if a directory is empty.
x=/nothing/{,.}* test (count $x) -eq 0
Generic file completion is done after the equal sign, so for example
pressing tab after something like `HOME=/` completes files in the
root directory.
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(but the git completion does not use the variable as of now).
The variable assignment is highlighted like an argument.
Closes #6048
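Spelling out the scoping equivalence described above:

a=1 b=$a echo $b     # prints 1
# roughly the same as:
begin
    set -lx a 1
    set -lx b $a
    echo $b          # prints 1
end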
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` as in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because that would be noisy; however, autoloading is.
Fixes #3427
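For example, to trace a single call (some_function is just a placeholder):

set -g fish_trace 1
some_function arg
set -e fish_trace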
In e167714899 we allowed recursive calls
to complete. However, some completions use infinite recursion in their
completions and rely on `complete` to silently stop as soon as it is
called recursively twice without parameter (thus completing the
current commandline). For example:
complete -c su -s c -xa "(complete -C(commandline -ct))"
su -c <TAB>
Infinite recursion happens because (commandline -ct) is an empty list,
which would print an error message. This commit explicitly detects
such recursive calls where `complete` has no parameter and silently
terminates. This enables above completion (like before raising the
recursion limit) while still allowing legitimate cases with limited
recursion.
Closes #6171
We used to have a global notion of "is the shell interactive" but soon we
will want to have multiple independent execution threads, only some of
which may be interactive. Start tracking this data per-parser.
To support distinct parsers having different working directories, we need
to keep the working directory alive, and also retain a non-path reference
to it.