It was always a bit ridiculous that argparse required `X-longflag` if
that "X" short flag was never actually used anywhere.
Since the short letter is for getopt's benefit, we can hack around
this with our old friend: Unicode Private Use Areas.
We have a counter, starting at 0xE000 and going to 0xF8FF, that counts
up for all options that don't have a short flag and provides one. This
gives us up to 6400 long-only options.
6.4K should be enough for everybody.
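For illustration, an optspec with no short letter now works on its own (a hedged sketch; the option names here are made up):
```fish
function report
    # "verbose" has no short letter; internally it gets a private-use codepoint.
    argparse verbose 'o/output=' -- $argv
    or return
    set -q _flag_verbose; and echo "being verbose"
end
report --verbose --output=log.txt
```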
Prior to this change, a glob like `**/file.txt` would only match
`file.txt` in subdirectories; the `**` must match at least one directory.
This is historical behavior.
With this change we move a little closer to bash's implementation by
allowing a literal `**` segment to match in the current directory. That
is, `**/foo` will match both `foo` and `bar/foo`, while `b**/foo` will
only match `bar/foo`.
Fixes#7222.
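A small illustration of the new matching (hypothetical files):
```fish
# Given files ./foo and ./bar/foo:
echo **/foo    # now matches both foo and bar/foo
echo b**/foo   # still matches only bar/foo
```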
Before running a command, or before importing a command from bash history,
we perform error checking. As part of error checking we expand commands
including variables and globs. If the glob is very large, like `/**`, then
we could hang expanding it.
One fix would be to limit the amount of expansion from the glob, but
instead let's just not expand command globs when performing error checking.
Fixes#7407
This would tell you a function was "Defined in - @ line 1" for every
function defined via `source`.
Really, ideally we'd figure out where the *source* call was, but that's
much more complicated, so we just give a comprehensible message.
This matches what we do in --profile's output:
```
> source /home/alfa/.config/fish/config.fish
--> set -gx XDG_CACHE_HOME /home/alfa/.cache
--> set -gx XDG_CONFIG_HOME /home/alfa/.config
--> set -gx XDG_DATA_HOME /home/alfa/.local/share
```
instead of
```
+ source /home/alfa/.config/fish/config.fish
+++ set -gx XDG_CACHE_HOME /home/alfa/.cache
+++ set -gx XDG_CONFIG_HOME /home/alfa/.config
+++ set -gx XDG_DATA_HOME /home/alfa/.local/share
```
This increases a 100ms timeout to 200ms, because we've hit it on
Github Actions:
```
INPUT 3904.65 ms (Line 223): set -g fish_escape_delay_ms 100\n
OUTPUT +1.74 ms (Line 224): \rprompt 25>
INPUT +0.71 ms (Line 230): echo abc def
INPUT +0.57 ms (Line 231): \x1b
INPUT +0.57 ms (Line 232): t\r
OUTPUT +2.41 ms (Line 234): \r\ndef abc\r\n
OUTPUT +1.63 ms (Line 234): \rprompt 26>
INPUT +0.75 ms (Line 239): echo ghi jkl
INPUT +0.57 ms (Line 240): \x1b
INPUT +134.98 ms (Line 242): t\r
```
In other places it decreases sleeps where we just wait for a timeout to elapse, in which case we don't need much longer than the timeout.
And again clang-format does something I don't like:
- if (found != end && std::strncmp(found->name, name, len) == 0 && found->name[len] == 0) return found;
+ if (found != end && std::strncmp(found->name, name, len) == 0 && found->name[len] == 0)
+ return found;
I *know* this is a bit of a long line. I would still quite like having
no brace-less multi-line if *ever*. Either put the body on the same
line, or add braces.
Blergh
E.g. if we do `string match -q`, and we find a match, nothing about
the input can change anything, so we quit early.
This is mainly useful for performance, but it also allows `string`
with `-q` to be used with infinite input (e.g. `yes`).
Alternative to #7495.
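A rough sketch of the infinite-input case this enables:
```fish
# Quits as soon as the first line matches, despite the endless input.
yes | string match -q y
echo $status    # 0
```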
Currently a bit limited, unfortunately printf's `%a` specifier is
absolutely unreadable.
So we add `hex` and `octal` with `0x` and `0` prefixes respectively,
and also take a number but currently only allow 16 and 8.
The output is truncated to integer, so scale values other than 0 are
invalid and 0 is implied.
The docs mention this may change.
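Assuming the option is spelled `--base`, usage looks roughly like:
```fish
math --base hex 255      # 0xff
math --base octal 8      # 010
math --base 16 255       # also 0xff; only 16 and 8 are accepted as numbers
```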
The '--import' flag was used for importing named capture groups, but it
was decided to always import them unconditionally. This flag was causing
the tests to fail.
Prior to this fix, when key binding is a script command (i.e. not a
readline command), fish would run that key binding using fish's shell
tty modes. Switch to using the external tty modes. This re-fixes
issue #2214.
With the upcoming fix to place the tty in external-proc mode, add a sleep
which resolves a race between emitting a newline and restoring it to shell
mode.
Prior to this change, when a process resumes because it is brought back
to the foreground, we would reset the terminal attributes to shell mode.
This fixed #2114 but subtly introduced #7483.
This backs out 9fd9f70346, re-introducing #2114 and re-fixing #7483.
A followup fix will re-fix #2114; these are broken out separately for
bisecting purposes.
Fixes#7483.
I *think* this might sometimes (on CI) be eating the prompt, so that the actual `prompt`
part of `expect_prompt` doesn't find anything.
On Github Actions we see things like:
```
Testing file pexpects/generic.py ... Failed to match pattern: prompt 5
generic.py:35: timeout from expect_prompt("echo .history.*")
[...]
OUTPUT +1.08 ms (Line 31): \rprompt 4>
INPUT +0.35 ms (Line 34): echo $history[1]\n
OUTPUT +1.58 ms (Line 35): echo $history[1]\r\necho $history[1]\r\n⏎ \r⏎ \r\rprompt 5>
```
so the prompt *is* printed, it's just not correctly matched.
Apparently on macOS SIGTSTP (from control-Z) causes `read()` to return
EINTR.
This means `cat | cat` will exit as soon as it's backgrounded and
brought back.
So instead we use `sleep`, which won't read(), and therefore is
impervious to these puny attacks.
See discussion in #7447.
The classic mistake: Some of these have a bit of a delay, but it's supposed to
be *under* the timeout, so it needs to be *shorter* not longer to
increase the slack.
We just had the following output on Github Actions:
INPUT +0.94 ms (Line 34): echo ghi jkl
INPUT +0.72 ms (Line 35): \x1b
INPUT +63.12 ms (Line 37): t\r
The default escape delay is 30ms, that had 60ms between an escape and
a tab, so it missed it.
So: We have to increase the delay for CI's benefit. Let's try with
80ms, because otherwise we'd have to bump up other timeouts and the
bind tests take long enough as it is.
Github Action's macOS builds are even more resource-starved (even tho
they use the same provider?) than
Travis, but Travis is unusable to us now, so....
In some cases the completion we come up with may be unexpected, e.g.
if you have files like
/etc/realfile
and
/etc/wrongfile
and enter "/etc/gile", it will accept "wrongfile" because "g" and
"ile" are in there - it's a substring insertion match.
The underlying cause was a typo, so it should be easy to go back.
So we do a bit of magic and let "cancel" undo, but only right after a
completion was accepted via complete or complete-and-search.
That means that just reflexively pressing escape would, by default, get you back to
the old token and let you fix your mistake.
We don't do this when the completion was accepted via the pager,
because 1. there's more of a chance to see the problem there and 2.
it's harder to redo in that case.
Fixes#7433.
This was typically overridden by "too many/few arguments", but it's
actually incorrect:
sin(55
has the correct number of arguments to `sin`, but it's lacking
the closing `)`.
We've heard news of this regressing, so let's add the test that should
have been there already (mea culpa!).
Because we now use POSIX_VDISABLE, this should also work in tandem
with ctrl-space (which sends NUL), but we can't test *that* because
some systems might not have POSIX_VDISABLE.
This switch is no longer necessary when only one command is given.
Internally completions are stored separately for each command,
so we only ever print one command name per "complete" line anyway.
These aliases seem to be common, see #7389 and others. This prevents
recursion on that example, so `alias ssh "env TERM=screen ssh"` will just
have the same completions as ssh.
Checking the last token is a heuristic which hopefully works for most
cases. Users are encouraged to use functions instead of aliases.
Ensure that the increment= param is set via keyword, not via positional arg.
This mistake was masking a bug where the "^a b c" match was not being tested,
because it was being set as the value for increment!
This switches the 'increment' param from "after" to "before." Instead
of expect_prompt saying if the next prompt will be incremented, each
call site says if it should have been incremented since the last prompt.
I am not sure why this worked, actually.
These tests did not have $fish set anywhere, and on my fresh OpenBSD
VM it ended up calling whatever that calls "fish" (I think it's that
"Go fish!" game?).
If the padding is not divisible by the char's width without remainder,
we pad the remainder with spaces, so the total width of the output is correct.
Also add completions, changelog entry, adjust documentation, add examples
with emoji and some tests. Apply some minor style nitpicks and avoid extra
allocations of the input strings.
Prior to this change, tab completing with a variable assignment like
`VAR=val cmd<tab>` would parse out and apply VAR=val, then recursively
invoke completions. This caused some awkwardness around the wrap chain -
if a wrapped command had a variable completion we risked infinite
recursion. A secondary problem is that we would run any command
substitutions inside variable assignment, which the user does not expect
to run until pressing enter.
With this change, we explicitly track variable assignments encountered
during tab completion, including both those explicitly given on the
command line and those found during wrap chain walk. We then apply them
while suppressing command substitutions.
In preparation for applying variable assignments (VAR=VAL cmd), separate
them out from the command when performing completions. This includes both
those that the user typed, and any that come about through
completion --wraps.
The "wrap chain" refers to a sequence of commands which wrap other
commands, for completion purposes. One possibility is that a wrap chain
will produce a combinatorial explosion or even an infinite loop, so there
needs to be logic to prevent that. Part of that logic is encapsulated in a
visited set (wrap_chain_visited_set_t) to prevent exploring the same item
twice.
Prior to this change, we stored pairs (command, wrapped_command). But we
only really need to store the wrapped command. Switch to that.
One consequence is that if a command wraps another command in more than
one way, we won't explore both ways. This seems unlikely in practice.
Detect recursive calls to builtin complete and the internal completion in
the same place.
In 0a0149cc2 (Prevent infinite recursion when completion wraps variable assignment)
we don't print an error when completing certain aliases like:
alias vim "A=B vim"
But we also gave no completions.
We could make this case work, but I think that trying to salvage situations
like this one is way too complex. Instead, let the user know by printing an
error. Not sure if the style of the error fits.
We could add some heuristic to alias to not add --wraps in some cyclic cases.
This reads any additional positional arguments given to `fish -c` into
$argv.
We don't handle the first argument specially (as `$0`) as that's confusing and
doesn't seem very useful.
Fixes#2314.
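For example:
```fish
fish -c 'echo $argv[2]' first second third
# prints "second" - the first argument is not treated as $0
```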
This makes history searches case-insensitive, unless the search string
contains an uppercase character.
This is what vim calls "smartcase".
Fixes#7273.
Closes#7344
Apply a targeted fix to the place where complete() is called to handle nested
variable assignments. Sadly, reporting an error is probably not okay here,
because people might legitimately use aliases like:
alias vim "A=B command vim"
This is all a bit ugly, and I hope to find a cleaner solution. Supporting
completions on commandlines like `x=$PWD cd $x/ ` is a nice feature but it
comes with some complexity.
"function --argument" is not a thing, it's "--argument-names". This only
accidentally works because our getopt is awful and allows abbreviated
long options.
Similarly, one argparse test used "--d" instead of "-d" or "--def".
Since builtins don't actually have the streams connected, but instead
read input via the io_streams_t objects, this would just always say
what *fish's* fds were.
Instead, pass along some of the stream data to check those
specifically - nobody cares that `test`s fd 0 *technically* is stdin.
What they want to know is that, if they used another program in that
place, it would connect to the TTY.
This is pretty hacky - I abused static variables for this, but
since it's two bools and an int it's probably okay.
See #1228.
Fixes#4766.
This re-enables the test that eval retains pgroups, from #6806.
The old version was racey and failed a lot. In the new version, we use
temp files to resolve the race.
The case for symlinked directories being duplicated a lot isn't there,
but there *is* a usecase for adding the symlink rather than the
target, and that's homebrew.
E.g. homebrew installs ruby into /usr/local/Cellar/ruby/2.7.1_2/bin,
and links to it from /usr/local/opt/ruby/bin. If we add the target, we
would miss updates.
Having path entries that point to the same location isn't a big
problem - it's a path lookup, so it takes a teensy bit longer. The
canonicalization is mainly so paths don't end up duplicated via weird
spelling and so relative paths can be used.
Taken from GNU realpath, this one makes realpath not resolve symlinks.
It still makes paths absolute and handles duplicate and trailing
slashes.
(useful in fish_add_path)
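Assuming the flag is spelled like GNU's (`-s`/`--no-symlinks`), a sketch using the homebrew layout mentioned above:
```fish
# With /usr/local/opt/ruby being a symlink into the Cellar:
realpath --no-symlinks /usr/local//opt/ruby/   # /usr/local/opt/ruby
realpath /usr/local/opt/ruby                   # /usr/local/Cellar/ruby/2.7.1_2
```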
With a commandline like
```
a b c d
```
and the cursor at the beginning, this would eat "a b", which isn't a
sensible bigword.
Bigword should be "a word, with optional leading whitespace".
This was caused by an overly zealous state-machine that always ate one
char and only *then* started eating leading whitespace.
Instead eat *a character*, and if it was whitespace go on eating
whitespace, and if it was a printable go straight to only eating
printables.
Fixes#7325.
This can easily lead to an infinite loop, if a variable handler
triggers a repaint and the variable is set in the prompt, e.g. some of
the git variables.
A simple way to reproduce:
function fish_mode_prompt
commandline -f repaint
end
Repainting executes the mode prompt, which triggers a repaint, which
triggers the mode prompt, ....
So we just set a flag and check it.
Fixes#7324.
Currently, completions have to be specified like
```fish
complete -c foo -l opt
```
while
```fish
complete foo -l opt
```
just complains about there being too many arguments.
That's kinda useless, so we just assume if there is one left-over
argument that it's meant to be the command.
Theoretically we could also use *all* the arguments as commands to
complete, but that seems unlikely to be what the user wants.
(I don't think multi-command completions really happen)
Currently only `complete` will list completions, and it will list all
of them.
That's a bit ridiculous, especially since `complete -c foo` just does nothing.
So just make `complete -c foo` list all the completions for `foo`.
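So, roughly:
```fish
complete -c git    # now prints just the completions defined for git
complete           # still prints every completion fish knows about
```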
Previously, when a command wasn't found, fish would emit the
"fish_command_not_found" *event*.
This was annoying as it was hard to override (the code ended up
checking for a function called `__fish_command_not_found_handler`
anyway!), the setup was ugly,
and it's useless - there is no use case for multiple command-not-found handlers.
Instead, let's just call a function `fish_command_not_found` if it
exists, or print the default message otherwise.
The event is completely removed, but because a missing event is not an error
(MEISNAE in C++-speak) this isn't an issue.
Note that, for backwards-compatibility, we still keep the default
handler function around even tho the new one is hard-coded in C++.
Also, if we detect a previous handler, the new handler just calls it.
This way, the backwards-compatible way to install a custom handler is:
```fish
function __fish_command_not_found_handler --on-event fish_command_not_found
# do a little dance, make a little love, get down tonight
end
```
and the new hotness is
```fish
function fish_command_not_found
# do the thing
end
```
Fixes#7293.
Now command, jobs, type, abbr, builtin, functions and set take `-q` to
query for existence, but the long option is inconsistent.
The first three use `--quiet`, the latter use `--query`. Add `--query`
to the first three, but keep `--quiet` around.
Fixes#7276.
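In other words (sketch):
```fish
command --query git; and echo "git is available"
type --query fish_prompt
```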
Just as `math "bitand(5,3)"` and `math "bitor(6,2)"`.
These cast to long long before doing their thing,
so they truncate to an integer, producing weird results with floats.
That's to be expected because float representation is *very*
different, and performing bitwise operations on floats feels quite useless.
Fixes#7281.
Prior to this change, if we saw more than one repaint readline command in
a row, we would try to ignore the second one. However this was never the
right thing to do since sometimes we really do need to repaint twice in a
row (e.g. the user hits Ctrl+L twice). Previously we were saved by the
buginess of this mechanism but with the repainting refactoring we see
missing redraws.
Remove the coalescing logic and add a test. Fixes#7280.
Because TERM was set to something other than 'dumb', we were subject to
syntax highlighting and other interactive features that would affect the
output. In practice we were getting lucky timing-wise, but with upcoming
interactive changes syntax highlighting started to fail this test.
It could be nice to use a heuristic for this in future, but for now let's
stick to the old behavior so we can keep formatting scripts without occasional
bad formatting changes.
A heuristic could also be used to break lines after |, && or || but I don't
think there is much need for that at the moment.
Closes#7252
This concerns code like the following:
while true ; sleep 100; end
Here 'while' is a "simple block execution" and does not create a new job,
or get a pgid. Each 'sleep' however is an external command execution, and
is treated as a distinct job. (bash is the same way). So `while` and
`sleep` are always in different job groups.
The problem comes about if 'sleep' is cancelled through SIGINT or SIGQUIT.
Prior to 2a4c545b21, if *any* process got a SIGINT or SIGQUIT, then fish
would mark a global "stop executing" variable. This obviously prevents
background execution of fish functions.
In 2a4c545b21, this was changed so only the job's group gets marked as
cancelled. However in the case of one job group spawning another, we
weren't propagating the signal.
This adds a signal to parse_execution_context which the parser checks after
execution. It's not ideal since now we have three different places where
signals can be recorded. However it fixes this regression which is too
important to leave unfixed for long.
Fixes#7259
Might help figuring out where this times out on CI?
We're waiting *20 seconds* for the output to appear, there's no way
that's too slow. So maybe we're going too fast elsewhere?
This indents continuations after pipes and conjunctions if they contain
a newline.
Example:
cmd1 &&
cmd2
But it avoids the "double indent" if it indented unconditionally:
cmd1 | begin
cmd2
end
More work towards improving #7252
Prior to this change, when emitting gap text (comments, newlines, etc),
fish_indent would use the indentation of the text at the end of the gap.
But this has the wrong result for this case:
begin
command
# comment
end
as the comment would get the indent of the 'end'. Instead use the indent
computed for the gap text itself.
Addresses one case of #7252.
It's useless - `expect` has a timeout anyway, and it defaults to 5s,
so these 0.5s sleeps just mean it'll always take at least 0.5s.
Sometimes it is useful to let things settle before *sending* text, and
it would be nice to be able to set the timeout for each expect
separately, but just adding to the timeout isn't useful.
This one sometimes fails with a zombie detected, so I'm assuming it's
too fast for reaping to happen, so we add another 100ms sleep.
Yeah, this isn't great but...eh
This can be used to determine whether the previous command produced a real status, or just carried over the status from the command before it. Backgrounded commands and variable assignments will not increment status_generation, all other commands will.
This changes how fish attempts to protect itself from calling tcsetpgrp() too
aggressively. Recall that tcsetpgrp() will "force" itself, if SIGTTOU is
ignored (which it is in fish when job control is enabled).
Prior to this fix, we avoided SIGTTINs by only transferring the tty ownership
if fish was already the owner. This dated from a time before we had really
nailed down how pgroups should be assigned. Now we more deliberately assign a
job's pgroup so we don't need this conservative check.
However we still need logic to avoid transferring the tty if fish is not the
owner. The bad case is when job control is enabled while fish is running in the
background - here fish would transfer the tty and "steal" from the foreground
process.
So retain the checks of the current tty owner but migrate them to the point of
calling tcsetpgrp() itself.
Unfortunately this doesn't quite fix the issue with Pantheon
Terminal (#7913), as that somehow manages to re-set $VTE_VERSION by
the time littlecheck runs.
This switches fish_indent from parsing with parse_tree
to the new ast.
This is the most difficult transition because the new ast retains less
lexical information than the old parse tree. The strategy is:
1. Use parse_util_compute_indents to compute indenting for each token.
2. Compute the "gap text" between the text of significant tokens. This
contains whitespace, comments, etc.
3. "Fix up" the gap text while leaving the significant tokens alone.
I really kinda hate how insistent clang-format is to have line
breaks *IFF THE LINE IS TOO LONG*.
Like... lemme just add a break if it looks better, will you?
But it is the style at this time, so we shall tie an onion to our
belt.
A broken/missing optspec or `--` is a bug in the script using
argparse, an unknown option or invalid argument is a bug in using that script.
So in the former case print a stacktrace, because the person writing
the `argparse` call is at fault, in the latter don't.
Fixes#6703.
Note: This includes a super cheesy thing to print variable contents.
The expect version has one that's a bit more elaborate (featuring a
marker setup), but tbh that doesn't seem to be worth it.
If we do need it, we can add it, but it seems more likely we'd just do
`set -S`, or do it in a check instead.
With the new pexpect based framework, bind and pipeline expect tests can
be removed.
Amusingly the complete.fish check required the existence of bind.expect.
Fix the check at the same time.
Make it easier to use pexpect and to understand its error messages.
Switch to a style in tests using bound methods, which makes them
less noisy to write.
This adds a new interactive test framework based on Python's pexpect. This
is intended to supplant the TCL expect-based tests.
New tests go in `tests/pexpects/`. As a proof-of-concept, the
pipeline.expect test and the (gnarly) bind.expect test are ported to the
new framework.
There's a terrible number of fishscripts that start with
set path (dirname (status filename))
And that's really just a bit boring.
So let's let it be
set path (status dirname)
This is a function you can either execute once, interactively, or
stick in config.fish, and it will do the right thing.
Some options are included to choose some slightly different behavior,
like setting $PATH directly instead of $fish_user_paths, or moving
already existing components to the front/back instead of ignoring
them, or appending new components instead of prepending them.
The defaults were chosen because they are the most safe, and
especially because they allow it to be idempotent - running it again
and again and again won't change anything, it won't even run the
actual `set` because it skips that if all components are already in.
Fixes#6960.
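A sketch of typical usage (the exact flag spellings here are assumed):
```fish
# Prepend ~/.local/bin to $fish_user_paths; running it again changes nothing.
fish_add_path ~/.local/bin
# Or modify $PATH directly and move an existing entry to the front.
fish_add_path --path --move /usr/local/sbin
```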
Variables like $status and $history showed up in all scopes, including
universal, when querying with `set -q` or `set -S`.
This makes it so they all only count as set in global scope, because
we already only allow assignment to electric variables in global scope.
Fixes#7032
The default implementation will not print any output in that case, but this provides users with additional flexibility when it comes to customising the shell's behaviour.
This allows users to customise the behaviour of the shell by redefining the function. This is similar to how fish_title or fish_greeting behave, where the default implementation can be easily overridden.
The function receives as arguments the job id, command line, signal name and signal description.
Give string expansion an (optional) parent pgroup. This is threaded all
the way into eval(). This ensures that in a mixed pipeline like:
cmd | begin ; something (cmd2) ; end
that cmd2 and cmd have the same pgroup.
Add a test to ensure that command substitutions inherit pgroups
properly.
Fixes#6624
Changes it from
```
$fish_color_user: not set in local scope
$fish_color_user: set in global scope, unexported, with 1 elements
$fish_color_user[1]: length=3 value=|080|
$fish_color_user: set in universal scope, unexported, with 1 elements
$fish_color_user[1]: length=7 value=|brgreen|
```
(with the trailing empty line - not just a newline)
to
```
$fish_color_user: set in global scope, unexported, with 1 elements
$fish_color_user[1]: |080|
$fish_color_user: set in universal scope, unexported, with 1 elements
$fish_color_user[1]: |brgreen|
```
Prior to this fix, if job control is enabled but stdin is not a tty, we
would return an error from terminal_maybe_give_to_job which would cause us
to avoid waiting for the job. Instead just return notneeded.
Fixes#6573.
The description for an alias which already has escape sequences will
use backslash escapes for quoting; usually `string escape` can simply
quote it. Use a regex that accepts either escaping style.
Travis puts the commit message in an environment variable, so if it
contains the string `_flag` this would match TRAVIS_COMMIT_MESSAGE.
That happened in ca91c201c3, so the
tests failed.
We simply tighten the regex a little more, and make a commit message
that doesn't include the string.
This allows code of the form `if jobs -q $some_pid` in scripts to check whether a previously started job is still running. Previously this would return the correct value, but also print an error message.
The invalid argument errors will still be printed.
Added test cases for both.
Because `command ./somedir/somecommand` is okay.
Fixes test failure from aa304cbd3d.
Child directories in $PATH are still not suggested, as was the main
intention of the commit that introduced the tests:
8a3cf144f Don't include child directories of $PATH in completions.
We have now entirely switched the script tests to littlecheck.
Note: This adjusts the complete_directories test, because it removes a
directory that was created before by a .in test. There's no real
change in behavior.
This does require the test directory be cleaned, or the tests will fail.
test_util gets to stay for a while longer, because it sets up the
testing env (locale and such).
This, together with the other testX, really just tests some basic
syntax. So let's just call it "basic".
Note that this file uses escaped newlines on purpose, so restyling it
would currently break it. I'm not sure what the best thing to do here is.
Instead of invoking littlecheck.py independently for each file, pass
all files at once. This amortizes the Python startup cost, and reduces
the total test time by ~15 seconds (!).
This isn't quite the old-style test, but it checks some of the line
continuation stuff.
Note that littlecheck ignores leading whitespace, so testing the
actual indentation requires some more effort.
Things like
```fish
\
echo foo
```
or
```fish
echo foo; \
echo bar
```
are a formatting blunder and should be handled.
This makes it so the escaped newline is removed, and the
semicolon/token_type_end handling will then put the statements on
different lines.
One case this doesn't handle brilliantly is an escaped newline after a
pipe:
```fish
echo foo | \
cat
```
is turned into
```fish
echo foo | cat
```
which here works great, but in long pipelines can cause issues.
Pipes at the end of the line cause fish to continue parsing on the
next line, so this can just be written as
```fish
echo foo |
cat
```
for now.
This tries to see if quotes guard some expansion from happening. If it
detects a "weird" character it'll leave the quotes in place, even in
some cases where it might not trigger.
So
for i in 'c' 'color'
turns into
for i in c color
The rationale here is that these quotes are useless, wasting
space (and line length), but more importantly that they are
superstitions. They don't do anything, but look like they do.
The counter argument is that they can be kept in case of later
changes, or that they make the intent clear - "this is supposed to be
a string we pass".
The problem is that under TSAN, the timing of signals becomes very weird and
exposes some real race conditions. We will need to re-design how signal
event handlers work.
This test launches two background processes and is sensitive to
interleaving of output. Fix it so that newlines are not output by
the background process.
Hopefully this fixes the flakiness of this test.
f8ba0ac5bf introduced a bug where INT handlers would themselves be
cancelled, due to the signal. Defer processing handlers until the
parser is ready to execute more fish script.
Fixes the interactive case of #6649.
If a background process runs a fish function which launches another
background process, ensure that these background procs get different
pgroups. Add a test for it.
This executes `fish --no-execute` a whole bunch of times in order to
find syntax errors in our fish scripts.
tests/ is exempt because it contains syntax errors on purpose.
This is a great idea in principle, but it takes ~4s on my system.
It used to error out when a command wasn't known, even when it was a
function that would only be discovered via autoloading.
Now we just accept that a command doesn't exist when no-execute is
given - we're not gonna execute it anyway.
Also, in the same breath stop counting empty commands after expansion
and empty wildcard expansions as errors - these depend on runtime
values, so we can't verify them without executing.
Fixes#977.
(note that it still executes "time", but that's another commit)
Appending to an fd doesn't really make sense, but we allowed the
syntax previously and it was actually used.
It's not too harmful to allow it, so let's just do that again.
For the record: Zsh also allows it, bash doesn't.
Fixes#6614
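That is, something like:
```fish
# Accepted again and treated like a plain fd duplication:
echo "something went wrong" >>&2
```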
Glob ordering is used in a variety of places, including figuring out
conf.d and really needs to be stable.
Other ordering, like completions, is really just cosmetic and can
change if it makes for a nicer experience.
So we uncouple it by copying the wcsfilecmp from 3.0.2, which will
return the ordering to what it was in that release.
Fixes#6593
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly none-the-less.
This one tests a bunch of separate stuff, so we put it into a few
different files.
The main, new one is "slices.fish", which tests various index expressions.
This makes two changes:
1. Remove the 'brace_text_start' idea. The idea of 'brace_text_start' was
to prevent emitting `BRACE_SPACE` at the beginning or end of an item. But
we later strip these off anyways, so there is no apparent benefit. If we
are not doing brace expansion, this prevented emitting whitespace at the
beginning or end of an item, leading to #6564.
2. When performing brace expansion, only stomp the space character with
`BRACE_SPACE`; do not stomp newlines and tabs. This is because, if the
stomped character came from a newline or tab literal, then we would have effectively
replaced a newline or tab with a space, so this is important for #6564 as
well. Moreover, it is not easy to place a literal newline or tab inside a
brace expansion, and users who do probably do not mean for it to be
stripped, so I believe this is a good change in general.
Fixes#6564
Just another version of the error. We still want to get a bug report if it
ever triggers a *wrong* error, so we still list all the options
instead of going for `.*option:.*Z.*`.
Fixes#6554
Solaris/OpenIndiana/Illumos `rm` checks that and errors out.
In these cases we don't actually need it to be a part of $PWD as
it's just for cleanup, so we `cd` out before.
See #5472
See 1ee57e9244
Fixes#6555
Fixes#6558
Prior to this fix, fish was rather inconsistent in when $status gets set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
macOS `mktemp -d` likes to return symlinks. Guard against that possibility.
That allows the test to succeed when run directly, instead of through the
build target.
It was possible to start the new job and execute `jobs` again before
the job died (or we noticed it did), so the test would fail.
To properly test, we need to ensure the job has been removed. `wait`
should do it.
complete -C'echo $HOM ' would complete $HOM instead of a new token.
Fixes another regression introduced in
6fb7f9b6b - Fix completion for builtins with subcommands
It's now good enough to do so.
We don't allow grid-alignment:
```fish
complete -c foo -s b -l barnanana -a '(something)'
complete -c foo -s z              -a '(something)'
```
becomes
```fish
complete -c foo -s b -l barnanana -a '(something)'
complete -c foo -s z -a '(something)'
```
It's just more trouble than it is worth.
The one part I'd change:
We align and/or'd parts of an if-condition with the in-block code:
```fish
if true
and false
dosomething
end
```
becomes
```fish
if true
    and false
    dosomething
end
```
but it's not used terribly much and if we ever fix it we can just
reindent.
for-loops that were not inside a function could overwrite global
and universal variables with the loop variable. Avoid this by making
for-loop-variables local variables in their enclosing scope.
This means that if someone does:
set a global
for a in local; end
echo $a
The local $a will shadow the global one (but not be visible in child
scopes). Which is surprising, but less dangerous than the previous
behavior.
The detection whether the loop is running inside a function was failing
inside command substitutions. Remove this special handling of functions
altogether, it's not needed anymore.
Fixes#6480
Store the entire function declaration, not just its job list.
This allows us to extract the body of the function complete with any
leading comments and indents.
Fixes#5285
In particular, this allows `true && time true`, or `true; and time true`,
and both `time not true` as well as `not time true` (like bash).
time is valid only as job _prefix_, so `true | time true` could call
`/bin/time` (same in bash)
See discussion in #6442
Extend the commit 8e17d29e04 to block processes, for example:
begin ; stuff ; end
or if/while blocks as well.
Note there's an existing optimization where we do not create a job for a
block if it has no redirections.
job_promote attempts to bring the most recently "touched" job to the front
of the job list. It did this via:
std::rotate(begin, job, end)
However this has the effect of pushing job-1 to the end. That is,
promoting '2' in [1, 2, 3] would result in [2, 3, 1].
Correct this by replacing it with:
std::rotate(begin, job, job+1);
now we get the desired [2, 1, 3].
Also add a test.
It looks like the last status already contains the signal that cancelled
execution.
Also make `fish -c something` always return the last exit status of
"something", instead of hardcoded 127 if exited or signalled.
Fixes#6444
This was previously required so that, if there was a redirection to a
file, we would fork a process to create the file even if there was no
output. For example `echo -n >/tmp/file.txt` would have to create
file.txt even though it would be empty.
However now we open the file before fork, so we no longer need special
logic around this.
Do this only when splitting on IFS characters, which are usually
whitespace characters --- read --delimiter is unchanged; it still
consumes no more than one delimiter per variable. This seems better,
because it allows arbitrary delimiters in the last field.
Fixes#6406
This adds a test for the obscure case where an fd is redirected to
itself. This is tricky because the dup2 will not clear the CLO_EXEC bit.
So do it manually; also posix_spawn can't be used in this case.
The IO cleanup left file redirections open in the child. For example,
/bin/cmd < file.txt would redirect stdin but also leave the file open.
Ensure these get closed properly.
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
begin
echo a
end < file.txt
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes#6397
(command pwd) uses the system's implementation of pwd. At least the GNU
coreutils implementation defaults to -P, which resulted in symlinks being
expanded when switching between directories with nextd/prevd.
This did some weird unescaping to try to extract the first word.
So we're now more likely to be *correct*, and the alias benchmark is
about 20% *faster*.
Call it a win-win.
This splits a string into variables according to the shell's
tokenization rules, considering quoting, escaping etc.
This runs an automatic `unescape` on the string so it's presented like
it would be passed to the command. E.g.
printf '%s\n' a\ b
returns the tokens
printf
%s\n
a b
It might be useful to add another mode "--tokenize-raw" that doesn't
do that, but this seems to be the more useful of the two.
Fixes#3823.
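Assuming the flag spellings `--tokenize` and `--list`, that example looks roughly like:
```fish
echo "printf '%s\\n' a\\ b" | read --tokenize --list parts
count $parts      # 3
echo $parts[3]    # a b
```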
This added the function offset *again*, but it's already included in
the line for the current file.
And yes, I have explicitly tested a function file with a function
defined at a later line.
Fixes#6350
Since #6287, bare variable assignments do not parse, which broke
the "Unsupported use of '='" error message.
This commit catches parse errors that occur on bare variable assignments.
When a statement node fails to parse, then we check if there is at least one
prefixing variable assignment. If so, we emit the old error message.
See also #6347
This adds initial support for statements with prefixed variable assignments.
Statements like this are supported:
a=1 b=$a echo $b # outputs 1
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right hand side may be any valid string token, like a command
substitution, or a brace expansion.
Since `a=* foo` is equivalent to `begin set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior, e.g. the command below
can safely be used to check if a directory is empty.
x=/nothing/{,.}* test (count $x) -eq 0
Generic file completion is done after the equal sign, so for example
pressing tab after something like `HOME=/` completes files in the
root directory
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(but the git completion does not use the variable as of now).
The variable assignment is highlighted like an argument.
Closes#6048
Presently the completion engine ignores builtins that are part of the
fish syntax. This can be a problem when completing a string that was
based on the output of `commandline -p`. This changes completions to
treat these builtins like any other command.
This also disables generic (filename) completion inside comments and
after strings that do not tokenize.
Additionally, comments are stripped off the output of `commandline -p`.
Fixes#5415
Fixes#2705
PATH and CDPATH have special behavior around empty elements. Express this
directly in env_stack_t::set rather than via variable dispatch; this is
cleaner.
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` as in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because it is noisy; however autoloading is.
Fixes#3427
Until now, something like
`math '7 = 2'`
would complain about a "missing" operator.
Now we print an error about logical operators not being supported and
point the user towards `test`.
Fixes#6096
The `--entire` would enable output even though the `--quiet` should
have silenced it. These two don't make any sense together so print an
error, because the user could have just left off the `-q`.
This spewed errors because the `math` invocation got no second
operand:
Testing file checks/sigint.fish ... math: Error: Too few arguments
'1571487730 -'
but only if the `date` didn't do milliseconds, which is the case on
FreeBSD and NetBSD.
(also force the variable to be global - we don't want to have a
universal causing trouble here)
Universal exported variables (created by `set -xU`) used to show up
both as universal and global variable in child instances of fish.
As a result, when changing an exported universal variable, the
new value would only be visible after a new login (or deleting the
variable from global scope in each fish instance).
Additionally, something like `set -xU EDITOR vim -g` would be imported
into the global scope as a single word resulting in failures to
execute $EDITOR in fish.
We cannot simply give precedence to universal variables, because
another process might have exported the same variable. Instead, we
only skip importing a variable when it is equivalent to an exported
universal variable with the same name. We compare their values after
joining with spaces, hence skipping those imports does not change the
environment fish passes to its children. Only the representation in
fish is changed from `"vim -g"` to `vim -g`.
Closes#5258.
This eliminates the issue #5348 for universal variables.
Consider a group of short options, like -xzPARAM, where x and z are options and z takes an argument.
This commit enables completion of the argument to the last option (z), both within the same
token (-xzP) or in the next one (-xz P).
complete -C'-xz' will complete only parameters to z.
complete -C'-xz ' will complete only parameters to z if z requires a parameter
otherwise, it will also complete non-option parameters
To do so this implements a heuristic to differentiate such strings from single long options. To
detect whether our token contains some short options, we only require the first character after the
dash (here x) to be an option. Previously, all characters had to be short options. The last option
in our example is z. Everything after the last option is assumed to be a parameter to the last
option.
Assume there is also a single long option -x-foo, then complete -C'-x' will suggest both -x-foo and
-xy. However, when the single option x requires an argument, this will not suggest -x-foo.
However, I assume this will almost never happen in practice since completions very rarely mix
short and single long options.
Fixes#332
This stops reading argument names after another option appears. It does not break any previous uses and in fact fixes uses like
```fish
function foo --argument-names bar --description baz
```
* `function` command handles options after argument names (Fixes#6186)
* Removed unnecessary test
I got tired of seeing ' ... ok (0 sec)' so now with GNU date/gdate
installed there is millisecond output shown. One can get rough
nanoseconds from gdate.
This sequence can be generated by control-spacebar. Allow it to be bound
properly.
To do this we must be sure that we never round-trip the key sequence
through a C string.
Meaning empty variables, command substitutions that don't print
anything.
A switch without an argument
```fish
switch
case ...
end
```
is still a syntax error, and more than one argument is still a runtime
error.
The none-argument matches either an empty-string `case ''` or a
catch-all `case '*'`.
Fixes#5677.
Fixes#4943.
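For instance (sketch):
```fish
set -l maybe_empty    # zero elements, so it expands to nothing
switch $maybe_empty
    case ''
        echo "empty or unset"
    case '*'
        echo "anything else"
end
# prints "empty or unset"
```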
This test uses universal variables, and so it can fail when run
multiple times.
It might be a good idea to do this in general, but for now let's just
try it here.
Previously when propagating explicitly separated output, we would early-out
if the buffer was empty, where empty meant contains no characters. However
it may contain one or more empty strings, in which case we should propagate
those strings.
Remove this footgun "empty" function and handle this properly.
Fixes#5987
I tested this manually (`littlecheck.py -s fish=fish tests/checks/eval.fish`) from the base directory, which means I got
"tests/checks/eval", while the real test gets "checks/eval".
I then reran `make test_fishscript`, but that didn't pull in the
updated test - we should really handle that better.
I'm gonna add more tests to this and I don't want to touch the old stuff.
Notice that this needs to have the output of the complete_directories
test adjusted because this one now runs later.
That's something we should take into account in future.
history now often writes to the history file asynchronously, but the history
test expects to find the text in the file immediately after running the
command. Hack a bit in history to make this test more reliable.
This required a bit of thinking.
What we do is we have one test that fakes $HOME, and then we do the
various config tests there.
The fake config we have is reused and we exercise all of the same codepaths.
This prints a green "ok" with the duration, just like the rest of the
tests.
Note that this clashes a bit with
https://github.com/ridiculousfish/littlecheck/pull/3.
(also don't check for python again and again and again)
This is a bit weird sometimes, e.g. to test the return status (that
fish actually *returns $status*), we use a #RUN line with %fish
invoking %fish, so we can use the substitution.
Still much nicer.
The missing scripts are those that rely on config.
Especially as, in this case, the documentation is quite massive.
Caught by porting string's test to littlecheck.
See #3404 - this was already supposed to be included.
This is a nice test (ha!) for how this works and what littlecheck can
do for us.
1. Input is now the actual file, not "Standard Input" anymore. So
any errors mentioning that now include the filename.
2. Regex are really nice for filenames, but especially for line
numbers
3. It's much nicer to have the output where it's created, instead of
needing to follow three files at the same time.
Instead of requiring a flag to enable newline trimming, invert it so the
flag (now `--no-trim-newlines`) disables newline trimming. This way our
default behavior matches that of sh's `"$(cmd)"`.
Also change newline trimming to trim all newlines instead of just one,
again to match sh's behavior.
The `string collect` subcommand behaves quite similarly in practice to
`string split0 -m 0` in that it doesn't split its output, but it also
takes an optional `--trim-newline` flag to trim a single trailing
newline off of the output.
See issue #159.
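A minimal sketch of the intended use:
```fish
# Keep multi-line output as a single element, newlines intact:
set -l body (printf '%s\n' first second | string collect)
count $body    # 1
```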
This adds support for .check files inside the tests directory. .check
files are tests designed to be run with littlecheck.
Port printf test to littlecheck and remove the printf.in test.
It's always a bit annoying that `*` requires quoting.
So we allow "x" as an alternative, only it needs to be followed by
whitespace to distinguish it from "0x" hexadecimal notation.
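So, for example:
```fish
math 5 x 3     # 15, no quoting needed
math 0x11      # 17 - "0x" followed directly by digits is still hexadecimal
```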
This makes the following changes:
1. Events in background threads are executed in those threads, instead of
being silently dropped
2. Blocked events are now per-parser instead of global
3. Events are posted in builtin_set instead of within the environment stack
The last one means that we no longer support event handlers for implicit
sets like (example) argv. Instead only the `set` builtin (and also `cd`)
post variable-change events.
Events from universal variable changes are still not fully rationalized.
When setting a variable without a specified scope, we should give priority
to an existing local or global above an existing universal variable with
the same name.
In 16fd780484 there was a regression that
made universal variables have priority.
Fixes#5883
Brace expansion with single words in it is quite useless - `HEAD@{0}`
expanding to `HEAD@0` breaks git.
So we complicate the rule slightly - if there is no variable expansion
or "," inside of braces, they are just treated as literal braces.
Note that this is technically backwards-incompatible, because
echo foo{0}
will now print `foo{0}` instead of `foo0`. However that's a
technicality because the braces were literally useless in that case.
Our tests needed to be adjusted, but that's because they are meant to
exercise this in weird ways.
I don't believe this will break any code in practice.
Fixes#5869.
This read something like `o=!_validate_int`, and the flag modifier
reading kept the pointer after the `!`, so it created a long flag
called `_validate_int`, which meant it would not only error out from
```fish
argparse 'i=!_validate_int' 'o=!_validate_int' -- $argv
```
with "Long flag '_validate_int' already defined", but also set
$_flag_validate_int.
Fixes#5864.
As mentioned in #2900, something like
```fish
test -n "$var"; and set -l foo $var
```
is sufficiently idiomatic that it should be allowable.
Also fixes some additional weirdness with semicolons.
This removes semicolons at the end of the line and collapses
consecutive ones, while replacing meaningful semicolons with newlines.
I.e.
```fish
echo;
```
becomes
```fish
echo
```
but
```fish
echo; echo
```
becomes
```fish
echo
echo
```
Fixes#5859.
This keeps all unknown options in $argv, so
```fish
argparse -i a/alpha -- -a banana -o val -w
```
results in $_flag_a set to banana, and $argv set to `-o val -w`.
This allows users to use multiple argparse passes, or to simply avoid
specifying all options e.g. in completions - `systemctl` has 46 of
them, most not having any effect on the completions.
Fixes#5367.
This is a long-standing issue with how `complete --do-complete` does
its argument parsing: It takes an optional argument, so it has to be
attached to the token like `complete --do-complete=foo` or (worse)
`complete -Cfoo`.
But since `complete` doesn't take any bare arguments otherwise (it
would error with "too many arguments" if you did `complete -C foo`) we
can just take one free argument as the argument to `--do-complete`.
It's more of a command than an option anyway, since it entirely
changes what the `complete` call _does_.
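That is:
```fish
# Both now work; previously only the attached form did:
complete -C'git ch'
complete -C 'git ch'
```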
Prior to this fix, a job would only inherit a pgrp from its parent if the
first command were external. There seems to be no reason for this
restriction and this causes tcsetpgrp() churn, potentially causing SIGTTIN.
Switch to unconditionally inheriting a pgrp from parents.
This should fix most of #5765, the only remaining question is
tcsetpgrp from builtins.
Pursuant to 0be7903859, there still
remained one issue with the test when run from within a symlinked
directory after fish gained support for cding into symlinks.
This change should make the test function OK both when the tests are run
out of a PWD containing a symlink in its hierarchy and when run
otherwise.
The final test in `realpath.in` was based on the no-longer-valid
assumption that $PWD cannot be a symlink. Since the recent changes in
fish 3.0 to allow `cd`ing into "virtual" directories preserving symlinks
as-is, when `make test` was run from a path that contained a symlink
component, this test would fail the `pwd-resolved-to-itself` check.
As the test is not designed to initialize and then cd into an absolute path
guaranteed not to be symbolic, this final check is just wrong.
This has been driving me nuts for years. The output of the diff emitted
when a test fails was always reversed, because the diff tool is called
with `${difftool} ${new} ${old}` so all the `-` and `+` contexts are
reversed, and the highlights are all screwed up.
The output of a `make test` run should show what has changed from the
baseline/expected, not how the expected differs from the actual. When
considered from both the perspective of intentional changes to the test
outputs and failed test outputs, it is desirable to see how the test
output has changed from the previously expected, and not the other way
around.
(If you were used to the previous behavior, I apologize. But it was
wrong.)
This was printed basically everywhere.
The user knows what they executed on standard input.
A good example:
```fish
set c (subme 513)
```
used to print
```
fish: Too much data emitted by command substitution so it was discarded
set -l x (string repeat -n $argv x)
^
in function 'subme'
called on standard input
with parameter list '513'
in command substitution
called on standard input
```
and now it is
```
fish: Too much data emitted by command substitution so it was discarded
set -l x (string repeat -n $argv x)
^
in function 'subme' with arguments '513'
in command substitution
```
See #5434.
This printed things like
```
in function 'f'
called on standard input

in function 'd'
called on standard input

in function 'b'
called on standard input

in function 'a'
called on standard input
```
As a first step, it removes the empty lines so it's now
```
in function 'f'
called on standard input
in function 'd'
called on standard input
in function 'b'
called on standard input
in function 'a'
called on standard input
```
See #5434.
If a function process is deferred, allow it to be unbuffered.
This permits certain simple cases where functions are piped to external
commands to execute without buffering.
This is a somewhat-hacky stopgap measure that can't really be extended
to more general concurrent processes. However it is overall an improvement
in user experience that might help flush out some bugs too.
POSIX dictates here that incomplete conversions, like in
printf %d\n 15.2
or
printf %d 14g
are still printed along with any error.
This seems alright, as it allows users to silence stderr to accept incomplete conversions.
This commit implements it, but what's a bit weird is the ordering between stdout and stderr,
causing the error to be printed _after_, like
15
14
15.1: value not completely converted
14,2: value not completely converted
but that seems like a general issue with how we buffer the streams.
(I know that nonfatal_error is a copy of most of fatal_error - I tried
differently, and va_* is weird)
Fixes#5532.
Before this change, - was sorted with other punctuation before
A-Z. Now, it sorts above the rest of the characters.
This has a practical effect on completions, where when there are
both -s and --long with the same description, the short option
is now before the long option in the pager, which is what is now
selected when navigating `foo -<TAB>`. The long options can be
picked out with `foo --<TAB>`. Before, short options which
duplicated a long option literally could not be selected by
any means from the pager.
Fixes#5634
This tweaks wcsfilecmp such that certain punctuation characters will
come after A-Z.
A big win with `set <TAB>` - the __prefixed fish junk now comes
after the stuff users should care about.
This disables an extra round of escaping in the `string replace -r`
replacement string.
Currently, to add a backslash to an a or b (to "escape" it):
string replace -ra '([ab])' '\\\\\\\$1' a
7 backslashes!
This removes one of the layers, so now 3 or 4 works (each one escaped
for the single-quotes, so pcre receives two, which it reads as one literal):
string replace -ra '([ab])' '\\\\$1' a
This is backwards-incompatible as replacement strings will change
meaning, so we put it behind a feature flag.
The name is kinda crappy, though.
Fixes#5474.
As a simple replacement for `wc -l`.
This counts both lines on stdin _and_ arguments.
So if "file" has three lines, then `count a b c < file` will print 6.
And since it counts newlines, like wc, `echo -n foo | count` prints 0.
It turns out that `string split0` didn't actually ever do any
splitting. The arg_iterator_t already split stdin on NUL, and split0 just
performed an additional search that could never succeed (since
arguments from argv already can't contain NUL).
Let the arg_iterator_t not perform any splitting if asked, and then
let split0 split in 0.
One slight wart is that split0 ignores a trailing NUL, which normal
split doesn't.
Fixes#5701.
In a galaxy far, far away, event_blockage_t was intended to block only certain
events. But it always just blocked everything. Eliminate the event block
mask.
On some systems, this sometimes uses unicode quotation marks.
Not on mine, but on Travis it does.
The only other workaround I can think of is setting locale to C, but
that implies not being able to test anything unicode-related in the
entire invocation tests.
So for now disable this test.
Illumos/OpenIndiana/SunOS/Solaris has an rm/rmdir that tries to
protect the user by not allowing them to delete $PWD.
Normally, this would be a good thing as deleting $PWD is a stupid
thing to do. Except in this case, we absolutely need to do that.
So instead we weasel around it by invoking an sh to cd out of the
directory to then invoke an `rmdir` to delete it. That should throw
off any attempts at protection (we could also have tried $PWD/. or
similar, but that's possibly still protected against).
This is the last failing test on
Illumos/OpenIndiana/SunOS/Solaris/afunnyquip, so:
Fixes#5472.
This tested #1728, where redirecting a directory (`begin; something;
end < .`) would cause `status` to misbehave.
Unfortunately, on Illumos/OpenIndiana/SunOS, this returns a different
error (EINVAL instead of EISDIR), so we can't check that with our test harness, because
we can't redirect it.
Since it's not important that this gives the same error across
systems (and indeed we provide no way of intercepting the error!),
use an invocation test instead, because that allows different output per-uname.
See #5472.
300ms was waaay too long, and even 100ms wasn't necessary.
Emacs' evil mode uses 10ms (0.01s), so let's stay a tad higher in case
some terminals are slow.
If anyone really wants to be able to type alt+h with escape, let them
raise the timeout.
Fixes#3904.
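Anyone who really does want the old behavior can raise it again:
```fish
set -g fish_escape_delay_ms 300
```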
This is a large change to how io_buffers are filled. The essential problem
comes about with code like (example):
echo ( /bin/pwd )
The output of /bin/pwd must go to fish, not the tty. To arrange for this,
fish does the following:
1. Invoke pipe() to create a pipe.
2. Add an io_bufferfill_t redirection that owns the write end of the pipe.
3. After fork (or equiv), call dup2() to replace pwd's stdout with this pipe.
Now when /bin/pwd writes, it will send output to the read end of the pipe.
But who reads it?
Prior to this fix, fish would do the following in a loop:
1. select() on the pipe with a 10 msec timeout
2. waitpid(WNOHANG) on the pwd proc
This polling is ugly and confusing and is what is replaced here.
With this new change, fish now reads from the pipe via a background thread:
1. Spawn a background pthread, which select()s on the pipe's read end with
a long (100 msec) timeout.
2. In the foreground, waitpid() (allowing hanging) on the pwd proc.
The big win here is a major simplification of job_t::continue_job() since
it no longer has to worry about filling buffers. This will make things
easier for concurrent execution.
It may not be obvious why the background thread still needs a poll (100 msec).
The answer is for cases where the write end of the fd escapes, in particular
background processes invoked inside command substitutions. psub is perhaps
the only important case of this (other shells typically just hang here).