This was typically overridden by "too many/few arguments", but it's
actually incorrect:
sin(55
has the correct number of arguments to `sin`, but it's lacking
the closing `)`.
We've heard news of this regressing, so let's add the test that should
have been there already (mea culpa!).
Because we now use POSIX_VDISABLE, this should also work in tandem
with ctrl-space (which sends NUL), but we can't test *that* because
some systems might not have POSIX_VDISABLE.
This switch is no longer necessary when only one command is given.
Internally completions are stored separately for each command,
so we only ever print one command name per "complete" line anyway.
These aliases seem to be common, see #7389 and others. This prevents
recursion on that example, so `alias ssh "env TERM=screen ssh"` will just
have the same completions as ssh.
Checking the last token is a heuristic which hopefully works for most
cases. Users are encouraged to use functions instead of aliases.
Ensure that the increment= param is set via keyword, not via positional arg.
This mistake was masking a bug where the "^a b c" match was not being tested,
because it was being set as the value for increment!
This switches the 'increment' param from "after" to "before." Instead
of expect_prompt saying if the next prompt will be incremented, each
call site says if it should have been incremented since the last prompt.
I am not sure why this worked, actually.
These tests did not have $fish set anywhere, and on my fresh OpenBSD
VM it ended up calling whatever that system calls "fish" (I think it's that
"Go fish!" game?).
If the padding is not evenly divisible by the char's width,
we pad the remainder with spaces, so the total width of the output is correct.
Also add completions, changelog entry, adjust documentation, add examples
with emoji and some tests. Apply some minor style nitpicks and avoid extra
allocations of the input strings.
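A minimal sketch of what that means in practice, assuming the `--char`/`--width` spelling of the options (the concrete widths are just illustrative):
```fish
# pad "x" to width 4 with a double-width emoji: 3 columns of padding are needed,
# so one emoji covers 2 of them and the leftover column is filled with a space
string pad --char 📦 --width 4 x
```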
Prior to this change, tab completing with a variable assignment like
`VAR=val cmd<tab>` would parse out and apply VAR=val, then recursively
invoke completions. This caused some awkwardness around the wrap chain -
if a wrapped command had a variable completion we risked infinite
recursion. A secondary problem is that we would run any command
substitutions inside variable assignment, which the user does not expect
to run until pressing enter.
With this change, we explicitly track variable assignments encountered
during tab completion, including both those explicitly given on the
command line and those found during wrap chain walk. We then apply them
while suppressing command substitutions.
In preparation for applying variable assignments (VAR=VAL cmd), separate
them out from the command when performing completions. This includes both
those that the user typed, and any that come about through
completion --wraps.
The "wrap chain" refers to a sequence of commands which wrap other
commands, for completion purposes. One possibility is that a wrap chain
will produce a combinatorial explosion or even an infinite loop, so there
needs to be logic to prevent that. Part of that logic is encapsulated in a
visited set (wrap_chain_visited_set_t) to prevent exploring the same item
twice.
Prior to this change, we stored pairs (command, wrapped_command). But we
only really need to store the wrapped command. Switch to that.
One consequence is that if a command wraps another command in more than
one way, we won't explore both ways. This seems unlikely in practice.
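For reference, a hypothetical wrap chain set up from fish script (the command names are made up); completing the outermost command walks the chain, and the visited set is what keeps a cyclic `--wraps` from looping forever:
```fish
# g wraps hub, hub wraps git, so completions for `g` fall through to git
complete -c hub --wraps git
complete -c g --wraps hub
```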
Detect recursive calls to builtin complete and the internal completion in
the same place.
In 0a0149cc2 (Prevent infinite recursion when completion wraps variable assignment)
we stopped printing an error when completing certain aliases like:
alias vim "A=B vim"
But we also gave no completions.
We could make this case work, but I think that trying to salvage situations
like this one is way too complex. Instead, let the user know by printing an
error. Not sure if the style of the error fits.
We could add some heuristic to alias to not add --wraps in some cyclic cases.
This reads any additional positional arguments given to `fish -c` into
$argv.
We don't handle the first argument specially (as `$0`) as that's confusing and
doesn't seem very useful.
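A quick sketch of the resulting behavior:
```fish
# extra positional arguments show up as $argv inside the -c string
fish -c 'echo $argv[1]; count $argv' foo bar
# prints "foo", then "2"
```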
Fixes #2314.
This makes history searches case-insensitive, unless the search string
contains an uppercase character.
This is what vim calls "smartcase".
Fixes #7273.
Closes #7344
Apply a targeted fix to the place where complete() is called to handle nested
variable assignments. Sadly, reporting an error is probably not okay here,
because people might legitimately use aliases like:
alias vim "A=B command vim"
This is all a bit ugly, and I hope to find a cleaner solution. Supporting
completions on commandlines like `x=$PWD cd $x/ ` is a nice feature but it
comes with some complexity.
"function --argument" is not a thing, it's "--argument-names". This only
accidentally works because our getopt is awful and allows abbreviated
long options.
Similarly, one argparse test used "--d" instead of "-d" or "--def".
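For the record, the unabbreviated spelling the tests should be using:
```fish
# "--argument" only parses because our getopt accepts abbreviated long options
function greet --argument-names name
    echo "hello, $name"
end
```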
Since builtins don't actually have the streams connected, but instead
read input via the io_streams_t objects, this would just always say
what *fish's* fds were.
Instead, pass along some of the stream data to check those
specifically - nobody cares that `test`s fd 0 *technically* is stdin.
What they want to know is that, if they used another program in that
place, it would connect to the TTY.
This is pretty hacky - I abused static variables for this, but
since it's two bools and an int it's probably okay.
See #1228.
Fixes #4766.
This re-enables the test that eval retains pgroups, from #6806.
The old version was racy and failed a lot. In the new version, we use
temp files to resolve the race.
The case for symlinked directories being duplicated a lot isn't there,
but there *is* a use case for adding the symlink rather than the
target, and that's homebrew.
E.g. homebrew installs ruby into /usr/local/Cellar/ruby/2.7.1_2/bin,
and links to it from /usr/local/opt/ruby/bin. If we add the target, we
would miss updates.
Having path entries that point to the same location isn't a big
problem - it's a path lookup, so it takes a teensy bit longer. The
canonicalization is mainly so paths don't end up duplicated via weird
spelling and so relative paths can be used.
Taken from GNU realpath, this one makes realpath not resolve symlinks.
It still makes paths absolute and handles duplicate and trailing
slashes.
(useful in fish_add_path)
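A sketch of the intended use, assuming the flag keeps GNU's `-s`/`--no-symlinks` spelling:
```fish
# make the path absolute and clean up duplicate/trailing slashes,
# but keep the symlink itself unresolved
realpath -s /usr/local/opt/ruby//bin/
```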
With a commandline like
```
a b c d
```
and the cursor at the beginning, this would eat "a b", which isn't a
sensible bigword.
Bigword should be "a word, with optional leading whitespace".
This was caused by an overly zealous state-machine that always ate one
char and only *then* started eating leading whitespace.
Instead eat *a character*, and if it was whitespace go on eating
whitespace, and if it was a printable go straight to only eating
printables.
Fixes #7325.
This can easily lead to an infinite loop, if a variable handler
triggers a repaint and the variable is set in the prompt, e.g. some of
the git variables.
A simple way to reproduce:
```fish
function fish_mode_prompt
    commandline -f repaint
end
```
Repainting executes the mode prompt, which triggers a repaint, which
triggers the mode prompt, ....
So we just set a flag and check it.
Fixes #7324.
Currently, completions have to be specified like
```fish
complete -c foo -l opt
```
while
```fish
complete foo -l opt
```
just complains about there being too many arguments.
That's kinda useless, so we just assume if there is one left-over
argument that it's meant to be the command.
Theoretically we could also use *all* the arguments as commands to
complete, but that seems unlikely to be what the user wants.
(I don't think multi-command completions really happen)
Currently only `complete` will list completions, and it will list all
of them.
That's a bit ridiculous, especially since `complete -c foo` just does nothing.
So just make `complete -c foo` list all the completions for `foo`.
Previously, when a command wasn't found, fish would emit the
"fish_command_not_found" *event*.
This was annoying as it was hard to override (the code ended up
checking for a function called `__fish_command_not_found_handler`
anyway!), the setup was ugly,
and it's useless - there is no use case for multiple command-not-found handlers.
Instead, let's just call a function `fish_command_not_found` if it
exists, or print the default message otherwise.
The event is completely removed, but because a missing event is not an error
(MEISNAE in C++-speak) this isn't an issue.
Note that, for backwards-compatibility, we still keep the default
handler function around even tho the new one is hard-coded in C++.
Also, if we detect a previous handler, the new handler just calls it.
This way, the backwards-compatible way to install a custom handler is:
```fish
function __fish_command_not_found_handler --on-event fish_command_not_found
# do a little dance, make a little love, get down tonight
end
```
and the new hotness is
```fish
function fish_command_not_found
# do the thing
end
```
Fixes #7293.
Now command, jobs, type, abbr, builtin, functions and set take `-q` to
query for existence, but the long option is inconsistent.
The first three use `--quiet`, the rest use `--query`. Add `--query`
to the first three, but keep `--quiet` around.
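So after this, something like the following should work (a sketch; `--quiet` stays as an alias):
```fish
command --query git; and echo "git is available"
type --query ls
```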
Fixes #7276.
Just as `math "bitand(5,3)"` and `math "bitor(6,2)"`.
These cast to long long before doing their thing,
so they truncate to an integer, producing weird results with floats.
That's to be expected because float representation is *very*
different, and performing bitwise operations on floats feels quite useless.
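For illustration (results follow from the truncation described above):
```fish
math "bitand(5, 3)"    # 1
math "bitor(6, 2)"     # 6
math "bitand(5.5, 3)"  # also 1 - the operands are truncated to integers first
```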
Fixes #7281.
Prior to this change, if we saw more than one repaint readline command in
a row, we would try to ignore the second one. However this was never the
right thing to do since sometimes we really do need to repaint twice in a
row (e.g. the user hits Ctrl+L twice). Previously we were saved by the
bugginess of this mechanism, but with the repainting refactoring we see
missing redraws.
Remove the coalescing logic and add a test. Fixes #7280.
Because TERM was set to something other than 'dumb', we were subject to
syntax highlighting and other interactive features that would affect the
output. In practice we were getting lucky timing-wise, but with upcoming
interactive changes syntax highlighting started to fail this test.
It could be nice to use a heuristic for this in future, but for now let's
stick to the old behavior so we can keep formatting scripts without occasional
bad formatting changes.
A heuristic could also be used to break lines after |, && or || but I don't
think there is much need for that at the moment.
Closes #7252
This concerns code like the following:
while true ; sleep 100; end
Here 'while' is a "simple block execution" and does not create a new job,
or get a pgid. Each 'sleep' however is an external command execution, and
is treated as a distinct job. (bash is the same way). So `while` and
`sleep` are always in different job groups.
The problem comes about if 'sleep' is cancelled through SIGINT or SIGQUIT.
Prior to 2a4c545b21, if *any* process got a SIGINT or SIGQUIT, then fish
would mark a global "stop executing" variable. This obviously prevents
background execution of fish functions.
In 2a4c545b21, this was changed so only the job's group gets marked as
cancelled. However in the case of one job group spawning another, we
weren't propagating the signal.
This adds a signal to parse_execution_context which the parser checks after
execution. It's not ideal since now we have three different places where
signals can be recorded. However it fixes this regression which is too
important to leave unfixed for long.
Fixes #7259
Might help figuring out where this times out on CI?
We're waiting *20 seconds* for the output to appear, there's no way
that's too slow. So maybe we're going too fast elsewhere?
This indents continuations after pipes and conjunctions if they contain
a newline.
Example:
```fish
cmd1 &&
    cmd2
```
But it avoids the "double indent" if it indented unconditionally:
```fish
cmd1 | begin
    cmd2
end
```
More work towards improving #7252
Prior to this change, when emitting gap text (comments, newlines, etc),
fish_indent would use the indentation of the text at the end of the gap.
But this has the wrong result for this case:
```fish
begin
    command
    # comment
end
```
as the comment would get the indent of the 'end'. Instead use the indent
computed for the gap text itself.
Addresses one case of #7252.
It's useless - `expect` has a timeout anyway, and it defaults to 5s,
so these 0.5s sleeps just mean it'll always take at least 0.5s.
Sometimes it is useful to let things settle before *sending* text, and
it would be nice to be able to set the timeout for each expect
separately, but just adding to the timeout isn't useful.
This one sometimes fails with a zombie detected, so I'm assuming it's
too fast for reaping to happen, so we add another 100ms sleep.
Yeah, this isn't great but...eh
This can be used to determine whether the previous command produced a real status, or just carried over the status from the command before it. Backgrounded commands and variable assignments will not increment status_generation, all other commands will.
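A sketch of the kind of check this enables, e.g. in a prompt (the `__last_seen_generation` variable is made up for the example):
```fish
function fish_prompt
    set -l last_status $status
    # only mention $status if a new foreground command ran since the last prompt
    if test "$status_generation" != "$__last_seen_generation"
        set -g __last_seen_generation $status_generation
        test $last_status -ne 0; and echo -n "[$last_status] "
    end
    echo -n '> '
end
```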
This changes how fish attempts to protect itself from calling tcsetpgrp() too
aggressively. Recall that tcsetpgrp() will "force" itself, if SIGTTOU is
ignored (which it is in fish when job control is enabled).
Prior to this fix, we avoided SIGTTINs by only transferring the tty ownership
if fish was already the owner. This dated from a time before we had really
nailed down how pgroups should be assigned. Now we more deliberately assign a
job's pgroup so we don't need this conservative check.
However we still need logic to avoid transferring the tty if fish is not the
owner. The bad case is when job control is enabled while fish is running in the
background - here fish would transfer the tty and "steal" from the foreground
process.
So retain the checks of the current tty owner but migrate them to the point of
calling tcsetpgrp() itself.
Unfortunately this doesn't quite fix the issue with Pantheon
Terminal (#7913), as that somehow manages to re-set $VTE_VERSION by
the time littlecheck runs.
This switches fish_indent from parsing with parse_tree
to the new ast.
This is the most difficult transition because the new ast retains less
lexical information than the old parse tree. The strategy is:
1. Use parse_util_compute_indents to compute indenting for each token.
2. Compute the "gap text" between the text of significant tokens. This
contains whitespace, comments, etc.
3. "Fix up" the gap text while leaving the significant tokens alone.
I really kinda hate how insistent clang-format is on having line
breaks *IFF THE LINE IS TOO LONG*.
Like... lemme just add a break if it looks better, will you?
But it is the style at this time, so we shall tie an onion to our
belt.
A broken/missing optspec or `--` is a bug in the script using
argparse; an unknown option or invalid argument is a bug in how that script is used.
So in the former case print a stacktrace, because the person writing
the `argparse` call is at fault; in the latter, don't.
Fixes #6703.
Note: This includes a super cheesy thing to print variable contents.
The expect version has one that's a bit more elaborate (featuring a
marker setup), but tbh that doesn't seem to be worth it.
If we do need it, we can add it, but it seems more likely we'd just do
`set -S`, or do it in a check instead.
With the new pexpect based framework, bind and pipeline expect tests can
be removed.
Amusingly the complete.fish check required the existence of bind.expect.
Fix the check at the same time.
Make it easier to use pexpect and to understand its error messages.
Switch to a style in tests using bound methods, which makes them
less noisy to write.
This adds a new interactive test framework based on Python's pexpect. This
is intended to supplant the TCL expect-based tests.
New tests go in `tests/pexpects/`. As a proof-of-concept, the
pipeline.expect test and the (gnarly) bind.expect test are ported to the
new framework.
There's a terrible number of fish scripts that start with
set path (dirname (status filename))
And that's really just a bit boring.
So let's let it be
set path (status dirname)
This is a function you can either execute once, interactively, or
stick in config.fish, and it will do the right thing.
Some options are included to choose some slightly different behavior,
like setting $PATH directly instead of $fish_user_paths, or moving
already existing components to the front/back instead of ignoring
them, or appending new components instead of prepending them.
The defaults were chosen because they are the safest, and
especially because they allow it to be idempotent - running it again
and again and again won't change anything, it won't even run the
actual `set` because it skips that if all components are already present.
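Roughly, the intended usage (option names as I understand them):
```fish
# default: prepend to $fish_user_paths, skipping components that are already there
fish_add_path ~/.local/bin
# or: operate on $PATH directly, append instead, and move existing entries
fish_add_path --path --append --move /opt/foo/bin
```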
Fixes #6960.
Variables like $status and $history showed up in all scopes, including
universal, when querying with `set -q` or `set -S`.
This makes it so they all only count as set in global scope, because
we already only allow assignment to electric variables in global scope.
Fixes #7032
The default implementation will not print any output in that case, but this provides users with additional flexibility when it comes to customising the shell's behaviour.
This allows users to customise the behaviour of the shell by redefining the function. This is similar to how fish_title or fish_greeting behave, where the default implementation can be easily overridden.
The function receives as arguments the job id, command line, signal name and signal description.
Give string expansion an (optional) parent pgroup. This is threaded all
the way into eval(). This ensures that in a mixed pipeline like:
cmd | begin ; something (cmd2) ; end
that cmd2 and cmd have the same pgroup.
Add a test to ensure that command substitutions inherit pgroups
properly.
Fixes #6624
Changes it from
```
$fish_color_user: not set in local scope
$fish_color_user: set in global scope, unexported, with 1 elements
$fish_color_user[1]: length=3 value=|080|
$fish_color_user: set in universal scope, unexported, with 1 elements
$fish_color_user[1]: length=7 value=|brgreen|
```
(with the trailing empty line - not just a newline)
to
```
$fish_color_user: set in global scope, unexported, with 1 elements
$fish_color_user[1]: |080|
$fish_color_user: set in universal scope, unexported, with 1 elements
$fish_color_user[1]: |brgreen|
```
Prior to this fix, if job control is enabled but stdin is not a tty, we
would return an error from terminal_maybe_give_to_job which would cause us
to avoid waiting for the job. Instead just return notneeded.
Fixes #6573.
The description for an alias which already has escape sequences will
use backslash escapes for quoting; usually `string escape` can simply
quote it. Use a regex that accepts either escaping style.
Travis puts the commit message in an environment variable, so if it
contains the string `_flag` this would match TRAVIS_COMMIT_MESSAGE.
That happened in ca91c201c3, so the
tests failed.
We simply tighten the regex a little more, and make a commit message
that doesn't include the string.
This allows code of the form `if jobs -q $some_pid` in scripts to check whether a previously started job is still running. Previously this would return the correct value, but also print an error message.
The invalid argument errors will still be printed.
Added test cases for both.
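A sketch following the form from the message above (`$my_pid` stands in for the pid of a previously backgrounded job):
```fish
if jobs -q $my_pid
    echo "job $my_pid is still running"
else
    echo "job $my_pid already exited"
end
```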
Because `command ./somedir/somecommand` is okay.
Fixes test failure from aa304cbd3d.
Child directories in $PATH are still not suggested, as was the main
intention of the commit that introduced the tests:
8a3cf144f Don't include child directories of $PATH in completions.
We have now entirely switched the script tests to littlecheck.
Note: This adjusts the complete_directories test, because it removes a
directory that was created before by a .in test. There's no real
change in behavior.
This does require the test directory be cleaned, or the tests will fail.
test_util gets to stay for a while longer, because it sets up the
testing env (locale and such).
This, together with the other testX, really just tests some basic
syntax. So let's just call it "basic".
Note that this file uses escaped newlines on purpose, so restyling it
would currently break it. I'm not sure what the best thing to do here is.
Instead of invoking littlecheck.py independently for each file, pass
all files at once. This amortizes the Python startup cost, and reduces
the total test time by ~15 seconds (!).
This isn't quite the old-style test, but it checks some of the line
continuation stuff.
Note that littlecheck ignores leading whitespace, so testing the
actual indentation requires some more effort.
Things like
```fish
\
echo foo
```
or
```fish
echo foo; \
echo bar
```
are a formatting blunder and should be handled.
This makes it so the escaped newline is removed, and the
semicolon/token_type_end handling will then put the statements on
different lines.
One case this doesn't handle brilliantly is an escaped newline after a
pipe:
```fish
echo foo | \
cat
```
is turned into
```fish
echo foo | cat
```
which here works great, but in long pipelines can cause issues.
Pipes at the end of the line cause fish to continue parsing on the
next line, so this can just be written as
```fish
echo foo |
cat
```
for now.
This tries to see whether quotes prevent some expansion from happening. If it
detects a "weird" character it leaves the quotes in place, even in
some cases where the expansion would not actually trigger.
So
for i in 'c' 'color'
turns into
for i in c color
The rationale here is that these quotes are useless, wasting
space (and line length), but more importantly that they are
superstitions. They don't do anything, but look like they do.
The counter argument is that they can be kept in case of later
changes, or that they make the intent clear - "this is supposed to be
a string we pass".
The problem is that under TSAN, the timing of signals becomes very weird and
exposes some real race conditions. We will need to re-design how signal
event handlers work.
This test launches two background processes and is sensitive to
interleaving of output. Fix it so that newlines are not output by
the background process.
Hopefully this fixes the flakiness of this test.
f8ba0ac5bf introduced a bug where INT handlers would themselves be
cancelled, due to the signal. Defer processing handlers until the
parser is ready to execute more fish script.
Fixes the interactive case of #6649.
If a background process runs a fish function which launches another
background process, ensure that these background procs get different
pgroups. Add a test for it.
This executes `fish --no-execute` a whole bunch of times in order to
find syntax errors in our fish scripts.
tests/ is exempt because it contains syntax errors on purpose.
This is a great idea in principle, but it takes ~4s on my system.
It used to error out when a command wasn't known, even when it was a
function that would only be discovered via autoloading.
Now we just accept that a command doesn't exist when no-execute is
given - we're not gonna execute it anyway.
Also, in the same breath stop counting empty commands after expansion
and empty wildcard expansions as errors - these depend on runtime
values, so we can't verify them without executing.
Fixes #977.
(note that it still executes "time", but that's another commit)
Appending to an fd doesn't really make sense, but we allowed the
syntax previously and it was actually used.
It's not too harmful to allow it, so let's just do that again.
For the record: Zsh also allows it, bash doesn't.
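The syntax in question, as far as I can tell, is the append-redirect-to-fd form:
```fish
# behaves like >&2, but the append spelling is accepted again
echo "something went wrong" >>&2
```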
Fixes #6614
Glob ordering is used in a variety of places, including figuring out
conf.d and really needs to be stable.
Other ordering, like completions, is really just cosmetic and can
change if it makes for a nicer experience.
So we uncouple it by copying the wcsfilecmp from 3.0.2, which will
return the ordering to what it was in that release.
Fixes #6593
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
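A rough sketch of the pattern psub relies on (names are made up; this only makes sense inside a command substitution):
```fish
set -l tmpfile (mktemp)
# fires when the job that ran the enclosing command substitution exits
function __cleanup_tmpfile --on-job-exit caller --inherit-variable tmpfile
    command rm -f $tmpfile
end
```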
This one tests a bunch of separate stuff, so we put it into a few
different files.
The main, new one is "slices.fish", which tests various index expressions.
This makes two changes:
1. Remove the 'brace_text_start' idea. The idea of 'brace_text_start' was
to prevent emitting `BRACE_SPACE` at the beginning or end of an item. But
we later strip these off anyways, so there is no apparent benefit. If we
are not doing brace expansion, this prevented emitting whitespace at the
beginning or end of an item, leading to #6564.
2. When performing brace expansion, only stomp the space character with
`BRACE_SPACE`; do not stomp newlines and tabs. This is because, if the stomped
character came from a newline or tab literal, we would have effectively
replaced a newline or tab with a space, so this is important for #6564 as
well. Moreover, it is not easy to place a literal newline or tab inside a
brace expansion, and users who do probably do not mean for it to be
stripped, so I believe this is a good change in general.
Fixes #6564
Just another version of the error. We still want a bug report if it
ever triggers a *wrong* error, so we still list all the options
instead of going for `.*option:.*Z.*`.
Fixes #6554
Solaris/OpenIndiana/Illumos `rm` checks that and errors out.
In these cases we don't actually need it to be a part of $PWD as
it's just for cleanup, so we `cd` out before.
See #5472
See 1ee57e9244
Fixes #6555
Fixes #6558
Prior to this fix, fish was rather inconsistent in when $status gets set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
macOS `mktemp -d` likes to return symlinks. Guard against that possibility.
That allows the test to succeed when run directly, instead of through the
build target.
It was possible to start the new job and execute `jobs` again before
the job died (or we noticed it did), so the test would fail.
To properly test, we need to ensure the job has been removed. `wait`
should do it.
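Sketch of the ordering the test needs:
```fish
command sleep 0 &
# make sure the background job has been reaped before asking about it
wait
jobs  # should no longer list the sleep job
```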