Prior to this change, when emitting gap text (comments, newlines, etc),
fish_indent would use the indentation of the text at the end of the gap.
But this has the wrong result for this case:
begin
    command
    # comment
end
as the comment would get the indent of the 'end'. Instead use the indent
computed for the gap text itself.
Addresses one case of #7252.
It's useless - `expect` has a timeout anyway, and it defaults to 5s,
so these 0.5s sleeps just mean it'll always take at least 0.5s.
Sometimes it is useful to let things settle before *sending* text, and
it would be nice to be able to set the timeout for each expect
separately, but just adding to the timeout isn't useful.
This one sometimes fails with a zombie detected; I'm assuming it's
too fast for reaping to happen, so we add another 100ms sleep.
Yeah, this isn't great but...eh
This can be used to determine whether the previous command produced a real status, or just carried over the status from the command before it. Backgrounded commands and variable assignments will not increment status_generation; all other commands will.
This changes how fish attempts to protect itself from calling tcsetpgrp() too
aggressively. Recall that tcsetpgrp() will "force" itself, if SIGTTOU is
ignored (which it is in fish when job control is enabled).
Prior to this fix, we avoided SIGTTINs by only transferring the tty ownership
if fish was already the owner. This dated from a time before we had really
nailed down how pgroups should be assigned. Now we more deliberately assign a
job's pgroup so we don't need this conservative check.
However we still need logic to avoid transferring the tty if fish is not the
owner. The bad case is when job control is enabled while fish is running in the
background - here fish would transfer the tty and "steal" from the foreground
process.
So retain the checks of the current tty owner but migrate them to the point of
calling tcsetpgrp() itself.
Unfortunately this doesn't quite fix the issue with Pantheon
Terminal (#7913), as that somehow manages to re-set $VTE_VERSION by
the time littlecheck runs.
This switches fish_indent from parsing with parse_tree
to the new ast.
This is the most difficult transition because the new ast retains less
lexical information than the old parse tree. The strategy is:
1. Use parse_util_compute_indents to compute indenting for each token.
2. Compute the "gap text" between the text of significant tokens. This
contains whitespace, comments, etc.
3. "Fix up" the gap text while leaving the significant tokens alone.
I really kinda hate how insistent clang-format is on having line
breaks *IFF THE LINE IS TOO LONG*.
Like... lemme just add a break if it looks better, will you?
But it is the style at this time, so we shall tie an onion to our
belt.
A broken/missing optspec or `--` is a bug in the script using
argparse, an unknown option or invalid argument is a bug in using that script.
So in the former case print a stacktrace, because the person writing
the `argparse` call is at fault, in the latter don't.
Fixes #6703.
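A minimal sketch of the two cases (function names are made up):
```fish
# A missing '--' separator is a bug in the calling script,
# so argparse now prints a stack trace pointing at the offending call.
function buggy
    argparse h/help $argv    # the '--' separator was forgotten
end
# An unknown option from the user is reported as a plain error, without a trace.
function fine
    argparse h/help -- $argv
end
fine --bogus
```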
Note: This includes a super cheesy thing to print variable contents.
The expect version has one that's a bit more elaborate (featuring a
marker setup), but tbh that doesn't seem to be worth it.
If we do need it, we can add it, but it seems more likely we'd just do
`set -S`, or do it in a check instead.
With the new pexpect based framework, bind and pipeline expect tests can
be removed.
Amusingly the complete.fish check required the existence of bind.expect.
Fix the check at the same time.
Make it easier to use pexpect and to understand its error messages.
Switch to a style in tests using bound methods, which makes them
less noisy to write.
This adds a new interactive test framework based on Python's pexpect. This
is intended to supplant the TCL expect-based tests.
New tests go in `tests/pexpects/`. As a proof-of-concept, the
pipeline.expect test and the (gnarly) bind.expect test are ported to the
new framework.
There's a terrible number of fishscripts that start with
set path (dirname (status filename))
And that's really just a bit boring.
So let's let it be
set path (status dirname)
This is a function you can either execute once, interactively, or
stick in config.fish, and it will do the right thing.
Some options are included to choose some slightly different behavior,
like setting $PATH directly instead of $fish_user_paths, or moving
already existing components to the front/back instead of ignoring
them, or appending new components instead of prepending them.
The defaults were chosen because they are the most safe, and
especially because they allow it to be idempotent - running it again
and again and again won't change anything, it won't even run the
actual `set` because it skips that if all components are already in.
Fixes #6960.
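A usage sketch, assuming this is the fish_add_path function and these flag spellings:
```fish
fish_add_path /opt/mytool/bin          # prepend via $fish_user_paths, skipped if already present
fish_add_path --append ~/.local/bin    # append instead of prepending
fish_add_path --path --move /usr/sbin  # modify $PATH directly and move an existing entry
```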
Variables like $status and $history showed up in all scopes, including
universal, when querying with `set -q` or `set -S`.
This makes it so they all only count as set in global scope, because
we already only allow assignment to electric variables in global scope.
Fixes #7032
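A sketch of how the change shows up in scripts:
```fish
set -qg status; and echo '$status counts as set in global scope'
set -qU status; or echo '$status no longer shows up as universal'
```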
The default implementation will not print any output in that case, but this provides users with additional flexibility when it comes to customising the shell's behaviour.
This allows users to customise the behaviour of the shell by redefining the function. This is similar to how fish_title or fish_greeting behave, where the default implementation can be easily overridden.
The function receives as arguments the job id, command line, signal name and signal description.
Give string expansion an (optional) parent pgroup. This is threaded all
the way into eval(). This ensures that in a mixed pipeline like:
cmd | begin ; something (cmd2) ; end
that cmd2 and cmd have the same pgroup.
Add a test to ensure that command substitutions inherit pgroups
properly.
Fixes #6624
Changes it from
```
$fish_color_user: not set in local scope
$fish_color_user: set in global scope, unexported, with 1 elements
$fish_color_user[1]: length=3 value=|080|
$fish_color_user: set in universal scope, unexported, with 1 elements
$fish_color_user[1]: length=7 value=|brgreen|
```
(with the trailing empty line - not just a newline)
to
```
$fish_color_user: set in global scope, unexported, with 1 elements
$fish_color_user[1]: |080|
$fish_color_user: set in universal scope, unexported, with 1 elements
$fish_color_user[1]: |brgreen|
```
Prior to this fix, if job control is enabled but stdin is not a tty, we
would return an error from terminal_maybe_give_to_job which would cause us
to avoid waiting for the job. Instead just return notneeded.
Fixes #6573.
The description for an alias which already has escape sequences will
use backslash escapes for quoting; usually `string escape` can simply
quote it. Use a regex that accepts either escaping style.
Travis puts the commit message in an environment variable, so if it
contains the string `_flag` this would match TRAVIS_COMMIT_MESSAGE.
That happened in ca91c201c3, so the
tests failed.
We simply tighten the regex a little more, and make a commit message
that doesn't include the string.
This allows code of the form `if jobs -q $some_pid` in scripts to check whether a previously started job is still running. Previously this would return the correct value, but also print an error message.
The invalid argument errors will still be printed.
Added test cases for both.
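A sketch of the now-silent check ($last_pid is assumed to hold the backgrounded job's pid):
```fish
sleep 60 &
set -l pid $last_pid
if jobs -q $pid
    echo "job $pid is still running"
end
```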
Because `command ./somedir/somecommand` is okay.
Fixes test failure from aa304cbd3d.
Child directories in $PATH are still not suggested, as was the main
intention of the commit that introduced the tests:
8a3cf144f Don't include child directories of $PATH in completions.
We have now entirely switched the script tests to littlecheck.
Note: This adjusts the complete_directories test, because it removes a
directory that was created before by a .in test. There's no real
change in behavior.
This does require the test directory be cleaned, or the tests will fail.
test_util gets to stay for a while longer, because it sets up the
testing env (locale and such).
This, together with the other testX, really just tests some basic
syntax. So let's just call it "basic".
Note that this file uses escaped newlines on purpose, so restyling it
would currently break it. I'm not sure what the best thing to do here is.
Instead of invoking littlecheck.py independently for each file, pass
all files at once. This amortizes the Python startup cost, and reduces
the total test time by ~15 seconds (!).
This isn't quite the old-style test, but it checks some of the line
continuation stuff.
Note that littlecheck ignores leading whitespace, so testing the
actual indentation requires some more effort.
Things like
```fish
\
echo foo
```
or
```fish
echo foo; \
echo bar
```
are a formatting blunder and should be handled.
This makes it so the escaped newline is removed, and the
semicolon/token_type_end handling will then put the statements on
different lines.
One case this doesn't handle brilliantly is an escaped newline after a
pipe:
```fish
echo foo | \
cat
```
is turned into
```fish
echo foo | cat
```
which here works great, but in long pipelines can cause issues.
Pipes at the end of the line cause fish to continue parsing on the
next line, so this can just be written as
```fish
echo foo |
cat
```
for now.
This tries to see if quotes guard some expansion from happening. If it
detects a "weird" character it'll leave the quotes in place, even in
some cases where the expansion might not actually trigger.
So
for i in 'c' 'color'
turns into
for i in c color
The rationale here is that these quotes are useless, wasting
space (and line length), but more importantly that they are
superstitions. They don't do anything, but look like they do.
The counter argument is that they can be kept in case of later
changes, or that they make the intent clear - "this is supposed to be
a string we pass".
The problem is that under TSAN, the timing of signals becomes very weird and
exposes some real race conditions. We will need to re-design how signal
event handlers work.
This test launches two background processes and is sensitive to
interleaving of output. Fix it so that newlines are not output by
the background process.
Hopefully this fixes the flakiness of this test.
f8ba0ac5bf introduced a bug where INT handlers would themselves be
cancelled, due to the signal. Defer processing handlers until the
parser is ready to execute more fish script.
Fixes the interactive case of #6649.
If a background process runs a fish function which launches another
background process, ensure that these background procs get different
pgroups. Add a test for it.
This executes `fish --no-execute` a whole bunch of times in order to
find syntax errors in our fish scripts.
tests/ is exempt because it contains syntax errors on purpose.
This is a great idea in principle, but it takes ~4s on my system.
It used to error out when a command wasn't known, even when it was a
function that would only be discovered via autoloading.
Now we just accept that a command doesn't exist when no-execute is
given - we're not gonna execute it anyway.
Also, in the same breath stop counting empty commands after expansion
and empty wildcard expansions as errors - these depend on runtime
values, so we can't verify them without executing.
Fixes #977.
(note that it still executes "time", but that's another commit)
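A quick sketch of the resulting workflow (the script path is hypothetical):
```fish
fish --no-execute path/to/script.fish
or echo "syntax error found"
```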
Appending to an fd doesn't really make sense, but we allowed the
syntax previously and it was actually used.
It's not too harmful to allow it, so let's just do that again.
For the record: Zsh also allows it, bash doesn't.
Fixes #6614
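A sketch, assuming this `>>&fd` form is the syntax in question; it simply behaves like a plain fd redirection:
```fish
echo "warning" >>&2
```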
Glob ordering is used in a variety of places, including figuring out
conf.d and really needs to be stable.
Other ordering, like completions, is really just cosmetic and can
change if it makes for a nicer experience.
So we uncouple it by copying the wcsfilecmp from 3.0.2, which will
return the ordering to what it was in that release.
Fixes #6593
The `function --on-job-exit caller` feature allows a command substitution
to observe when the parent job exits. This has never worked very well - in
particular it is based on job IDs, so a function that observes this will
run multiple times. Implement it properly.
Do this by having a not-recycled "internal job id".
This is only used by psub, but ensure it works properly nonetheless.
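A sketch of the observer this enables (function name made up; it only makes sense inside a command substitution, which is how psub uses it):
```fish
function _on_parent_exit --on-job-exit caller
    echo "parent job finished" >&2
    functions -e _on_parent_exit
end
```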
This one tests a bunch of separate stuff, so we put it into a few
different files.
The main, new one is "slices.fish", which tests various index expressions.
This makes two changes:
1. Remove the 'brace_text_start' idea. The idea of 'brace_text_start' was
to prevent emitting `BRACE_SPACE` at the beginning or end of an item. But
we later strip these off anyways, so there is no apparent benefit. If we
are not doing brace expansion, this prevented emitting whitespace at the
beginning or end of an item, leading to #6564.
2. When performing brace expansion, only stomp the space character with
`BRACE_SPACE`; do not stomp newlines and tabs. This is because, if the stomped
character came from a newline or tab literal, we would have effectively
replaced a newline or tab with a space, so this is important for #6564 as
well. Moreover, it is not easy to place a literal newline or tab inside a
brace expansion, and users who do probably do not mean for it to be
stripped, so I believe this is a good change in general.
Fixes #6564
Just another version of the error. We still want to get a bug report if it
ever triggers a *wrong* error, so we still list all the options
instead of going for `.*option:.*Z.*`.
Fixes #6554
Solaris/OpenIndiana/Illumos `rm` checks that and errors out.
In these cases we don't actually need it to be a part of $PWD as
it's just for cleanup, so we `cd` out before.
See #5472
See 1ee57e9244
Fixes #6555
Fixes #6558
Prior to this fix, fish was rather inconsistent in when $status gets set
in response to an error. For example, a failed expansion like "$foo["
would not modify $status.
This makes the following inter-related changes:
1. String expansion now directly returns the value to set for $status on
error. The value is always used.
2. parser_t::eval() now directly returns the proc_status_t, which cleans
up a lot of call sites.
3. We expose a new function exec_subshell_for_expand() which ignores
$status but returns errors specifically related to subshell expansion.
4. We reify the notion of "expansion breaking" errors. These include
command-not-found, expand syntax errors, and others.
The upshot is we are more consistent about always setting $status on
errors.
macOS `mktemp -d` likes to return symlinks. Guard against that possibility.
That allows the test to succeed when run directly, instead of through the
build target.
It was possible to start the new job and execute `jobs` again before
the job died (or we noticed it did), so the test would fail.
To properly test, we need to ensure the job has been removed. `wait`
should do it.
complete -C'echo $HOM ' would complete $HOM instead of a new token.
Fixes another regression introduced in
6fb7f9b6b - Fix completion for builtins with subcommands
It's now good enough to do so.
We don't allow grid-alignment:
```fish
complete -c foo -s b -l barnanana -a '(something)'
complete -c foo -s z              -a '(something)'
```
becomes
```fish
complete -c foo -s b -l barnanana -a '(something)'
complete -c foo -s z -a '(something)'
```
It's just more trouble than it is worth.
The one part I'd change:
We align and/or'd parts of an if-condition with the in-block code:
```fish
if true
and false
    dosomething
end
```
becomes
```fish
if true
    and false
    dosomething
end
```
but it's not used terribly much and if we ever fix it we can just
reindent.
for-loops that were not inside a function could overwrite global
and universal variables with the loop variable. Avoid this by making
for-loop-variables local variables in their enclosing scope.
This means that if someone does:
set a global
for a in local; end
echo $a
The local $a will shadow the global one (but not be visible in child
scopes). Which is surprising, but less dangerous than the previous
behavior.
The detection of whether the loop is running inside a function was failing
inside command substitutions. Remove this special handling of functions
altogether; it's not needed anymore.
Fixes #6480
Store the entire function declaration, not just its job list.
This allows us to extract the body of the function complete with any
leading comments and indents.
Fixes #5285
In particular, this allows `true && time true`, or `true; and time true`,
and both `time not true` as well as `not time true` (like bash).
time is valid only as job _prefix_, so `true | time true` could call
`/bin/time` (same in bash)
See discussion in #6442
Extend the commit 8e17d29e04 to block processes, for example:
begin ; stuff ; end
or if/while blocks as well.
Note there's an existing optimization where we do not create a job for a
block if it has no redirections.
job_promote attempts to bring the most recently "touched" job to the front
of the job list. It did this via:
std::rotate(begin, job, end)
However this has the effect of pushing job-1 to the end. That is,
promoting '2' in [1, 2, 3] would result in [2, 3, 1].
Correct this by replacing it with:
std::rotate(begin, job, job+1);
now we get the desired [2, 1, 3].
Also add a test.
It looks like the last status already contains the signal that cancelled
execution.
Also make `fish -c something` always return the last exit status of
"something", instead of hardcoded 127 if exited or signalled.
Fixes #6444
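A sketch of the new `fish -c` behavior:
```fish
fish -c 'exit 42'
echo $status    # 42, rather than a hardcoded 127
```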
This was previously required so that, if there was a redirection to a
file, we would fork a process to create the file even if there was no
output. For example `echo -n >/tmp/file.txt` would have to create
file.txt even though it would be empty.
However now we open the file before fork, so we no longer need special
logic around this.
Do this only when splitting on IFS characters which usually contains
whitespace characters --- read --delimiter is unchanged; it still
consumes no more than one delimiter per variable. This seems better,
because it allows arbitrary delimiters in the last field.
Fixes #6406
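A sketch of the resulting default IFS splitting (runs of whitespace act as one separator; the last variable takes the remainder verbatim):
```fish
echo 'a  b c' | read -l x y
echo $x    # a
echo $y    # b c
```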
This adds a test for the obscure case where an fd is redirected to
itself. This is tricky because the dup2 will not clear the CLOEXEC bit.
So do it manually; also posix_spawn can't be used in this case.
The IO cleanup left file redirections open in the child. For example,
/bin/cmd < file.txt would redirect stdin but also leave the file open.
Ensure these get closed properly.
Prior to this fix, a job would hold onto any IO redirections from its
parent. For example:
begin
    echo a
end < file.txt
The "echo a" job would hold a reference to the I/O redirection.
The problem is that jobs then extend the life of pipes until the job is
cleaned up. This can prevent pipes from closing, leading to hangs.
Fix this by not storing the block IO; this ensures that jobs do not
prolong the life of pipes.
Fixes #6397
(command pwd) uses the system's implementation of pwd. At least the GNU
coreutils implementation defaults to -P, which resulted in symlinks being
expanded when switching between directories with nextd/prevd.
This did some weird unescaping to try to extract the first word.
So we're now more likely to be *correct*, and the alias benchmark is
about 20% *faster*.
Call it a win-win.
This splits a string into variables according to the shell's
tokenization rules, considering quoting, escaping etc.
This runs an automatic `unescape` on the string so it's presented like
it would be passed to the command. E.g.
printf '%s\n' a\ b
returns the tokens
printf
%s\n
a b
It might be useful to add another mode "--tokenize-raw" that doesn't
do that, but this seems to be the more useful of the two.
Fixes #3823.
This added the function offset *again*, but it's already included in
the line for the current file.
And yes, I have explicitly tested a function file with a function
defined at a later line.
Fixes #6350
Since #6287, bare variable assignments do not parse, which broke
the "Unsupported use of '='" error message.
This commit catches parse errors that occur on bare variable assignments.
When a statement node fails to parse, then we check if there is at least one
prefixing variable assignment. If so, we emit the old error message.
See also #6347
This adds initial support for statements with prefixed variable assignments.
Statements like this are supported:
a=1 b=$a echo $b # outputs 1
Just like in other shells, the left-hand side of each assignment must
be a valid variable identifier (no quoting/escaping). Array indexing
(PATH[1]=/bin ls $PATH) is *not* yet supported, but can be added fairly
easily.
The right hand side may be any valid string token, like a command
substitution, or a brace expansion.
Since `a=* foo` is equivalent to `begin; set -lx a *; foo; end`,
the assignment, like `set`, uses nullglob behavior, e.g. the command below
can safely be used to check if a directory is empty.
x=/nothing/{,.}* test (count $x) -eq 0
Generic file completion is done after the equal sign, so for example
pressing tab after something like `HOME=/` completes files in the
root directory.
Subcommand completion works, so something like
`GIT_DIR=repo.git and command git ` correctly calls git completions
(but the git completion does not use the variable as of now).
The variable assignment is highlighted like an argument.
Closes #6048
Presently the completion engine ignores builtins that are part of the
fish syntax. This can be a problem when completing a string that was
based on the output of `commandline -p`. This changes completions to
treat these builtins like any other command.
This also disables generic (filename) completion inside comments and
after strings that do not tokenize.
Additionally, comments are stripped off the output of `commandline -p`.
Fixes #5415
Fixes #2705
PATH and CDPATH have special behavior around empty elements. Express this
directly in env_stack_t::set rather than via variable dispatch; this is
cleaner.
This adds support for `fish_trace`, a new variable intended to serve the
same purpose as `set -x` in bash. Setting this variable to anything
non-empty causes execution to be traced. In the future we may give more
specific meaning to the value of the variable.
The user's prompt is not traced unless you run it explicitly. Events are
also not traced because that would be noisy; autoloading, however, is.
Fixes #3427
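A minimal sketch of turning tracing on for a stretch of script (the traced command is hypothetical):
```fish
set -g fish_trace 1
some_command    # everything from here on is traced to stderr
set -e fish_trace
```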
Until now, something like
`math '7 = 2'`
would complain about a "missing" operator.
Now we print an error about logical operators not being supported and
point the user towards `test`.
Fixes #6096
The `--entire` would enable output even though the `--quiet` should
have silenced it. These two don't make any sense together so print an
error, because the user could have just left off the `-q`.
This spewed errors because the `math` invocation got no second
operand:
Testing file checks/sigint.fish ... math: Error: Too few arguments
'1571487730 -'
but only if the `date` didn't do milliseconds, which is the case on
FreeBSD and NetBSD.
(also force the variable to be global - we don't want to have a
universal causing trouble here)
Universal exported variables (created by `set -xU`) used to show up
both as universal and global variable in child instances of fish.
As a result, when changing an exported universal variable, the
new value would only be visible after a new login (or deleting the
variable from global scope in each fish instance).
Additionally, something like `set -xU EDITOR vim -g` would be imported
into the global scope as a single word resulting in failures to
execute $EDITOR in fish.
We cannot simply give precedence to universal variables, because
another process might have exported the same variable. Instead, we
only skip importing a variable when it is equivalent to an exported
universal variable with the same name. We compare their values after
joining with spaces, hence skipping those imports does not change the
environment fish passes to its children. Only the representation in
fish is changed from `"vim -g"` to `vim -g`.
Closes #5258.
This eliminates the issue #5348 for universal variables.
Consider a group of short options, like -xzPARAM, where x and z are options and z takes an argument.
This commit enables completion of the argument to the last option (z), both within the same
token (-xzP) and in the next one (-xz P).
complete -C'-xz' will complete only parameters to z.
complete -C'-xz ' will complete only parameters to z if z requires a parameter
otherwise, it will also complete non-option parameters
To do so this implements a heuristic to differentiate such strings from single long options. To
detect whether our token contains some short options, we only require the first character after the
dash (here x) to be an option. Previously, all characters had to be short options. The last option
in our example is z. Everything after the last option is assumed to be a parameter to the last
option.
Assume there is also a single long option -x-foo, then complete -C'-x' will suggest both -x-foo and
-xy. However, when the single option x requires an argument, this will not suggest -x-foo.
However, I assume this will almost never happen in practice since completions very rarely mix
short and single long options.
Fixes #332
This stops reading argument names after another option appears. It does not break any previous uses and in fact fixes uses like
```fish
function foo --argument-names bar --description baz
```
* `function` command handles options after argument names (Fixes #6186)
* Removed unnecessary test
I got tired of seeing ' ... ok (0 sec)' so now with GNU date/gdate
installed there is millisecond output shown. One can get rough
nanoseconds from gdate.
This sequence can be generated by control-spacebar. Allow it to be bound
properly.
To do this we must be sure that we never round-trip the key sequence
through a C string.
Meaning empty variables, or command substitutions that don't print
anything.
A switch without an argument
```fish
switch
case ...
end
```
is still a syntax error, and more than one argument is still a runtime
error.
The no-argument case matches either an empty-string `case ''` or a
catch-all `case '*'`.
Fixes #5677.
Fixes #4943.
This test uses universal variables, and so it can fail when run
multiple times.
It might be a good idea to do this in general, but for now let's just
try it here.
Previously when propagating explicitly separated output, we would early-out
if the buffer was empty, where empty meant contains no characters. However
it may contain one or more empty strings, in which case we should propagate
those strings.
Remove this footgun "empty" function and handle this properly.
Fixes #5987
I tested this manually (`littlecheck.py -s fish=fish tests/checks/eval.fish`) from the base directory, which means I got
"tests/checks/eval", while the real test gets "checks/eval".
I then reran `make test_fishscript`, but that didn't pull in the
updated test - we should really handle that better.
I'm gonna add more tests to this and I don't want to touch the old stuff.
Notice that this needs to have the output of the complete_directories
test adjusted because this one now runs later.
That's something we should take into account in future.
history now often writes to the history file asynchronously, but the history
test expects to find the text in the file immediately after running the
command. Hack a bit in history to make this test more reliable.
This required a bit of thinking.
What we do is we have one test that fakes $HOME, and then we do the
various config tests there.
The fake config we have is reused and we exercise all of the same codepaths.
This prints a green "ok" with the duration, just like the rest of the
tests.
Note that this clashes a bit with
https://github.com/ridiculousfish/littlecheck/pull/3.
(also don't check for python again and again and again)
This is a bit weird sometimes, e.g. to test the return status (that
fish actually *returns $status*), we use a #RUN line with %fish
invoking %fish, so we can use the substitution.
Still much nicer.
The missing scripts are those that rely on config.
Especially as, in this case, the documentation is quite massive.
Caught by porting string's test to littlecheck.
See #3404 - this was already supposed to be included.
This is a nice test (ha!) for how this works and what littlecheck can
do for us.
1. Input is now the actual file, not "Standard Input" anymore. So
any errors mentioning that now include the filename.
2. Regex are really nice for filenames, but especially for line
numbers
3. It's much nicer to have the output where it's created, instead of
needing to follow three files at the same time.
Instead of requiring a flag to enable newline trimming, invert it so the
flag (now `--no-trim-newlines`) disables newline trimming. This way our
default behavior matches that of sh's `"$(cmd)"`.
Also change newline trimming to trim all newlines instead of just one,
again to match sh's behavior.
The `string collect` subcommand behaves quite similarly in practice to
`string split0 -m 0` in that it doesn't split its output, but it also
takes an optional `--trim-newline` flag to trim a single trailing
newline off of the output.
See issue #159.
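A minimal sketch of what collecting does in a command substitution:
```fish
set -l out (printf '%s\n' a b | string collect)
count $out    # 1: the output stays a single argument, inner newline preserved
```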
This adds support for .check files inside the tests directory. .check
files are tests designed to be run with littlecheck.
Port printf test to littlecheck and remove the printf.in test.
It's always a bit annoying that `*` requires quoting.
So we allow "x" as an alternative, only it needs to be followed by
whitespace to distinguish it from "0x" hexadecimal notation.
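A sketch of the whitespace rule described above:
```fish
math '5 x 3'    # 15
math 0x10       # 16, still read as hexadecimal
```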
This makes the following changes:
1. Events in background threads are executed in those threads, instead of
being silently dropped
2. Blocked events are now per-parser instead of global
3. Events are posted in builtin_set instead of within the environment stack
The last one means that we no longer support event handlers for implicit
sets like (example) argv. Instead only the `set` builtin (and also `cd`)
post variable-change events.
Events from universal variable changes are still not fully rationalized.
When setting a variable without a specified scope, we should give priority
to an existing local or global above an existing universal variable with
the same name.
In 16fd780484 there was a regression that
made universal variables have priority.
Fixes #5883
Brace expansion with single words in it is quite useless - `HEAD@{0}`
expanding to `HEAD@0` breaks git.
So we complicate the rule slightly - if there is no variable expansion
or "," inside of braces, they are just treated as literal braces.
Note that this is technically backwards-incompatible, because
echo foo{0}
will now print `foo{0}` instead of `foo0`. However that's a
technicality because the braces were literally useless in that case.
Our tests needed to be adjusted, but that's because they are meant to
exercise this in weird ways.
I don't believe this will break any code in practice.
Fixes #5869.
This read something like `o=!_validate_int`, and the flag modifier
reading kept the pointer after the `!`, so it created a long flag
called `_validate_int`, which meant it would not only error out from
```fish
argparse 'i=!_validate_int' 'o=!_validate_int' -- $argv
```
with "Long flag '_validate_int' already defined", but also set
$_flag_validate_int.
Fixes #5864.
As mentioned in #2900, something like
```fish
test -n "$var"; and set -l foo $var
```
is sufficiently idiomatic that it should be allowable.
Also fixes some additional weirdness with semicolons.
This removes semicolons at the end of the line and collapses
consecutive ones, while replacing meaningful semicolons with newlines.
I.e.
```fish
echo;
```
becomes
```fish
echo
```
but
```fish
echo; echo
```
becomes
```fish
echo
echo
```
Fixes #5859.
This keeps all unknown options in $argv, so
```fish
argparse -i a/alpha -- -a banana -o val -w
```
results in $_flag_a set to banana, and $argv set to `-o val -w`.
This allows users to use multiple argparse passes, or to simply avoid
specifying all options e.g. in completions - `systemctl` has 46 of
them, most not having any effect on the completions.
Fixes #5367.
This is a long-standing issue with how `complete --do-complete` does
its argument parsing: It takes an optional argument, so it has to be
attached to the token like `complete --do-complete=foo` or (worse)
`complete -Cfoo`.
But since `complete` doesn't take any bare arguments otherwise (it
would error with "too many arguments" if you did `complete -C foo`) we
can just take one free argument as the argument to `--do-complete`.
It's more of a command than an option anyway, since it entirely
changes what the `complete` call _does_.
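A sketch of the now-equivalent spellings:
```fish
complete -C 'git chec'
# same as the older attached forms:
complete -C'git chec'
complete --do-complete='git chec'
```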
Prior to this fix, a job would only inherit a pgrp from its parent if the
first command were external. There seems to be no reason for this
restriction and this causes tcsetpgrp() churn, potentially causing SIGTTIN.
Switch to unconditionally inheriting a pgrp from parents.
This should fix most of #5765, the only remaining question is
tcsetpgrp from builtins.
Pursuant to 0be7903859, there still
remained one issue with the test when run from within a symlinked
directory after fish gained support for cding into symlinks.
This change should make the test function OK both when the tests are run
out of a PWD containing a symlink in its hierarchy and when run
otherwise.
The final test in `realpath.in` was based on the no-longer-valid
assumption that $PWD cannot be a symlink. Since the recent changes in
fish 3.0 to allow `cd`ing into "virtual" directories preserving symlinks
as-is, when `make test` was run from a path that contained a symlink
component, this test would fail the `pwd-resolved-to-itself` check.
The test was not designed to initialize and then cd into an absolute path
guaranteed not to be symbolic, so this final check is just wrong.
This has been driving me nuts for years. The output of the diff emitted
when a test fails was always reversed, because the diff tool is called
with `${difftool} ${new} ${old}` so all the `-` and `+` contexts are
reversed, and the highlights are all screwed up.
The output of a `make test` run should show what has changed from the
baseline/expected, not how the expected differs from the actual. When
considered from both the perspective of intentional changes to the test
outputs and failed test outputs, it is desirable to see how the test
output has changed from the previously expected, and not the other way
around.
(If you were used to the previous behavior, I apologize. But it was
wrong.)
This was printed basically everywhere.
The user knows what they executed on standard input.
A good example:
```fish
set c (subme 513)
```
used to print
```
fish: Too much data emitted by command substitution so it was discarded
set -l x (string repeat -n $argv x)
^
in function 'subme'
called on standard input
with parameter list '513'
in command substitution
called on standard input
```
and now it is
```
fish: Too much data emitted by command substitution so it was discarded
set -l x (string repeat -n $argv x)
^
in function 'subme' with arguments '513'
in command substitution
```
See #5434.
This printed things like
```
in function 'f'
called on standard input

in function 'd'
called on standard input

in function 'b'
called on standard input

in function 'a'
called on standard input
```
As a first step, it removes the empty lines so it's now
```
in function 'f'
called on standard input
in function 'd'
called on standard input
in function 'b'
called on standard input
in function 'a'
called on standard input
```
See #5434.
If a function process is deferred, allow it to be unbuffered.
This permits certain simple cases where functions are piped to external
commands to execute without buffering.
This is a somewhat-hacky stopgap measure that can't really be extended
to more general concurrent processes. However it is overall an improvement
in user experience that might help flush out some bugs too.
POSIX dictates here that incomplete conversions, like in
printf %d\n 15.2
or
printf %d 14g
are still printed along with any error.
This seems alright, as it allows users to silence stderr to accept incomplete conversions.
This commit implements it, but what's a bit weird is the ordering between stdout and stderr,
causing the error to be printed _after_, like
15
14
15.1: value not completely converted
14,2: value not completely converted
but that seems like a general issue with how we buffer the streams.
(I know that nonfatal_error is a copy of most of fatal_error - I tried
differently, and va_* is weird)
Fixes#5532.
Before this change, - was sorted with other punctuation before
A-Z. Now, it sorts above the rest of the characters.
This has a practical effect on completions, where when there are
both -s and --long with the same description, the short option
is now before the long option in the pager, which is what is now
selected when navigating `foo -<TAB>`. The long options can be
picked out with `foo --<TAB>`. Before, short options which
duplicated a long option literally could not be selected by
any means from the pager.
Fixes #5634
This tweaks wcsfilecmp such that certain punctuation characters will
come after A-Z.
A big win with `set <TAB>` - the __prefixed fish junk now comes
after the stuff users should care about.
This disables an extra round of escaping in the `string replace -r`
replacement string.
Currently, to add a backslash to an a or b (to "escape" it):
string replace -ra '([ab])' '\\\\\\\$1' a
7 backslashes!
This removes one of the layers, so now 3 or 4 works (each one escaped
for the single-quotes, so pcre receives two, which it reads as one literal):
string replace -ra '([ab])' '\\\\$1' a
This is backwards-incompatible as replacement strings will change
meaning, so we put it behind a feature flag.
The name is kinda crappy, though.
Fixes #5474.
As a simple replacement for `wc -l`.
This counts both lines on stdin _and_ arguments.
So if "file" has three lines, then `count a b c < file` will print 6.
And since it counts newlines, like wc, `echo -n foo | count` prints 0.
It turns out that `string split0` didn't actually ever do any
splitting. The arg_iterator_t already split stdin on NUL, and split0 just
performed an additional search that could never succeed (since
arguments from argv already can't contain NUL).
Let the arg_iterator_t not perform any splitting if asked, and then
let split0 split in 0.
One slight wart is that split0 ignores a trailing NUL, which normal
split doesn't.
Fixes #5701.
In a galaxy far, far away, event_blockage_t was intended to block only certain
events. But it always just blocked everything. Eliminate the event block
mask.
On some systems, this sometimes uses unicode quotation marks.
Not on mine, but on Travis it does.
The only other workaround I can think of is setting locale to C, but
that implies not being able to test anything unicode-related in the
entire invocation tests.
So for now disable this test.
Illumos/OpenIndiana/SunOS/Solaris has an rm/rmdir that tries to
protect the user by not allowing them to delete $PWD.
Normally, this would be a good thing as deleting $PWD is a stupid
thing to do. Except in this case, we absolutely need to do that.
So instead we weasel around it by invoking an sh to cd out of the
directory to then invoke an `rmdir` to delete it. That should throw
off any attempts at protection (we could also have tried $PWD/. or
similar, but that's possibly still protected against).
This is the last failing test on
Illumos/OpenIndiana/SunOS/Solaris/afunnyquip, so:
Fixes #5472.
This tested #1728, where redirecting a directory (`begin; something;
end < .`) would cause `status` to misbehave.
Unfortunately, on Illumos/OpenIndiana/SunOS, this returns a different
error (EINVAL instead of EISDIR), so we can't check that with our test harness, because
we can't redirect it.
Since it's not important that this gives the same error across
systems (and indeed we provide no way of intercepting the error!),
use an invocation test instead, because that allows different output per-uname.
See #5472.
300ms was waaay too long, and even 100ms wasn't necessary.
Emacs' evil mode uses 10ms (0.01s), so let's stay a tad higher in case
some terminals are slow.
If anyone really wants to be able to type alt+h with escape, let them
raise the timeout.
Fixes #3904.
This is a large change to how io_buffers are filled. The essential problem
comes about with code like (example):
echo ( /bin/pwd )
The output of /bin/pwd must go to fish, not the tty. To arrange for this,
fish does the following:
1. Invoke pipe() to create a pipe.
2. Add an io_bufferfill_t redirection that owns the write end of the pipe.
3. After fork (or equiv), call dup2() to replace pwd's stdout with this pipe.
Now when /bin/pwd writes, its output goes into the pipe.
But who reads it?
Prior to this fix, fish would do the following in a loop:
1. select() on the pipe with a 10 msec timeout
2. waitpid(WNOHANG) on the pwd proc
This polling is ugly and confusing and is what is replaced here.
With this new change, fish now reads from the pipe via a background thread:
1. Spawn a background pthread, which select()s on the pipe's read end with
a long (100 msec) timeout.
2. In the foreground, waitpid() (allowing hanging) on the pwd proc.
The big win here is a major simplification of job_t::continue_job() since
it no longer has to worry about filling buffers. This will make things
easier for concurrent execution.
It may not be obvious why the background thread still needs a poll (100 msec).
The answer is for cases where the write end of the fd escapes, in particular
background processes invoked inside command substitutions. psub is perhaps
the only important case of this (other shells typically just hang here).
Using printf like
printf "The message"
is unsafe, because if the message contains any formatting characters,
they'll be interpreted.
In this case it's not all that important because the message contains
only filenames of our tests and static strings, but still.
A while loop now evaluates to the last executed command in the body, or
zero if the loop body is empty. This matches POSIX semantics.
Add a bunch of tricky tests.
See #4982
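A minimal sketch of the empty-body case:
```fish
while false
end
echo $status    # 0: the body never ran
```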
For some reason, we have two places where a variable can be read-only:
- By key in env.cpp:is_read_only(), which is checked via set*
- By flag on the actual env_var_t, which is checked e.g. in
parse_execution
The latter didn't happen for non-electric variables like hostname,
because they used the default constructor, because they were
constructed via operator[] (or some such C++-iness).
This caused for-loops to crash on an assert if they used a
non-electric read-only var like $hostname or $SHLVL.
Instead, we explicitly set the flag.
We might want to remove one of the two read-only checks, or something?
Fixes #5548.
There's just waaayy too many things that could go wrong with it, so it
annoys more than it helps, especially since we don't get any
indication what failed.
E.g. on FreeBSD, the test failed without a usable message just because
`tput` couldn't find an attribute (so colors were unset).
The one thing I was missing:
`echo -n` isn't POSIX. In practice, it appears the only shell to encounter this
is macOS' crusty old bash in sh-mode. Just replace it with `touch`.
This reverts commit fc5e8f9fec.
This makes the script worse, but it's good enough.
The required changes are:
- `shopt -s nullglob`, which we simply don't use (we have one glob, but that's
guaranteed to match because we ship the files)
- One array, which we replace with a direct use of the glob (plus it
used `echo` again?)
- The `function` word, which I'm still annoyed is even a thing!
- Variable indirection (`color=${!color_var}` - instead we pass the
value directly - which makes the script uglier!)
- One array, which we replace with a function
- A use of `type -t`, replaced with `command -v`
- A use of `${var:begin:end}` substring expansion, replaced with trickery.
- `set -o pipefail` is replaced with a function
Note that checkbashisms still complains about `command -v`, because
we're not using it with "-p". But we _want_ to check the current
$PATH, and `command -v` is POSIX.
This still uses `local`, which technically isn't in POSIX.
The tests now appear to pass in:
- bash
- dash
- zsh
- mksh
- busybox
Rather than killing the process with close, read EOF after sending the
"exit" command and wait for OS cleanup (per the expect examples).
Not cleaning up with wait caused expect to crash on all 32-bit platforms
including i586 and armv7l with "alloc: invalid block: 0xbf993ccb: 3d 3b".
64-bit platforms were not affected, for reasons that are not clear.
Turns out busybox diff (used on alpine) defaults to unified output,
which we can't use because that prints filenames, and those are
tempfiles made by psub.
Instead, we use builtins to print the first line and compare the others.
This isn't all that important, and it breaks on musl just because the message is different.
Just skip it for now, until we figure out how to better test this.
This did `set TERM`; if $TERM is inherited it is already exported,
but not if it isn't.
This is the case on sr.ht's arch images, so we failed without a TERM variable.
Return STATUS_INVALID_ARGS when failing due to evaluation errors,
so we can tell the difference between an error and falseness.
Add a test for the ERANGE error
This previously used /dev/tty to make sure we have `source` connected
to a terminal. Only, as it turns out, FreeBSD doesn't have that (https://builds.sr.ht/~faho/job/15308).
So instead, let's just use the expect tests since stdin there is by
definition a terminal.
This broke fishtape, which did
somestuff | fish -c "source"
Because `source` didn't have a redirection, it refused to read from
stdin.
So, to keep the common issue of `source (command that does not print)`
from seemingly stopping fish, we instead actually check if stdin is a terminal.
The colors happening for the interactive tests didn't match the
expected output. For `history search` commands we test, have them
pipe through `cat` so the fishscript does not use a pager or try
to colorize.
realpath() returns NULL and sets errno if it fails.
We asserted that realpath(".") does not fail. We also didn't really
check that it was successful. Made sure we'll get a perror telling
us about what went wrong if something like this happens again.
Updated tests and added test case
Fixes #5351
Prior to this fix, cding into a symlink and then completing .. would complete
from the physical directory instead of the logical directory, which could not
actually be cd'd to. Teach cd completion to use the logical directory.
For things like
source $undefined
or
source (nooutput)
it was quite annoying that it read from tty.
Instead we now require a "-" as the filename to read from the tty.
This does not apply to reading from stdin if it's redirected, so
something | source
still works.
Fixes #2633.
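A sketch of the resulting behavior:
```fish
echo 'echo hi' | source    # redirected stdin still works without a dash
source -                   # reading from the terminal now needs an explicit '-'
```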
This adds flags --path and --unpath to builtin set, analogous to
--export and --unexport. These flags change whether a variable is
marked as a path variable.
Universal variables cannot yet be path variables.
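A minimal sketch (the variable name is made up):
```fish
set -gx --path MYTOOL_DIRS /opt/a /opt/b    # marked as a path variable
set --unpath MYTOOL_DIRS                    # and unmarked again, keeping its value
```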
This switches quoted expansion like "$foo" to use foo's delimiter instead of
space. The delimiter is space for normal variables and colon for path variables.
Expansions like "$PATH" will now expand using ':'.
This commit begins to bake in a notion of path-style variables.
Prior to this fix, fish would export arrays as ASCII record separator
delimited, except for a whitelist (PATH, CDPATH, MANPATH). This is
surprising and awkward for other programs to deal with, and there's no way
to get similar behavior for other variables like GOPATH or LD_LIBRARY_PATH.
This commit does the following:
1. Exports all arrays as colon delimited strings, instead of RS.
2. Introduces a notion of "path variable." A path variable will be
"colon-delimited" which means it gets colon-separated in quoted expansion,
and automatically splits on colons. In this commit we only do the exporting
part.
Colons are not escaped in exporting; this is deliberate to support uses
like
`set -x PYTHONPATH "/foo:/bar"`
which ought to work (and already do, we don't want to make a compat break
here).
This switches fish to a "virtual" PWD, where it no longer uses getcwd to
discover its PWD but instead synthesizes it based on normalizing cd against
the $PWD variable.
Both pwd and $PWD contain the virtual path. pwd is taught about -P to
return the physical path, and -L the logical path (which is the default).
Fixes #3350
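A sketch of the two views (the symlinked path is hypothetical):
```fish
cd /tmp/somelink    # assume this is a symlink to another directory
pwd                 # /tmp/somelink, the logical path (the default, same as $PWD)
pwd -P              # the physical directory the link resolves to
```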
Mostly resolves #4862, though there remains the lingering question of
whether or not to emit a warning to /dev/tty or stderr when a
non-literal-zero index evaluates to zero.
This allows for marking certain bindings as part of a preset, which allows us to
- only erase those when switching presets
- go back to the preset binding when erasing a user binding
- only show user customization if requested
- make bare bind statements in config.fish work (!!!11elf!!!)
Fixes #5191.
Fixes #3699.
- Add support for:
  - Jumping to the character before a target.
  - Repeating the previous jump (same direction, same precision).
  - Repeating the previous jump in the reverse order.
- Enhance vi bindings.
When running a builtin, if we are an interactive shell and stdin is a tty,
then acquire ownership of the terminal via tcsetpgrp() before running the
builtin, and set it back after.
Fixes #4540
This changes the behavior of builtin math to floating point by default.
If the result of a computation is an integer, then it will be printed as an
integer; otherwise it will be printed as a floating point decimal with up to
'scale' digits past the decimal point (default is 6, matching printf).
Trailing zeros are trimmed. Values are rounded following printf semantics.
Fixes #4478
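A sketch of the new defaults (assuming the --scale spelling for the scale option):
```fish
math 10 / 4             # 2.5
math 10 / 5             # 2, integer results print as integers
math --scale 2 10 / 3   # 3.33
```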
This adds a new string command split0, which splits on zero bytes.
split0 has superpowers because its output is not further split on
newlines when used in command substitutions.
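A sketch of the typical use with NUL-delimited output:
```fish
set -l files (find . -name '*.txt' -print0 | string split0)
count $files    # one element per path, even if a filename contains a newline
```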
If just one of the range ends is negative, this now forces direction away from it.
I.e. if the beginning is negative, we go in reverse.
If the end is negative, we go forwards.
This fixes cases like
$var[2..-1]
if $var only has one element.
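A sketch of the fixed case (assuming the empty expansion is the intended result here):
```fish
set -l var one
echo $var[2..-1]    # now expands to nothing instead of treating 2..1 as a reversed range
```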
This partially reverts 5b489ca30f, with
carets acting as redirections unless the stderr-nocaret flag is set.
This flag is off by default but may be enabled on the command line:
fish --features stderr-nocaret
This introduces a new command line option --features which can be used for
enabling or disabling features for a particular fish session.
Examples:
fish --features stderr-nocaret
fish --features 3.0,no-stderr-nocaret
fish --features all
Note that the feature set cannot be changed in an existing session.
This teaches the status command to work with features.
'status features' will show a table listing all known features and whether
they are currently on or off.
`status test-feature` will test an individual feature, setting the exit status to
0 if the feature is on, 1 if off, 2 if unknown.
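A short usage sketch:
```fish
status features                      # table of all feature flags and their state
status test-feature stderr-nocaret
and echo "caret redirection is disabled"
```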
`read` with IFS empty was expected to set all parameters after the first
n filled variables to an empty string, but that was inconsistent with
the behavior of `read` everywhere else.
I'm not sure why fish differed from the spec with regards to the
behavior in the event of an empty IFS: we eschew IFS where possible, yet
here we adopt non-standard behavior splitting on every (unicode)
character instead of not splitting at all with IFS empty. We still do
that, but now the unset variables are treated as they normally would be,
i.e. cleared and not set to an empty string (which is what an empty
value between two IFS separators would contain).
This removes the caret as a shorthand for redirecting stderr.
Note that stderr may be redirected to a file via 2>/some/path...
and may be redirected with a pipe via 2>|.
Fixes #4394
Variables set in if and while conditions are in the enclosing block, not
the if/while statement block. For example:
if set -l var (somecommand) ; end
echo $var
will now work as expected.
Fixes #4820. Fixes #1212.
This promotes "and" and "or" from a type of statement to "job
decorators," as a possible prefix on a job. The point is to rationalize
how they interact with && and ||.
In the new world 'and' and 'or' apply to an entire job conjunction, i.e.
they have "lower precedence." Example:
if [ $age -ge 0 ] && [ $age -le 18 ]
    or [ $age -ge 75 ] && [ $age -le 100 ]
    echo "Child or senior"
end
This now reports "TOO_MANY_ARGS" instead of no error (and triggering
an assertion).
We might want to add a new error type or report the missing operator
before, but this is okay for now.
This enables some limited use of arguments for wrapping completions. The
simplest example is that complete gco -w 'git checkout' now works like
you would want: `gco <tab>` now invokes git's completions with the
`checkout` argument prepended.
Fixes #1976
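A minimal sketch using the long option spellings:
```fish
complete --command gco --wraps 'git checkout'
# 'gco <TAB>' now offers git checkout's completions
```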
`argparse`, `read`, `set`, `status`, `test` and `[` now can't be used
as function names anymore.
This is because (except for `test` and `[`) there is no way to wrap these properly, so any
function by that name will be broken anyway.
For `test` (and `[`), there is nothing that can be added and there
have been confused users who created a function that then broke
everything.
Fixes #3000.
This switches function execution from the function's source code to
its stored node and pstree. This means we no longer have to re-parse
the function every time we execute it.
Some of these were failing on Travis quite often, and this is probably
the result of too tight a window.
E.g. one emacs test (transpose words, default timeout, short delay)
waited 250ms to enter something else, with a timeout of 300ms. That
meant a window of 50ms.