When setting a variable without a specified scope, we should give priority
to an existing local or global variable over an existing universal variable
with the same name.
In 16fd780484 there was a regression that
made universal variables have priority.
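A rough sketch of the intended behavior after this fix (variable names are illustrative):
```fish
set -U myvar universal
set -g myvar global
set myvar changed   # no scope given: should update the existing global,
set -S myvar        # so `set --show` should report the global as "changed"
                    # while the universal variable keeps its old value
```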
Fixes #5883
Brace expansion with a single word in it is quite useless - `HEAD@{0}`
expanding to `HEAD@0` breaks git.
So we complicate the rule slightly - if there is no variable expansion
or "," inside of braces, they are just treated as literal braces.
Note that this is technically backwards-incompatible, because
`echo foo{0}`
will now print `foo{0}` instead of `foo0`. However, that's a
technicality, because the braces were literally useless in that case.
Our tests needed to be adjusted, but that's because they are meant to
exercise this in weird ways.
I don't believe this will break any code in practice.
Fixes #5869.
This read something like `o=!_validate_int`, and the flag modifier
reading kept the pointer after the `!`, so it created a long flag
called `_validate_int`, which meant it would not only error out for
```fish
argparse 'i=!_validate_int' 'o=!_validate_int' -- $argv
```
with "Long flag '_validate_int' already defined", but also set
$_flag_validate_int.
Fixes #5864.
As mentioned in #2900, something like
```fish
test -n "$var"; and set -l foo $var
```
is sufficiently idiomatic that it should be allowable.
Also fixes some additional weirdness with semicolons.
This removes semicolons at the end of the line and collapses
consecutive ones, while replacing meaningful semicolons with newlines.
I.e.
```fish
echo;
```
becomes
```fish
echo
```
but
```fish
echo; echo
```
becomes
```fish
echo
echo
```
Fixes #5859.
This keeps all unknown options in $argv, so
```fish
argparse -i a/alpha -- -a banana -o val -w
```
results in $_flag_a set to banana, and $argv set to `-o val -w`.
This allows users to use multiple argparse passes, or to simply avoid
specifying all options e.g. in completions - `systemctl` has 46 of
them, most not having any effect on the completions.
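A rough sketch of the multi-pass idea this enables (function, option, and command names are made up):
```fish
function mywrapper
    # First pass: only handle the options this wrapper cares about and
    # leave every unrecognized option in $argv for the wrapped command.
    argparse -i v/verbose -- $argv
    or return

    if set -q _flag_verbose
        echo "running verbosely" >&2
    end
    command some-tool $argv
end
```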
Fixes #5367.
This is a long-standing issue with how `complete --do-complete` does
its argument parsing: It takes an optional argument, so it has to be
attached to the token like `complete --do-complete=foo` or (worse)
`complete -Cfoo`.
But since `complete` doesn't take any bare arguments otherwise (it
would error with "too many arguments" if you did `complete -C foo`) we
can just take one free argument as the argument to `--do-complete`.
It's more of a command than an option anyway, since it entirely
changes what the `complete` call _does_.
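For illustration (the string being completed is just an example), both of these should now behave the same:
```fish
complete --do-complete='git chec'   # previously required: argument attached to the option
complete -C 'git chec'              # now also works: the free argument is the string to complete
```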
Prior to this fix, a job would only inherit a pgrp from its parent if the
first command was external. There seems to be no reason for this
restriction, and it causes tcsetpgrp() churn, potentially causing SIGTTIN.
Switch to unconditionally inheriting a pgrp from parents.
This should fix most of #5765, the only remaining question is
tcsetpgrp from builtins.
Pursuant to 0be7903859, there still
remained one issue with the test when run from within a symlinked
directory after fish gained support for cding into symlinks.
This change should make the test function OK both when the tests are run
out of a PWD containing a symlink in its hierarchy and when run
otherwise.
The final test in `realpath.in` was based on the no-longer-valid
assumption that $PWD cannot be a symlink. Since the recent changes in
fish 3.0 to allow `cd`ing into "virtual" directories preserving symlinks
as-is, when `make test` was run from a path that contained a symlink
component, this test would fail the `pwd-resolved-to-itself` check.
The test is not designed to initialize and then cd into an absolute path
guaranteed not to be symbolic, so this final check is just wrong.
This has been driving me nuts for years. The output of the diff emitted
when a test fails was always reversed, because the diff tool is called
with `${difftool} ${new} ${old}` so all the `-` and `+` contexts are
reversed, and the highlights are all screwed up.
The output of a `make test` run should show what has changed from the
baseline/expected, not how the expected differs from the actual. When
considered from both the perspective of intentional changes to the test
outputs and failed test outputs, it is desirable to see how the test
output has changed from the previously expected, and not the other way
around.
(If you were used to the previous behavior, I apologize. But it was
wrong.)
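A quick illustration of why the argument order matters (file names are just for the example):
```fish
echo expected > old.txt
echo actual > new.txt
# `diff old new` marks content that only exists in `new` with `+`,
# which is the direction a failing test should be reported in.
diff -u old.txt new.txt
```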
This was printed basically everywhere.
The user knows what they executed on standard input.
A good example:
```fish
set c (subme 513)
```
used to print
```
fish: Too much data emitted by command substitution so it was discarded
set -l x (string repeat -n $argv x)
^
in function 'subme'
called on standard input
with parameter list '513'
in command substitution
called on standard input
```
and now it is
```
fish: Too much data emitted by command substitution so it was discarded
set -l x (string repeat -n $argv x)
^
in function 'subme' with arguments '513'
in command substitution
```
See #5434.
This printed things like
```
in function 'f'
called on standard input

in function 'd'
called on standard input

in function 'b'
called on standard input

in function 'a'
called on standard input
```
As a first step, it removes the empty lines so it's now
```
in function 'f'
called on standard input
in function 'd'
called on standard input
in function 'b'
called on standard input
in function 'a'
called on standard input
```
See #5434.
If a function process is deferred, allow it to be unbuffered.
This permits certain simple cases where functions are piped to external
commands to execute without buffering.
This is a somewhat-hacky stopgap measure that can't really be extended
to more general concurrent processes. However it is overall an improvement
in user experience that might help flush out some bugs too.
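An illustrative case this is meant to help with (assumed behavior; the function is made up):
```fish
function slow_lines
    for i in (seq 3)
        echo line$i
        sleep 1
    end
end

# With the deferred function process left unbuffered, each line should reach
# `cat` as it is printed instead of arriving all at once at the end.
slow_lines | cat
```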
POSIX dictates here that incomplete conversions, like in
```fish
printf %d\n 15.1
```
or
```fish
printf %d 14,2
```
are still printed along with any error.
This seems alright, as it allows users to silence stderr to accept incomplete conversions.
This commit implements it, but what's a bit weird is the ordering between stdout and stderr,
causing the error to be printed _after_, like
```
15
14
15.1: value not completely converted
14,2: value not completely converted
```
but that seems like a general issue with how we buffer the streams.
(I know that nonfatal_error is a copy of most of fatal_error - I tried
doing it differently, but va_* is weird)
Fixes #5532.
Before this change, `-` was sorted with other punctuation, before
A-Z. Now, it sorts after the other characters.
This has a practical effect on completions: when there are
both -s and --long with the same description, the short option
now comes before the long option in the pager, and it is what gets
selected when navigating `foo -<TAB>`. The long options can still be
picked out with `foo --<TAB>`. Before, short options which
duplicated a long option literally could not be selected by
any means from the pager.
Fixes #5634
This tweaks wcsfilecmp such that certain punctuation characters will
come after A-Z.
A big win with `set <TAB>` - the __prefixed fish junk now comes
after the stuff users should care about.
This disables an extra round of escaping in the `string replace -r`
replacement string.
Currently, to add a backslash to an `a` or `b` (to "escape" it):
```fish
string replace -ra '([ab])' '\\\\\\\$1' a
```
7 backslashes!
This removes one of the layers, so now 3 or 4 works (each one escaped
for the single-quotes, so pcre receives two, which it reads as one literal):
```fish
string replace -ra '([ab])' '\\\\$1' a
```
This is backwards-incompatible as replacement strings will change
meaning, so we put it behind a feature flag.
The name is kinda crappy, though.
Fixes #5474.
As a simple replacement for `wc -l`.
This counts both lines on stdin _and_ arguments.
So if "file" has three lines, then `count a b c < file` will print 6.
And since it counts newlines, like wc, `echo -n foo | count` prints 0.
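Put together, the behavior described above looks roughly like this (the file name is illustrative):
```fish
printf '%s\n' one two three > file
count a b c < file    # prints 6: three arguments plus three lines on stdin
echo -n foo | count   # prints 0: no arguments, and no newline means no complete line
```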
It turns out that `string split0` didn't actually ever do any
splitting. The arg_iterator_t already split stdin on NUL, and split0 just
performed an additional search that could never succeed (since
arguments from argv already can't contain NUL).
Let the arg_iterator_t not perform any splitting if asked, and then
let split0 split on NUL itself.
One slight wart is that split0 ignores a trailing NUL, which normal
split doesn't.
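With that fixed, the usual NUL-delimited idiom should work as expected (illustrative):
```fish
# Split find's NUL-separated output into one list element per file name,
# which also handles file names that contain newlines.
find . -maxdepth 1 -print0 | string split0
```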
Fixes #5701.