This means cleaning out old universal variables is now just:
```fish
abbr --erase (abbr --list)
```
which makes upgrading much easier.
Note that this erases the currently defined variable and/or any
universal one. It doesn't stop at the former because that makes it *easy*
to remove the universals (no need to run `abbr --erase` twice), and it
doesn't care about globals because, well, they would be gone on
restart anyway.
Fixes #9468.
This now means `abbr --add` has two modes:
```fish
abbr --add name --function foo --regex regex
```
```fish
abbr --add name --regex regex replacement
```
This is because `--function` was seen to be confusing as a boolean flag.
Unfortunately print_hints was true *by default*, so for all builtins
that didn't pass it explicitly it would now be false instead.
This resulted in the trailer missing, which includes the line number
and context. So if you ran a script that includes `bind -M` the error
message would now just be "bind: -M: option requires an argument",
with no indication as to where.
This reverts commit 8a50d47a46.
This would print
```
abbr -a -- dotdot --regex ^\\.\\.+\$ --function multicd
```
which expands "dotdot" to "--regex ^\\.\\.+\$...".
Instead, we move the name to right before the replacement, and move
the `--` before that:
```
abbr -a --regex ^\\.\\.+\$ --function -- dotdot multicd
```
It might be possible to improve that, but this at least round-trips.
This renames abbreviation triggers from `--trigger-on entry` and
`--trigger-on exec` to `--on-space` and `--on-enter`. These names are less
precise, as abbreviations trigger on any character that terminates a word
or any key binding that triggers exec, but they're also more human friendly
and that's a better tradeoff.
This adds support for the `--function` option of abbreviations, so that the
expansion of an abbreviation may be generated dynamically via a fish
function.
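A minimal sketch of a function-backed abbreviation; the abbreviation name and function are illustrative:
```fish
# Expand "!!" to the most recent history entry via a function.
function last_history_item
    echo $history[1]
end
abbr --add !! --function last_history_item
```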
Prior to this change, abbreviations were stored as fish variables, often
universal. However we intend to add additional features to abbreviations
which would be very awkward to shoe-horn into variables.
Re-implement abbreviations using a builtin, managing them internally.
Existing abbreviations stored in universal variables are still imported,
for compatibility. However new abbreviations will need to be added to a
function. A follow-up commit will add it.
Now that abbr is a built-in, remove the abbr function; but leave the
abbr.fish file so that stale files from past installs do not override
the abbr builtin.
We have had multiple crashes for relative CDPATH entries. Commit 5e274066e
(Always return absolute path in path_get_cdpath, 2019-10-17) tried to fix
all of them but it failed to do justice to its title. Let's fix this to
actually return absolute paths, always. Take care to normalize the path
because it is used for autosuggestions. The normalization is mostly relevant
for CDPATH=. (the default) but it doesn't hurt others.
Closes #9407
Inside a comment we offer plain file completions (or command completions if
the comment is in command position). However these completions are broken
because they don't consider any of the surrounding characters. For example
with a command line
```
echo # comment
          ^ cursor
```
we suggest file completions and insert them as
```
echo # comsomefile ment
```
Providing completions inside comments does not seem useful and it can be
misleading. Let's remove the completions; this should communicate better that
we are in a free-form comment that's not subject to fish syntax.
Closes #9320
This fixes #9321
IEEE Std 1003.1-2017 Issue 6 added the optional error condition
[EINVAL] for the case where no conversion could be performed.
Switch back to wcstoimax/wcstoumax: do not work around the old FreeBSD
8 issue.
Add a test for printf '%d %d' 1 2 3
Like the pexpect-based pager completions test `complete-group-order.py`, but for
the `complete` builtin. Verifies the same sort/dedup rules that apply to the
pager are also applied to the output of `complete` and asserts the sort behavior
for multiple `complete -k` calls for the same command and with the same (or with
both passing) preconditions.
This addresses a long-standing TODO where `complete -C` output isn't
deduplicated.
With this patch, the same deduplication and sort procedure that is run on actual
pager completions is also executed for `complete -C` completions (with a `-C`
payload specified).
This makes it possible to use `complete -C` to test what completions will
actually be generated by the completions pager instead of it displaying
something completely divorced from reality, improving the productivity of fish
completions developers.
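For example, something like this can now be used to preview what the pager would offer (the partial command line is just an illustration):
```fish
# Lists the candidate completions for the given partial command line,
# now deduplicated and sorted the same way the pager presents them.
complete -C'git sta'
```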
Note that completions that wouldn't be shown in the pager are also omitted from
the results, e.g. `test/buildroot/` and `test/fish_expand_test/` are omitted
from the check matches in `checks/complete_directories.fish` because even if
they were generated, the pager wouldn't have shown them. This again makes
reasoning about and debugging completions much easier and more sane.
Fixes omitted newline char shown after complete -n'(foo)'
Also axes the 'contains syntax errors' line before the error.
Update tests
before
> complete -n'(foo)'
complete: Condition '(foo)' contained a syntax error
complete: Command substitutions not allowed⏎
after
> complete -n'(foo)'
complete: -n '(foo)': command substitutions not allowed here
Up to now, in normal locales \x was essentially the same as \X, except
that it errored if given a value > 0x7f.
That's kind of annoying and useless.
A subtle change is that `\xHH` now represents the character (if any)
encoded by the byte value "HH", so even for values <= 0x7f we would
diverge on an encoding where that byte is not the ASCII character.
I do not believe anyone has ever run fish on a system where that
distinction matters. It isn't a thing for UTF-8, it isn't a thing for
ASCII, it isn't a thing for UTF-16, it isn't a thing for any extended
ASCII scheme - ISO8859-X, it isn't a thing for SHIFT-JIS.
I am reasonably certain we are making that same assumption in other
places.
Fixes #1352
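A hedged illustration, assuming a UTF-8 locale:
```fish
# \xHH now accepts byte values above 0x7f instead of erroring,
# so these both print "ö" (0xc3 0xb6 is the UTF-8 encoding of ö):
echo \xc3\xb6
echo \Xc3\Xb6
```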
We forgot to decode (i.e. turn into nice wchar_t codepoints)
"byte_literal" escape sequences. This meant that e.g.
```fish
string match ö \Xc3\Xb6
math 5 \X2b 5
```
didn't work, but `math 5 \x2b 5` did, and would print the wonderful
error:
```
math: Error: Missing operator
'5 + 5'
^
```
So, instead, we decode eagerly.
Prior to 1811a2d, the $status resulting from negative return codes was undefined, and I'd
witnessed both expected cases like -256 mapping to a $status of 0 and unexpected
cases like a return value of -1 mapping to a $status of 0. As such, this doesn't
test just one fixed return value but the entire range from negative multiples of
256 all the way down (rather, up!) to -1.
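A sketch of the now-defined wrapping, with expected values taken from the description above (not independently verified):
```fish
function ret
    return $argv[1]
end
ret -256; echo $status   # expected 0
ret -1;   echo $status   # expected 255
```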
...for improved cross-platform support.
Following up on the work in c90ac7b. There was one more test that had mktemp in
the littlecheck "shebang" and this also removes a now-unnecessary `env` prefix.
All usages of `mktemp` must go through the (fish-only) `mktemp` test function
that abstracts over the differences across multiple platforms/flavors.
Tests can be easily run individually via `ninja -C build test_xxx` and there
isn't a good reason to randomly manually override $HOME and $XDG_CONFIG_HOME for
a test here and a test there.
If it's absolutely necessary, littlecheck.py should be extended to support a
`%temp` variable initialized to a temporary directory and that can be used
instead of calling out to the platform-provided `mktemp` via a subshell.
For unknown reasons, the i686 launchpad builders fail on this date,
but apparently not the others.
Let's just remove it, we've tested dates older than the epoch, this is
slightly redundant.
As discussed in #9221, a bug in the autocomplete that was fixed in 66391922
caused completions to be incorrectly suppressed. The dropped test/check was
inadvertently relying on the buggy behavior and expected a git invocation to
generate no completions but there are, in fact, completions now that the bug has
been resolved.
cc @faho: I'm not sure if you want to replace this with a different check that
actually doesn't yield any completions or if you're happy with it just being
dropped.
This fails on old Ubuntu with:
> touch: invalid date format ‘190112112040.39’
Because we don't actually need the seconds here, we just use minute
resolution. It's fine.
Also use `path mtime`, because that's a portable way to get the mtime.
I forgot `stat` is non-portable. There's no great way to portably get a
machine-readable representation of stat(2) for a file. I don't want to ship our
own lstat(2) wrapper executable just for this test and don't want to fork out to
python or perl for this either - I just wanted to get the tests to pass under
WSL :'(
Anyway, just give up and make it skip just for WSL. If another OS fails this
test in the future, the comments and existing workaround will make it easy to
figure out what the problem is and what needs to be done. We'll cross that
bridge when we get there.
It turns out that not all systems print an unsigned integer as the output of
`stat -c %Y xxx` and the leading `-` can be misinterpreted as a parameter to
`string match`.
There's no guarantee (nor requirement) that the filesystem support pre-epoch
modification dates. If it doesn't, the `test` tests were failing to get the
expected results.
Skip the test if it seems the fs doesn't support pre-epoch timestamps
(determined by the pre-epoch mtime of `oldest` evaluating to 0 or the Unix epoch).
* Replace ";" with "\n" in alias-generated functions
This can let us add a "#" in our aliases to make
them ignore additional arguments (see the sketch after this list).
* Update changelog about aliases that ignore arguments
* Update test for alias.fish
This is now compliant with the aliases that can
ignore arguments.
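A hedged sketch of the trick this enables; the alias name and command are purely illustrative:
```fish
# The generated function appends $argv after the body; with newline
# separators, a trailing "#" comments $argv out, so extra arguments are ignored.
alias greet "echo hello #"
greet these words are ignored   # prints just "hello"
```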
This errored out *later* because the result was infinite or NaN, but
it didn't actually stop evaluation.
I'm not sure if there is a way to get floating point math to turn an
infinity back into something that doesn't depend on a literal
infinity, but division by zero conceptually isn't a thing we can
support.
There are entire branches of maths dedicated to figuring out what
dividing by "basically zero" means and we don't have to get into it.
This is essentially the inverse of `string pad`.
Where that adds characters to get up to the specified width,
this adds an ellipsis to a string if it goes over a specific maximum width.
The char can be given, but defaults to our ellipsis string.
("…" if the locale can handle it and "..." otherwise)
If the ellipsis string is empty, it just truncates.
For arguments given via argv, it goes line-by-line,
because otherwise length makes no sense.
If "--no-newline" is given, it adds an ellipsis instead and removes all subsequent lines.
Like pad and `length --visible`, it goes by visible width,
skipping recognized escape sequences, as those have no influence on width.
The default target width is the shortest of the given widths that is non-zero.
If the ellipsis is already wider than the target width,
we truncate instead. This is safer overall, so we don't e.g. move into a new line.
This is especially important given our default ellipsis might be width 3.
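A hedged sketch, assuming this describes the `string shorten` subcommand and its `--max` flag:
```fish
# Truncate to a visible width of 10, appending the default ellipsis.
string shorten --max 10 "this is a long line"
# this is a…
```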
When selecting items in the pager, only the latest of those items is kept
in the edit history, as so-called transient edit. Each new transient edit
evicts any old transient edit (via undo).
If the pager is closed by a command that performs another transient edit
(like history-token-search-backward) we thus inadvertently undo (= remove)
the token inserted by the pager. Fix this by closing a transient edit
session when closing the pager. Token search will start its own session.
Fixes #9160
This checked specifically for "| and" and "a=b" and then just gave the
error without a caret at all.
E.g. for a /tmp/broken.fish that contains
```fish
echo foo
echo foo | and cat
```
This would print:
```
/tmp/broken.fish (line 3): The 'and' command can not be used in a pipeline
warning: Error while reading file /tmp/broken.fish
```
without any indication other than the line number as to the location
of the error.
Now we do
```
/tmp/broken.fish (line 3): The 'and' command can not be used in a pipeline
echo foo | and cat
^~^
warning: Error while reading file /tmp/broken.fish
```
Another nice one:
```
fish --no-config -c 'echo notprinted; echo foo; a=b'
```
failed to give the error message!
(Note: Is it really a "warning" if we failed to read the one file we
were told to?)
We should check if we should either centralize these error messages
completely, or always pass them and remove this "code" system, because
it's only used in some cases.
This skipped printing a "^" line if the start or length of the error
was longer than the source.
That seems like the correct thing at first glance, however it means
that the caret line isn't skipped *if the file goes on*.
So, for example
```fish
echo "$abc["
```
by itself, in a file or via `fish -c`, would not print an error, but
```fish
echo "$abc["
true
```
would. That's not a great way to print errors.
So instead we just... imagine the start was at most at the end.
The underlying issue why `echo "$abc["` causes this is that `wcstol`
didn't move the end pointer for the index value (because there is no
number there). I'd fix this, but apparently some of
our recursive variable calls absolutely rely on this position value.
This stops us from loading the completions for e.g. `./foo` if there
is no `foo` in path.
This is because the completion scripts will call an unqualified `foo`,
and then error out.
This of course means if the script would work because it never calls
the command, we still don't load it.
Pathed completions via `complete --path` should be unaffected because
they aren't autoloaded anyway.
Workaround for #3117
Fixes #9133
This was misguidedly "fixed" in
9e08609f85, which made printf error out
with any "-"-prefixed words as the first argument.
Note: This means currently `printf --help` doesn't print the help.
This also matches `echo`, and we currently don't have anything to make
a literal `--help` execute a builtin help except for keywords. Oh well.
Fixes #9132
This used to be kept, so e.g. testing it with
fish_read_limit=5 echo (string repeat -n 10 a)
would cause the prompt and such to error as well.
Also there was no good way to get back to the default value
afterwards.
* string repeat: Don't allocate repeated string all at once
This used to allocate one string and fill it with the necessary
repetitions, which could be a very very large string.
Now, it instead uses one buffer and fills it to a chunk size,
and then writes that.
This fixes:
1. We no longer crash with too large max/count values. Before they
caused a bad_alloc because we tried to fill all RAM.
2. We no longer fill all RAM if given a big-but-not-too-big value. You
could've caused fish to eat *most* of your RAM here.
3. It can start writing almost immediately, instead of waiting
potentially minutes to start.
Performance is about the same to slightly faster overall.
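An illustrative consequence of the chunked writes:
```fish
# A huge repeat no longer allocates the full result up front, so the first
# bytes arrive immediately and the pipe can be closed early.
string repeat -n 100000000 abc | head -c 9
# abcabcabc
```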
The previous fix was reverted because it broke another scenario. Add tests
for both scenarios.
The first test exposes another problem: autosuggestions are sometimes not
recomputed after selecting the first completion with Tab Tab. Fix that too.
Since the fix for #3892, this escaping style escapes
\n to \\n
as well as
\\ to \\\\
\' to \\'
I believe these two are the only printable characters that are escaped with
ESCAPE_NO_PRINTABLES.
The rationale is probably to keep the encoding unambiguous and reversible.
However that doesn't justify escaping the single quote. Probably this was
an accident, so let's revert that part.
This has the nice effect that single quotes will no longer be escaped
when rendered in the completion pager (which is consistent with other
special characters). Try it:
complete : -a "aaa\'\; aaaa\'\;" -f
Also this makes the error output of builtin bind consistent:
$ bind -e --preset \;
$ bind -e --preset \'
$ bind \;
bind: No binding found for sequence “;”
$ bind \'
bind: No binding found for sequence “'”
the last line is clearly better than the old version:
bind: No binding found for sequence “\'”
In general, the fact that ESCAPE_NO_PRINTABLES escapes the (printable)
backslash is weird but I guess it's fine because it looks more consistent to
users, even though the result is an undocumented subset of the fish language.
command_line_has_transient_edit tracks the actual command line, not the
pager search field. We accidentally reset it after modifying the search field
which causes unexpected behavior - the commandline added by the completion
pager remains even after I press Escape.
This was an inadvertent change from
cc632d6ae9.
Because we used wgetcwd directly before, we always got the "physical"
resolved $PWD.
There's an argument to be made to use the logical $PWD here as well
but I prefer not to make changes like that in a random commit without
good reason.
This can be used to print the modification time, like `stat` with some
options.
The reason is that `stat` has caused us a number of portability
headaches:
1. It's not available everywhere by default
2. The versions are quite different
For instance, with GNU stat it's `stat -c '%Y'`, with macOS it's `stat
-f %m`.
So now checking a cache file can be done just with builtins.
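A sketch of such a check, assuming the new subcommand is `path mtime` and that it prints seconds since the epoch; $cachefile and $sourcefile are placeholder names:
```fish
# Rebuild the cache when it is missing or older than its source, using only builtins.
if not test -e $cachefile; or test (path mtime $cachefile) -lt (path mtime $sourcefile)
    # ... regenerate $cachefile here
end
```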
This adds a line to `set --show`'s output like
```
$PATH: originally inherited as |/home/alfa/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/flatpak/exports/bin|
```
to help with debugging.
Note that this means keeping an additional copy of the original
environment around. At most this would be one ARG_MAX's worth, which
is about 2M.
This is sort of slow because it's called hundreds of times.
We used to have a cache, introduced in ad9b4290e, but it was removed
in fee5a9125a because it had
false-positives.
So, because the issue is that this is called hundreds of times
per commandline, we cache it keyed on the commandline.
This speeds up `complete -C'git sta'` by a factor of 2.3x.
It's still useful without, for instance to implement a command that
takes no options, or to check min-args or max-args.
(technically no optspecs, no min/max args and --ignore-unknown does
nothing, but that's a very specific error that we don't need to forbid)
Fixes #9006
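A minimal sketch of that use (the function name is illustrative):
```fish
# argparse with no option specs: just validate the argument count.
function greet
    argparse --min-args 1 --max-args 2 -- $argv; or return
    echo "hello $argv"
end
```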
Commit ad9b4290e optimized git completions by adding a completion that would
run on every completion request, which allows us to precompute data used by
other completion entries. Unfortunately, the completion entry is not run
when the commandline contains a flag like `git -C`. If we didn't
already load git.fish, we'd error. Additionally, we got false positive
completions for `git diff -c`.
So this hack was a very bad idea. We should optimize in another way.
This is simply an error in test setup. There's a limit to how far we
can isolate them from the system.
(it's possible new cmake versions close fds automatically since I
can't reproduce the original issue via `ninja test` or `make test`)
Fixes #9017
This lacks the tmux-256color terminfo entry, leading to spurious
warnings like
warning: Could not set up terminal. <= no check matches
warning: TERM environment variable set to 'tmux-256color'. <= no check matches
warning: Check that this terminal type is supported on this system. <= no check matches
warning: Using fallback terminal type 'ansi'. <= no check matches
Git's pathspec system is kind of annoying:
> A pathspec that begins with a colon : has special meaning. In the short form, the leading colon : is followed by zero or more "magic signature" letters (which optionally is terminated by another colon :), and the remainder is the pattern to match against the path. The "magic signature" consists of ASCII symbols that are neither alphanumeric, glob, regex special characters nor colon. The optional colon that terminates the "magic signature" can be omitted if the pattern begins with a character that does not belong to "magic signature" symbol set and is not a colon.
So if we complete `:/foo`, that "works" because "f" is alphanumeric
and so the "/" is the only magic character here.
If, however the filename starts with a magic character, that's used as
a magic signature.
So we do what the docs say and terminate the magic signature after the
"/" (which means "from the repo root").
Fixes #9004
This makes it so
1. The informative status can work without showing untracked
files (previously it was disabled if bash.showUntrackedFiles was
false)
2. If untrackedfiles isn't explicitly enabled, we use -uno, so git
doesn't have to scan all the files.
In a large repository (like the FreeBSD ports repo), this can improve
performance by a factor of 5 or more.
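A hedged example of explicitly opting back in via the config key mentioned above:
```fish
# Enable counting untracked files for this repository
# (accepting the slower full scan in large repos).
git config bash.showUntrackedFiles true
```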
In b0084c3fc4, we refactored how event handlers get removed. But this
also caused us to remove "one-shot" handlers even if they have not yet
been fired. Fix this.
When switching this to use `git status`, I neglected to use the
correct definition of what a "dirty" and a "staged" change is.
So this now showed already staged files still as "dirty".
Fixes #8986
This makes it so `complete -c foo -n test1 -n test2` registers *both*
conditions, and when it comes time to check the candidate, tries both,
in that order. If any fails it stops, if all succeed the completion is offered.
The reason for this is that it helps with caching - we have a
condition cache, but conditions like
```fish
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] sub
```
defeats it pretty easily, because the cache only looks at the entire
script as a string - it can't tell that the first `test` is the same
in both.
So this means we separate it into
```fish
# before
complete -f -c string -n "test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
# after
complete -f -c string -n "test (count (commandline -opc)) -ge 2" -n "contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
```
which allows the `test` to be cached.
In tests, this improves performance for the string completions by 30%
by reducing all the redundant `test` calls.
The `git` completions can also greatly benefit from this.
This adds a path builtin to deal with paths.
It offers the following subcommands:
- filter to go through a list of paths and only print the ones that pass some filter - exist, are a directory, have read permission, ...
- is as a shortcut for filter -q to only return true if one of the paths passed the filter
- basename, dirname and extension to print certain parts of the path
- change-extension to change the extension to a different one (as a string operation)
- normalize and resolve to canonicalize the paths in various flavors
- sort to sort paths, also only using the basename or dirname as a key
The definition of "extension" here was carefully considered and should line up with how extensions are actually used - ~/.bashrc doesn't have an extension, but ~/.conf.d does (".d").
These subcommands all compose well - they can read from arguments or stdin (like string), they can use null-delimited input or output (input is autodetected - if a NULL happens in the first PATH_MAX bytes it switches automatically).
It is both a failglob exception (so, like `set`, if a glob passed to it fails it just doesn't get any arguments for it instead of triggering an error), and passes output to command substitution buffers explicitly split (like `string split0`) so newlines are easy to handle.
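An illustrative composition, with flags taken from the descriptions above:
```fish
# Keep only the arguments that are readable directories, sorted by basename.
path filter --type dir --perm read /usr/share /no/such/dir ~/.config | path sort --key basename
```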
This would still remove non-existent paths, which isn't a strict
inversion and contradicts the docs.
Currently, to only allow paths that exist but don't pass a type check,
you'd have to filter twice:
path filter -Z foo bar | path filter -vfz
If a shortcut for this becomes necessary we can add it later.
This is now added to the two commands that definitely deal with
relative paths.
It doesn't work for e.g. `path basename`, because after removing the
dirname prepending a "./" doesn't refer to the same file, and the
basename is also expected to not contain any slashes.
Because we now count the extension including the ".", we print an
empty entry.
This makes e.g.
```fish
set -l base (path change-extension '' $somefile)
set -l ext (path extension $somefile)
echo $base$ext
```
reconstruct the filename, and makes it easier to deal with files with
no extension.
This means "../" components are cancelled out even after non-existent
paths or files.
(the alternative is to error out, but being able to say `path resolve
/path/to/file/../../` over `path resolve (path dirname
/path/to/file)/../../` seems worth it?)
Yeah, the macOS tests fail because it's started in /private/var... with a
$PWD of /var.... So resolve canonicalizes the path, which makes it no
longer match $PWD.
Simply use pwd -P
This just goes back until it finds an existent path, resolves that,
and adds the normalized rest on top.
So if you try
/bin/foo/bar////../baz
and /bin exists as a symlink to /usr/bin, it would resolve that, and
normalize the rest, giving
/usr/bin/foo/baz
(note: We might want to add this to realpath as well?)
This includes the "." in what `path extension` prints.
This allows distinguishing between an empty extension (just `.`) and a
non-existent extension (no `.` at all).
This adds a "path" builtin that can handle paths.
Implemented so far:
- "path filter PATHS", filters paths according to existence and optionally type and permissions
- "path base" and "path dir", run basename and dirname, respectively
- "path extension PATHS", prints the extension, if any
- "path strip-extension", prints the path without the extension
- "path normalize PATHS", normalizes paths - removing "/./" components
- and such.
- "path real", does realpath - i.e. normalizing *and* link resolution.
Some of these - base, dir, {strip-,}extension and normalize - operate on the paths only as strings, so they handle nonexistent paths. filter and real ignore any nonexistent paths.
All output is split explicitly, so paths with newlines in them are
handled correctly. Alternatively, all subcommands have a "--null-input"/"-z" and "--null-output"/"-Z" option to handle null-terminated input and create null-terminated output. So
find . -print0 | path base -z
prints the basename of all files in the current directory,
recursively.
With "-Z" it also prints it null-separated.
(if stdout is going to a command substitution, we probably want to
skip this)
All subcommands also have a "-q"/"--quiet" flag that tells them to skip output. They return true "when something happened". For match/filter that's when a file passed, for "base"/"dir"/"extension"/"strip-extension" that's when something about the path *changed*.
Filtering
---------
`filter` supports all the file *types* `test` has - "dir", "file", "link", "block"..., as well as the permissions - "read", "write", "exec" and things like "suid".
It is missing the tty check and the check for the file being non-empty. The former is best done via `isatty`, the latter I don't think I've ever seen used.
There currently is no way to only get "real" files, i.e. ignore links pointing to files.
Examples
--------
> path real /bin///sh
/usr/bin/bash
> path extension foo.mp4
mp4
> path extension ~/.config
(nothing, because ".config" isn't an extension.)
This teaches `--on-signal SIGINT` (and by extension `trap cmd SIGINT`)
to work properly in scripts, not just interactively. Note any such
function will suppress the default behavior of exiting. Do this for
SIGTERM as well.
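A minimal sketch; both forms now also fire in non-interactive scripts:
```fish
# Handle SIGINT in a script; defining a handler suppresses the default exit.
function handle_sigint --on-signal SIGINT
    echo "interrupted; cleaning up"
end
# Equivalent using trap:
trap 'echo "interrupted; cleaning up"' SIGINT
```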
Like `set` and `read` before it, `eval` can be used to set variables,
and so it can't be shadowed by a function without loss of
functionality.
So this forbids it.
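A hedged sketch of what is now rejected:
```fish
# Like `set` and `read`, `eval` can no longer be shadowed; this now errors.
function eval
    echo "shadowed"
end
```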
Incidentally, this means we will no longer try to autoload an
`eval.fish` file that's left over from an old version, which would
have helped with #8963.
Previously, running `fish_add_path /foo /foo` would result in /foo
being added to $PATH twice.
Now we check that it hasn't already been given, so we skip the
second (and any further) occurrence.
This *might* be a bit faster running under TSAN, otherwise it takes >
400 seconds on Github Actions.
If this doesn't work we need to disable it for TSAN.
To recap, this means `&` in the middle of a word no longer
backgrounds.
So:
```fish
echo foo&bar # prints foo&bar
echo foo& bar # backgrounds an echo that prints "foo" and runs "bar"
```
This can no longer be changed. If "no-stderr-nocaret" is in
$fish_features it will simply be ignored.
The "^" redirection that was deprecated in fish 3.0 is now gone for good.
Note: For testing reasons, it can still be set _internally_ by running
"feature_flags_t::set". We simply shouldn't do that.
When you do
```fish
set foo-bar baz
```
"foo-baz" isn't usable as a variable *name*. When you just say the
"variable" is invalid that could also be interpreted to be a special
type of variable or something.
String tokens are subdivided by command substitutions. Some syntax errors
can occur in the gap between two command substitutions. Make the caret point
to the start of that gap, instead of the token start.
When expanding command substitutions, we use a naïve way of detecting whether
the cmdsub has the optional leading dollar. We check if the last character was
a dollar, which breaks if it's an escaped dollar. We wrongly expand
\$(echo "") to the empty string. Fix this by checking if the dollar was escaped.
The parse_util_* functions have a bunch of output parameters. We should
return a parameter bag instead (I think I tried once and failed).
Given
set var a
echo "$var$(echo b)"
the double-quoted string is expanded right-to-left, so we construct an
intermediate "$varb". Since the variable "varb" is undefined, this wrongly
expands to the empty string (should be "ab"). Fix this by isolating the
expanded command substitution internally. We do the same when handling
unquoted command substitutions.
Fixes #8849
The read test is now failing on GitHub actions even though it passes on
my Mac. It may be due to differences in dd between these two
environments. Stop using dd and just use head.
The read.fish check has a test where it limits the amount of data passed to
`read` to 8192 bytes, and verifies that fish reads exactly that amount.
This check occasionally fails on the OBS builds; it's very hard to repro a
failure locally, but I finally did it.
The amount of data written is limited via `yes` and `dd`:
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024")
The bug is that `dd` outputs a fixed number of "blocks" where a block
corresponds to a single read. As `yes` and `dd` are running concurrently,
it may happen that `dd` performs a short read; this then counts as a single
block. So `dd` may output less than the desired amount of data.
This can be verified by removing the 2>/dev/null redirection; on a
successful run dd reports `8+0 records out`, on a failed run it reports
`7+1 records out` because one of the records was short.
Fix this by using `fullblock` so that dd will no longer count a short read
as a single block. `head` would probably be a simpler tool to use but we'll
do this for now.
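The fixed invocation might look like this (a sketch; `iflag=fullblock` is the GNU dd spelling, reusing $line and $fish_read_limit from the check):
```fish
# fullblock makes dd keep reading until each 1024-byte block is complete,
# so short reads from the pipe no longer shrink the total output.
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024") iflag=fullblock 2>/dev/null
```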
Happily it's not a fish bug. No need to relnote it.
This was already apparently supposed to work, but didn't because we
just overrode errno again.
This now means that, if a correctly named candidate exists, we don't
start the command-not-found handler.
See #8804
This used to call exec_subshell, which has two issues:
1. It creates a command substitution block which shows up in a stack
trace
2. It does much more work than necessary
This removes a useless "in command substitution" from an error message
in an autoloaded file, and it speeds up autoloading a bit (not
measurable in actual benchmarks, but microbenchmarks are 2x).