...for improved cross-platform support.
Following up on the work in c90ac7b. There was one more test that had mktemp in
the littlecheck "shebang" and this also removes a now-unnecessary `env` prefix.
All usages of `mktemp` must go through the (fish-only) `mktemp` test function
that abstracts over the differences across multiple platforms/flavors.
Tests can be easily run individually via `ninja -C build test_xxx` and there
isn't a good reason to randomly manually override $HOME and $XDG_CONFIG_HOME for
a test here and a test there.
If it's absolutely necessary, littlecheck.py should be extended to support a
`%temp` variable initialized to a temporary directory and that can be used
instead of calling out to the platform-provided `mktemp` via a subshell.
For unknown reasons, the i686 launchpad builders fail on this date,
but apparently not the others.
Let's just remove it, we've tested dates older than the epoch, this is
slightly redundant.
As discussed in #9221, a bug in the autocomplete that was fixed in 66391922
caused completions to be incorrectly suppressed. The dropped test/check was
inadvertently relying on the buggy behavior and expected a git invocation to
generate no completions but there are, in fact, completions now that the bug has
been resolved.
cc @faho: I'm not sure if you want to replace this with a different check that
actually doesn't yield any completions or if you're happy with it just being
dropped.
This fails on old Ubuntu with:
> touch: invalid date format ‘190112112040.39’
Because we don't actually need the seconds here, we just use minute
resolution. It's fine.
Also use `path mtime`, because that's a portable way to get the mtime.
I forgot `stat` is non-portable. There's no great way to portably get a
machine-readable representation of stat(2) for a file. I don't want to ship our
own lstat(2) wrapper executable just for this test and don't want to fork out to
python or perl for this either - I just wanted to get the tests to pass under
WSL :'(
Anyway, just give up and make it skip just for WSL. If another OS fails this
test in the future, the comments and existing workaround will make it easy to
figure out what the problem is and what needs to be done. We'll cross that
bridge when we get there.
It turns out that not all systems print an unsigned integer as the output of
`stat -c %Y xxx` and the leading `-` can be misinterpreted as a parameter to
`string match`.
There's no guarantee (nor requirement) that the filesystem support pre-epoch
modification dates. If it doesn't, the `test` tests were failing to get the
expected results.
Skip the test if it seems the fs doesn't support pre-epoch timestamps
(determined by the pre-epoch mtime of `oldest` evaluating to 0 or the unix epoch).
* Replace ";" with "\n" in alias-generated functions
This can let us add a "#" in our aliases to make
them ignore additional arguments.
* Update changelog about aliases that ignore arguments
* Update test for alias.fish
This is now compliant with the aliases that can
ignore arguments.
This errored out *later* because the result was infinite or NaN, but
it didn't actually stop evaluation.
I'm not sure if there is a way to get floating point math to turn an
infinity back into something that doesn't depend on a literal
infinity, but division by zero conceptually isn't a thing we can
support.
There's entire branches of maths dedicated to figuring out what
dividing by "basically zero" means and we don't have to get into it.
This is essentially the inverse of `string pad`.
Where that adds characters to get up to the specified width,
this adds an ellipsis to a string if it goes over a specific maximum width.
The char can be given, but defaults to our ellipsis string.
("…" if the locale can handle it and "..." otherwise)
If the ellipsis string is empty, it just truncates.
For arguments given via argv, it goes line-by-line,
because otherwise length makes no sense.
If "--no-newline" is given, it adds an ellipsis instead and removes all subsequent lines.
Like pad and `length --visible`, it goes by visible width,
skipping recognized escape sequences, as those have no influence on width.
The default target width is the shortest of the given widths that is non-zero.
If the ellipsis is already wider than the target width,
we truncate instead. This is safer overall, so we don't e.g. move into a new line.
This is especially important given our default ellipsis might be width 3.
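For reference, a hedged usage sketch (assuming this landed as the `string shorten` subcommand with `--max` and `--char` options):
```fish
# shorten to at most 10 cells, ending in the ellipsis ("…", or "..." in a non-UTF-8 locale)
string shorten --max 10 "the quick brown fox"
# an empty ellipsis string means plain truncation
string shorten --char "" --max 10 "the quick brown fox"
```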
When selecting items in the pager, only the latest of those items is kept
in the edit history, as a so-called transient edit. Each new transient edit
evicts any old transient edit (via undo).
If the pager is closed by a command that performs another transient edit
(like history-token-search-backward) we thus inadvertently undo (= remove)
the token inserted by the pager. Fix this by closing a transient edit
session when closing the pager. Token search will start its own session.
Fixes #9160
This checked specifically for "| and" and "a=b" and then just gave the
error without a caret at all.
E.g. for a /tmp/broken.fish that contains
```fish
echo foo
echo foo | and cat
```
This would print:
```
/tmp/broken.fish (line 3): The 'and' command can not be used in a pipeline
warning: Error while reading file /tmp/broken.fish
```
without any indication other than the line number as to the location
of the error.
Now we do
```
/tmp/broken.fish (line 3): The 'and' command can not be used in a pipeline
echo foo | and cat
           ^~^
warning: Error while reading file /tmp/broken.fish
```
Another nice one:
```
fish --no-config -c 'echo notprinted; echo foo; a=b'
```
failed to give the error message!
(Note: Is it really a "warning" if we failed to read the one file we
were told to?)
We should check if we should either centralize these error messages
completely, or always pass them and remove this "code" system, because
it's only used in some cases.
This skipped printing a "^" line if the start or length of the error
was longer than the source.
That seems like the correct thing at first glance; however, it means
that the caret line isn't skipped *if the file goes on*.
So, for example
```fish
echo "$abc["
```
by itself, in a file or via `fish -c`, would not print an error, but
```fish
echo "$abc["
true
```
would. That's not a great way to print errors.
So instead we just.. imagine the start was at most at the end.
The underlying issue why `echo "$abc["` causes this is that `wcstol`
didn't move the end pointer for the index value (because there is no
number there). I'd fix this, but apparently some of
our recursive variable calls absolutely rely on this position value.
This stops us from loading the completions for e.g. `./foo` if there
is no `foo` in path.
This is because the completion scripts will call an unqualified `foo`,
and then error out.
This of course means if the script would work because it never calls
the command, we still don't load it.
Pathed completions via `complete --path` should be unaffected because
they aren't autoloaded anyway.
Workaround for #3117
Fixes #9133
This was misguidedly "fixed" in
9e08609f85, which made printf error out
with any "-"-prefixed words as the first argument.
Note: This means currently `printf --help` doesn't print the help.
This also matches `echo`, and we currently don't have anything to make
a literal `--help` execute a builtin help except for keywords. Oh well.
Fixes #9132
This used to be kept, so e.g. testing it with
fish_read_limit=5 echo (string repeat -n 10 a)
would cause the prompt and such to error as well.
Also there was no good way to get back to the default value
afterwards.
* string repeat: Don't allocate repeated string all at once
This used to allocate one string and fill it with the necessary
repetitions, which could be a very very large string.
Now, it instead uses one buffer and fills it to a chunk size,
and then writes that.
This fixes:
1. We no longer crash with too large max/count values. Before they
caused a bad_alloc because we tried to fill all RAM.
2. We no longer fill all RAM if given a big-but-not-too-big value. You
could've caused fish to eat *most* of your RAM here.
3. It can start writing almost immediately, instead of waiting
potentially minutes to start.
Performance is about the same to slightly faster overall.
The previous fix was reverted because it broke another scenario. Add tests
for both scenarios.
The first test exposes another problem: autosuggestions are sometimes not
recomputed after selecting the first completion with Tab Tab. Fix that too.
Since the fix for #3892, this escaping style escapes
\n to \\n
as well as
\\ to \\\\
\' to \\'
I believe these two are the only printable characters that are escaped with
ESCAPE_NO_PRINTABLES.
The rationale is probably to keep the encoding unambiguous and reversible.
However that doesn't justify escaping the single quote. Probably this was
an accident, so let's revert that part.
This has the nice effect that single quotes will no longer be escaped
when rendered in the completion pager (which is consistent with other
special characters). Try it:
complete : -a "aaa\'\; aaaa\'\;" -f
Also this makes the error output of builtin bind consistent:
$ bind -e --preset \;
$ bind -e --preset \'
$ bind \;
bind: No binding found for sequence “;”
$ bind \'
bind: No binding found for sequence “'”
the last line is clearly better than the old version:
bind: No binding found for sequence “\'”
In general, the fact that ESCAPE_NO_PRINTABLES escapes the (printable)
backslash is weird but I guess it's fine because it looks more consistent to
users, even though the result is an undocumented subset of the fish language.
command_line_has_transient_edit tracks the actual command line, not the
pager search field. We accidentally reset it after modifying the search field
which causes unexpected behavior - the commandline added by the completion
pager remains even after I press Escape.
This was an inadvertent change from
cc632d6ae9.
Because we used wgetcwd directly before, we always got the "physical"
resolved $PWD.
There's an argument to be made to use the logical $PWD here as well
but I prefer not to make changes like that in a random commit without
good reason.
This can be used to print the modification time, like `stat` with some
options.
The reason is that `stat` has caused us a number of portability
headaches:
1. It's not available everywhere by default
2. The versions are quite different
For instance, with GNU stat it's `stat -c '%Y'`, with macOS it's `stat
-f %m`.
So now checking a cache file can be done just with builtins.
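For example, a cache-freshness check can now look something like this (a minimal sketch; `$cache_file`, `$source_file` and `rebuild_cache` are placeholder names):
```fish
# rebuild when the source is newer than the cache; path mtime prints seconds since the epoch
if test (path mtime $source_file) -gt (path mtime $cache_file)
    rebuild_cache
end
```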
This adds a line to `set --show`'s output like
```
$PATH: originally inherited as |/home/alfa/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/flatpak/exports/bin|
```
to help with debugging.
Note that this means keeping an additional copy of the original
environment around. At most this would be one ARG_MAX's worth, which
is about 2M.
This is sort of slow because it's called hundreds of times.
We used to have a cache, introduced in ad9b4290e, but it was removed
in fee5a9125a because it had
false-positives.
So, because the issue is that this is called hundreds of times per
commandline, we cache it keyed on the commandline.
This speeds up `complete -C'git sta'` by a factor of 2.3x.
It's still useful without optspecs, for instance to implement a command that
takes no options, or to check min-args or max-args.
(technically no optspecs, no min/max args and --ignore-unknown does
nothing, but that's a very specific error that we don't need to forbid)
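A hedged sketch of what that looks like (option names taken from argparse's existing `--min-args`/`--max-args`):
```fish
function greet
    # no optspecs at all, just argument-count validation
    argparse --min-args 1 --max-args 2 -- $argv
    or return
    echo hello $argv
end
```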
Fixes #9006
Commit ad9b4290e optimized git completions by adding a completion that would
run on every completion request, which allows precomputing data used by
other completion entries. Unfortunately, the completion entry is not run
when the commandline contains a flag like `git -C`. If we didn't
already load git.fish, we'd error. Additionally, we got false positive
completions for `git diff -c`.
So this hack was a very bad idea. We should optimize in another way.
This is simply an error in test setup. There's a limit to how far we
can isolate them from the system.
(it's possible new cmake versions close fds automatically since I
can't reproduce the original issue via `ninja test` or `make test`)
Fixes #9017
This lacks the tmux-256color terminfo entry, leading to spurious
warnings like
warning: Could not set up terminal. <= no check matches
warning: TERM environment variable set to \'tmux-256color\'. <= no check matches
warning: Check that this terminal type is supported on this system. <= no check matches
warning: Using fallback terminal type \'ansi\'. <= no check matches
Git's pathspec system is kind of annoying:
> A pathspec that begins with a colon : has special meaning. In the short form, the leading colon : is followed by zero or more "magic signature" letters (which optionally is terminated by another colon :), and the remainder is the pattern to match against the path. The "magic signature" consists of ASCII symbols that are neither alphanumeric, glob, regex special characters nor colon. The optional colon that terminates the "magic signature" can be omitted if the pattern begins with a character that does not belong to "magic signature" symbol set and is not a colon.
So if we complete `:/foo`, that "works" because "f" is alphanumeric
and so the "/" is the only magic character here.
If, however the filename starts with a magic character, that's used as
a magic signature.
So we do what the docs say and terminate the magic signature after the
"/" (which means "from the repo root").
Fixes #9004
This makes it so
1. The informative status can work without showing untracked
files (previously it was disabled if bash.showUntrackedFiles was
false)
2. If untrackedfiles isn't explicitly enabled, we use -uno, so git
doesn't have to scan all the files.
In a large repository (like the FreeBSD ports repo), this can improve
performance by a factor of 5 or more.
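For reference, the git-side switch mentioned above can be toggled per repository, e.g.:
```fish
# explicitly opt back in to untracked-file scanning for one repository
git config --local bash.showUntrackedFiles true
```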
In b0084c3fc4, we refactored how event handlers get removed. But this
also caused us to remove "one-shot" handlers even if they have not yet
been fired. Fix this.
When switching this to use `git status`, I neglected to use the
correct definition of what a "dirty" and a "staged" change is.
So this now showed already staged files still as "dirty".
Fixes #8986
This makes it so `complete -c foo -n test1 -n test2` registers *both*
conditions, and when it comes time to check the candidate, tries both,
in that order. If any fails it stops, if all succeed the completion is offered.
The reason for this is that it helps with caching - we have a
condition cache, but conditions like
```fish
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] sub
```
defeat it pretty easily, because the cache only looks at the entire
script as a string - it can't tell that the first `test` is the same
in both.
So this means we separate it into
```fish
complete -f -c string -n "test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
+complete -f -c string -n "test (count (commandline -opc)) -ge 2" -n "contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
```
which allows the `test` to be cached.
In tests, this improves performance for the string completions by 30%
by reducing all the redundant `test` calls.
The `git` completions can also greatly benefit from this.
This adds a path builtin to deal with paths.
It offers the following subcommands:
filter to go through a list of paths and only print the ones that pass some filter - exist, are a directory, have read permission, ...
is as a shortcut for filter -q to only return true if one of the paths passed the filter
basename, dirname and extension to print certain parts of the path
change-extension to change the extension to a different one (as a string operation)
normalize and resolve to canonicalize the paths in various flavors
sort to sort paths, also only using the basename or dirname as a key
The definition of "extension" here was carefully considered and should line up with how extensions are actually used - ~/.bashrc doesn't have an extension, but ~/.conf.d does (".d").
These subcommands all compose well - they can read from arguments or stdin (like string), they can use null-delimited input or output (input is autodetected - if a NULL happens in the first PATH_MAX bytes it switches automatically).
It is both a failglob exception (so, like `set`, if a glob passed to it fails, it just doesn't get any arguments for it instead of triggering an error), and it passes output to command substitution buffers explicitly split (like `string split0`), so newlines are easy to handle.
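A few hedged sketches of how these compose (flag spellings are assumptions based on the description above):
```fish
path filter -d $PATH                           # keep only entries that exist and are directories
path basename $PATH | path sort                # pure string operations; reading from stdin works too
find . -name '*.c' | path change-extension o   # print the same paths with a .o extension
```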
This would still remove non-existent paths, which isn't a strict
inversion and contradicts the docs.
Currently, to only allow paths that exist but don't pass a type check,
you'd have to filter twice:
path filter -Z foo bar | path filter -vfz
If a shortcut for this becomes necessary we can add it later.
This is now added to the two commands that definitely deal with
relative paths.
It doesn't work for e.g. `path basename`, because after removing the
dirname prepending a "./" doesn't refer to the same file, and the
basename is also expected to not contain any slashes.
Because we now count the extension including the ".", we print an
empty entry.
This makes e.g.
```fish
set -l base (path change-extension '' $somefile)
set -l ext (path extension $somefile)
echo $base$ext
```
reconstruct the filename, and makes it easier to deal with files with
no extension.
This means "../" components are cancelled out even after non-existent
paths or files.
(the alternative is to error out, but being able to say `path resolve
/path/to/file/../../` over `path resolve (path dirname
/path/to/file)/../../` seems worth it?)
Yeah, the macOS tests fail because it's started in /private/var... with a
$PWD of /var.... So resolve canonicalizes the path, which makes it no
longer match $PWD.
Simply use pwd -P
This just goes back until it finds an existent path, resolves that,
and adds the normalized rest on top.
So if you try
/bin/foo/bar////../baz
and /bin exists as a symlink to /usr/bin, it would resolve that, and
normalize the rest, giving
/usr/bin/foo/baz
(note: We might want to add this to realpath as well?)
This includes the "." in what `path extension` prints.
This allows distinguishing between an empty extension (just `.`) and a
non-existent extension (no `.` at all).
This adds a "path" builtin that can handle paths.
Implemented so far:
- "path filter PATHS", filters paths according to existence and optionally type and permissions
- "path base" and "path dir", run basename and dirname, respectively
- "path extension PATHS", prints the extension, if any
- "path strip-extension", prints the path without the extension
- "path normalize PATHS", normalizes paths - removing "/./" components
- and such.
- "path real", does realpath - i.e. normalizing *and* link resolution.
Some of these - base, dir, {strip-,}extension and normalize operate on the paths only as strings, so they handle nonexistent paths. filter and real ignore any nonexistent paths.
All output is split explicitly, so paths with newlines in them are
handled correctly. Alternatively, all subcommands have a "--null-input"/"-z" and "--null-output"/"-Z" option to handle null-terminated input and create null-terminated output. So
find . -print0 | path base -z
prints the basename of all files in the current directory,
recursively.
With "-Z" it also prints it null-separated.
(if stdout is going to a command substitution, we probably want to
skip this)
All subcommands also have a "-q"/"--quiet" flag that tells them to skip output. They return true "when something happened". For match/filter that's when a file passed, for "base"/"dir"/"extension"/"strip-extension" that's when something about the path *changed*.
Filtering
---------
`filter` supports all the file*types* `test` has - "dir", "file", "link", "block"..., as well as the permissions - "read", "write", "exec" and things like "suid".
It is missing the tty check and the check for the file being non-empty. The former is best done via `isatty`, the latter I don't think I've ever seen used.
There currently is no way to only get "real" files, i.e. ignore links pointing to files.
Examples
--------
> path real /bin///sh
/usr/bin/bash
> path extension foo.mp4
mp4
> path extension ~/.config
(nothing, because ".config" isn't an extension.)
This teaches `--on-signal SIGINT` (and by extension `trap cmd SIGINT`)
to work properly in scripts, not just interactively. Note any such
function will suppress the default behavior of exiting. Do this for
SIGTERM as well.
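A minimal sketch of what this enables in a non-interactive script (the handler name is arbitrary):
```fish
# with this change the handler fires even in scripts, and it suppresses the default exit on SIGINT
function handle_sigint --on-signal SIGINT
    echo "caught SIGINT, cleaning up"
end
# the trap form behaves the same way
trap 'echo "caught SIGINT, cleaning up"' SIGINT
```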
Like `set` and `read` before it, `eval` can be used to set variables,
and so it can't be shadowed by a function without loss of
functionality.
So this forbids it.
Incidentally, this means we will no longer try to autoload an
`eval.fish` file that's left over from an old version, which would
have helped with #8963.
Previously, running `fish_add_path /foo /foo` would result in /foo
being added to $PATH twice.
Now we check that it hasn't already been given, so we skip the
second (and any further) occurrence.
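A quick sketch of the new behavior, mirroring the `/foo` example above:
```fish
fish_add_path /foo /foo
count (string match -- /foo $PATH)
# 1, the duplicate is skipped instead of being added twice
```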
This *might* be a bit faster running under TSAN, otherwise it takes >
400 seconds on Github Actions.
If this doesn't work we need to disable it for TSAN.
To recap, this means `&` in the middle of a word no longer
backgrounds.
So:
```fish
echo foo&bar # prints foo&bar
echo foo& bar # backgrounds an echo that prints "foo" and runs "bar"
```
This can no longer be changed. If "no-stderr-nocaret" is in
$fish_features it will simply be ignored.
The "^" redirection that was deprecated in fish 3.0 is now gone for good.
Note: For testing reasons, it can still be set _internally_ by running
"feature_flags_t::set". We simply shouldn't do that.
When you do
```fish
set foo-bar baz
```
"foo-baz" isn't usable as a variable *name*. When you just say the
"variable" is invalid that could also be interpreted to be a special
type of variable or something.
String tokens are subdivided by command substitutions. Some syntax errors
can occur in the gap between two command substitutions. Make the caret point
to the start of that gap, instead of the token start.
When expanding command substitutions, we use a naïve way of detecting whether
the cmdsub has the optional leading dollar. We check if the last character was
a dollar, which breaks if it's an escaped dollar. We wrongly expand
\$(echo "") to the empty string. Fix this by checking if the dollar was escaped.
The parse_util_* functions have a bunch of output parameters. We should
return a parameter bag instead (I think I tried once and failed).
Given
set var a
echo "$var$(echo b)"
the double-quoted string is expanded right-to-left, so we construct an
intermediate "$varb". Since the variable "varb" is undefined, this wrongly
expands to the empty string (should be "ab"). Fix this by isolating the
expanded command substitution internally. We do the same when handling
unquoted command substitutions.
Fixes #8849
The read test is now failing on GitHub actions even though it passes on
my Mac. It may be due to differences in dd between these two
environments. Stop using dd and just use head.
The read.fish check has a test where it limits the amount of data passed to
`read` to 8192 bytes, and verifies that fish reads exactly that amount.
This check occasionally fails on the OBS builds; it's very hard to repro a
failure locally, but I finally did it.
The amount of data written is limited via `yes` and `dd`:
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024")
The bug is that `dd` outputs a fixed number of "blocks" where a block
corresponds to a single read. As `yes` and `dd` are running concurrently,
it may happen that `dd` performs a short read; this then counts as a single
block. So `dd` may output less than the desired amount of data.
This can be verified by removing the 2>/dev/null redirection; on a
successful run dd reports `8+0 records out`, on a failed run it reports
`7+1 records out` because one of the records was short.
Fix this by using `fullblock` so that dd will no longer count a short read
as a single block. `head` would probably be a simpler tool to use but we'll
do this for now.
Happily it's not a fish bug. No need to relnote it.
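The fixed invocation should look roughly like this (`iflag=fullblock` is a GNU dd extension):
```fish
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024") iflag=fullblock 2>/dev/null
```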
This was already apparently supposed to work, but didn't because we
just overrode errno again.
This now means that, if a correctly named candidate exists, we don't
start the command-not-found handler.
See #8804
This used to call exec_subshell, which has two issues:
1. It creates a command substitution block which shows up in a stack
trace
2. It does much more work than necessary
This removes a useless "in command substitution" from an error message
in an autoloaded file, and it speeds up autoloading a bit (not
measurable in actual benchmarks, but microbenchmarks are 2x).
* Implement fish_wcstod_underscores
* Add fish_wcstod_underscores unit tests
* Switch to using fish_wcstod_underscores in tinyexpr
* Add tests for math builtin underscore separator functionality
* Add documentation for underscore separators for math builtin
* Add a changelog entry for underscore numeric separators
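A couple of small examples of the separator in use (expected output in comments):
```fish
math 1_000_000 + 2_000
# 1002000
math "5_00 * 2"
# 1000
```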
We can't always read in chunks because we often can't bear to
overread:
```fish
echo foo\nbar | begin
read -l foo
read -l bar
end
```
needs to have the first read read `foo` and the second read `bar`. So
here we can only read one byte at a time.
However, when we are directly redirected:
```fish
echo foo | read foo
```
we can, because the data is only for us anyway. The stream will be
closed after, so anything not read just goes away. Nobody else is
there to read.
This dramatically speeds up `read` of long lines through a pipe. How
much depends on the length of the line.
With lines of 5000 characters it's about 15x, with lines of 50
characters about 2x, lines of 5 characters about 1.07x.
See #8542.
This is the simple fix - if we have no valid digit, we have nothing to
return. So instead of returning a NULL, we return an error.
This is already the case for invalid octal escapes (like `\777`).
Fixes #8545
This reverts commits:
2d9e51b43ed1d9f147ec346ce8081b
The box drawing because it's entangled with the rest and we don't
currently use this anywhere I know of. Nor was it gated on terminfo,
so it could have broken things, for subjectively little gain.
Fixes #8727.
A history search ends when you move the cursor, but the commandline inserted by
history search is still marked as transient. This means that the next history
search will clear the transient commandline. This means we are dropping an undo
point, for example:
echo 11
echo 1
echo autosuggestion
echo^P # commandline is "echo 1"
^A # stop history search
^P # commandline is "echo 11"
^Z # Bug: commandline goes back to "echo", but it should be "echo 1"
In the worst case, we are switching from line-search to token-search (see
the attached test case). Clearing the transient edit means the line is gone
and only the token is left on the command line.
Today, a command like "var=val status " has custom completions
because we skip over the var=val variable override when detecting
the command token.
However if the custom completions read the commandline state (via
"commandline -opc") they do see they variable override, which breaks
them, most likely. Try "a=b git ".
For completions of wrapped commands, we already set a transient
commandline. Do the same for commands with leading variable overrides;
then git completions for "a=b git " will think the commandline is
"git ".
Previously, when we got an unknown option with --ignore-unknown, we
would increment woptind but still try to read the same contents.
This means in e.g.
```
argparse -i h -- -ooo -h
```
The `-h` would also be skipped as an option, because after the first
`-o` getopt reads the other two `-o` and skips that many options.
This could be handled more extensively in wgetopt, but the simpler fix
is to just skip to the next argv entry once we have an unknown option
- there's nothing more we can do with it anyway!
Additionally, document this and clearly explain that we currently
don't transform the option.
Fixes #8637
fish_git_prompt may run certain git commands which may invoke certain
external programs as specified `.git/config`. Prevent this by suppressing
certain git config options.
This affects the caret position. In an expression like
123 456
we previously reported:
123 456
^ missing operator
Now we do:
123 456
   ^ missing operator
We do it on the first space, which should be acceptable.
(no need for a changelog entry, we have already ignored #8511)
Only show the shebang warning for .fish commands.
Use the phrase "interpreter directive" as the formal name for the
shebang.
Switch from windows to Windows for the operating system.
"not not return 34" exits with 34, not 1. This behavior is pretty
surprising but benign. I think it's very unlikely that anyone relies
on the opposite behavior, because using two "not" decorators in one
job is weird, and code that compares not's raw exit code is rare.
The behavior doesn't match our docs, but it's not worth changing the
docs because that would confuse newcomers. Add a test to cement the
behavior and a comment to explain this is intentional.
I considered adding the comment at
parse_execution_context_t::populate_not_process where this behavior
is implemented but the field definition seems even better, because I
expect programmers to read that first.
Closes #8377
Commit e40eba358 (Treat text following quoted command substitution
as quoted) made parse_util_locate_cmdsubst_range() aware of quoted
command substitutions, by skipping surrounding text via quote_end().
However, it was not quite right. We fail to properly parse
two consecutive command substitutions in the same string,
because we don't maintain the quoting context across calls to
parse_util_locate_cmdsubst_range(). Let's track that bit in a
parameter. This allows us to get rid of the quote_end() hack.
Also apply this to the other place where we call
parse_util_locate_cmdsubst_range() in a loop (highlighting).
Fixes #8500
This fixes a regression about where we report errors:
echo error(here
old: ^
fixed:    ^
Commit 0c22f67bd (Remove the old parser bits, 2020-07-02) removed
uses of "error_offset_within_token" so we always report errors at
token start. Add it back, hopefully restoring the 3.1.2 behavior.
Note that for cases like
echo "$("
we report "unbalanced quotes" because we treat the $( as double
quote. Giving a better error seems hard because of the ambiguity -
we don't know if quote is meant to be inside or outside the command
substitution.
If you make a script called `foo` somewhere in $PATH, and did not give
it a shebang, this would end up calling
sh foo
instead of
sh /usr/bin/foo
which might not match up.
Especially if the path is e.g. `--version` or `-` that would end up
being misinterpreted *by sh*.
So instead we simply pass the actual_cmd to sh, because we need it
anyway to get it to fail to execute before.
For some reason, the window dimension parameters are ignored by tmux.
Not even an extra "resize-pane -x 80 -y 10" helps. So let's just drop
that assumption from our tests.
When the completion pager fills up all lines of the screen, we subtract
from the pager size the number of lines occupied by the prompt +
command line buffer (typically 1), so the command line is always
visible. However, we only subtract the number of lines *before* the
cursor, so on some multiline commandlines we draw a pager that is
too large for our screen, clobbering the commandline rendering.
Fix this by counting all lines.
Fixes #8509
Possibly fixes #8405
A command like "printf nonewline | sed s/x/y/" does not print a
concluding newline, whereas "printf nnl | string replace x y" does.
This is an edge case -- usually the user input does have a newline at
the end -- but it seems still better for this command to just forward
the user's data.
Teach most string subcommands to check if stdin is missing the trailing
newline, and stop adding one in that case.
This does not apply when input is read from commandline arguments.
* Most subcommands stop adding the final newline, because they don't
really care about newlines, so besides their normal processing,
they just want to preserve user input. They are:
* string collect
* string escape/unescape
* string join¹
* string lower/upper
* string pad
* string replace
* string repeat
* string sub
* string trim
* string match keeps adding the newline, following "grep". Additionally,
for string match --regex, it's important to output capture groups
separated by newlines, resulting in multiple output lines for an
input line. So it is not obvious where to leave out the newline.
* string split/split0 keep adding the newline for the same reason --
they are meant to output multiple elements for a single input line.
¹) string join0 is not changed because it already printed a trailing
zero byte instead of the trailing newline. This is consistent
with other tools like "find -print0".
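A small illustration of the new behavior:
```fish
printf 'hello' | string upper
# prints HELLO with no trailing newline, mirroring the input
printf 'hello\n' | string upper
# prints HELLO followed by a newline, as before
```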
Closes #3847
A «complete -C '~/fish-shell/build/fish '» fails to load custom
completions because we do not expand the ~, so
complete_param_for_command() thinks that this command is invalid.
Expand command tokens before loading custom completions.
Fixes #8442
Currently,
set -q --unpath PATH
simply ignores the "--unpath" bit (and same for "--path").
This changes it, so just like exportedness you can check pathness.
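A hedged sketch of the resulting behavior:
```fish
set --path dirs /tmp /var/tmp
set -q --path dirs; echo $status     # 0: dirs is set and is a path variable
set -q --unpath dirs; echo $status   # 1: the --unpath check fails for a path variable
```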
This finds the first broken component, to help people figure out where
they misspelt something.
E.g.
```
echo foo >/usr/lob/systemd/system/machines.target.wants/var-lib-machines.mount
```
will now show:
```
warning: Path '/usr/lob' does not exist
```
which would help with seeing that it should be "/usr/lib".
On a commandline like "ls arg" (cursor at end) we do not expand
abbreviations on enter. OTOH, on "ls " we do expand. This can be
frustrating because it means that the two obvious ways to suppress
abbrevation expansion (C-Space or post-expansion C-Z) cannot be used to
suppress expansion of a command without arguments. (One workaround is
"ls #".)
Only expand-on-execute if the cursor is at the command name (no space
in between).
This is a strict improvement for realistic scenarios, because if there
is a space, the user has already expressed the intent to not expand
the abbreviation. (I hope no one is using recursive abbreviations.)
Closes #8423
This was supposed to act like `type -q` or `command -q`, in that it
returns 0 if at least 1 exists.
But because it used the wrong variable it didn't.
Fixes #8431.
This fixes printing octal and hex values that are negative or larger
than UINT_MAX.
Negative values get a leading -, like:
> math --base hex -10
-0xa
Fixes #8417.
Commit ec3d3a481 (Support "$(cmd)" command substitution without line
splitting, 2021-07-02) started treating an input string like
"a$()b" as if it were "a"$()"b". Yet, we do not actually insert the
virtual quotes. Instead we just adapted the definition of when quotes
are closed - hence the changes to quote_end().
parse_util_locate_cmdsubst_range() is aware
of the changes to quote_end() but some of its
callers like parse_util_detect_errors_in_argument() and
highlighter_t::color_as_argument() are not. They split strings at
command substitution boundaries without handling the special quoting
rules. (Only the expansion logic did it right.)
Fix this by handling the special quoting rules inside
parse_util_locate_cmdsubst_range(). This is a bit hacky since it
makes it harder for callers to process some substrings in between
command substitutions, but that's okay because current callers only
care about what's inside the command substitutions.
Fixes #8394
Since #4376, for-loops would set the loop variable outside, so it
stays valid.
They did this by doing the equivalent of
```fish
set -l foo $foo
for foo in 1 2 3
```
And that first imaginary `set -l` would also fire a set-event.
Since there's no use for it and the variable isn't actually set, we
remove it.
Fixes #8384.
widechar_width no longer classifies U+1F41F as widened-in-9, so the
width no longer changes.
Since we're interested in testing the change here, we need a different
emoji.
Just use 🥁, which was introduced in 9 as wide, and therefore widened
in 9.
Like the $status commit, this would add the offset to already existing
errors, so
```fish
(foo)
(bar)
something
```
would see the "(foo)" error, store the correct error location, then
see the "(bar)" error, and *add the offset of (bar)* to the "(foo)"
error location.
Solve this by making a new error list and appending it to the existing
ones.
There's a few other ways to solve this, including:
- Stopping after the first error (we only display the first anyway, I
think?)
- Making it so the source location has an "absolute" flag that shows
the offset has already been added (but do we ever need to add two offsets?)
I went with the simpler fix.
This would break the location of any prior errors without doing
anything of value.
E.g.
```fish
echo foo | exec grep # this exec is not allowed!
$status
somethingelse # The error might be found here!
```
Would apply the offset of `$status` to the offset of `exec`, locating
the error for `exec` somewhere after $status!
Prior to this change, tmux based tests would call 'isolated-tmux' which would
initialize tmux on first call, an admitted "evil hack." Switch to requiring
an explicit call to 'isolated-tmux-start' which then defines 'isolated-tmux'
and other functions. Add some loop-until-prompt logic into
'isolated-tmux-start'. This improves reliability of the tmux tests on systems
under load; at least it makes the tests pass in the background on my Mac.
Remove the '$sleep' variable, to be replaced with 'tmux-sleep'.
This makes it so we treat backspaces as width -1, but never go below a
0 total width when talking about *lines*, like in screen or string
length --visible.
Fixes #8277.
When cd is passed a broken symlink, this changes the error message from
"no such directory" to "broken symbolic link". This scenario probably
won't happen very often since completion won't suggest broken symlinks
but it can't hurt to give a good error.
Fish used to do this until 7ac5932. This logic used to be in
path_get_cdpath, however, that is only used for highlighting, so we
don't need error messages there. Changing cd is enough.
Reword from "rotten" to "broken" since that's what file(1) uses.
Clean-up leftovers from old "rotten" code (nomen est omen).
See #8264
This currently changes builtin realpath with the "-s" option:
builtin realpath -s ///tmp
previously would print "///tmp", now it prints "/tmp".
The only thing "allow_leading_double_slashes" does is allow *two*
slashes.
This is important for `path match`, to be introduced in #8265.
The tmux-prompt test would sometimes fail because the first call was:
isolated-tmux capture-pane -p
this would run a capture-pane which would race with starting fish
itself; occasionally the pane would be empty since fish has not yet
drawn a prompt. Add a loop to give fish time to draw the prompt.
This used the *logical* $PWD, but realpath would operate on the
physical $PWD if given ".", even with -s. This makes this test fail if the $PWD is
logically different from physical.
This was long overdue since the setup logic is much more complex than
the actual tests.
tmux-prompt.fish had extra logic to protect against XDG_CONFIG_HOME
with leading double double-dot. I believe this is no longer necessary
with the new test driver.
We still use our own temp dir because we want to be able to run this
independently of the test driver, This can be useful for debugging
tests. For example we can insert a "$tmux attach" command in a test,
and then run
build/fish -C 'source tests/test_functions/isolated-tmux.fish' tests/checks/tmux-bind.fish
This allows to inspect the state of the test and debug interactively.
Attaching to the terminal doesn't work when running inside littlecheck
because littlecheck consumes our output and doesn't give us a terminal.
(Maybe there's an easy way to fix that?)
On request of a team member, this patches `basic.fish` to no longer
depend on being invoked by the test driver and started up in a $PWD that
points to a clean temporary directory.
This was requested by a team member who would like for some tests to
remain invokable (in thier own $HOME) directly via littlecheck without
relying on the test driver to prep the environment.
A comment explaining the rationale is also added so this doesn't get
passed down as folklore "you need to include this for tests to run" even
though no one understands why.
Tests are now executed in a test-specific temporary directory, so test
output on failure should be reproducible/reusable as-is without needing
to have TMPDIR defined (as it only exists by default under macOS).
Instead of trying to assert that there are no zombies when the test
starts (which often fails) and to prevent conflating existing or
irrelevant zombies with the ones we are interested in checking for,
have `ps` also emit the parent process id and filter its output to
include only children of the current fish instance.
Aside from the fact that the shared state could cause problems, tests
were randomly assuming it would be created where that wasn't the case.
In particular, `redirect.fish` and `basic.fish` were failing on only
macOS because `../test/temp` didn't exist yet - it would be created by
other tests later.
This disables job control inside command substitutions. Prior to this
change, a cmdsub might get its own process group. This caused it to fail
to cancel loops properly. For example:
while true ; echo (sleep 5) ; end
could not be control-C cancelled, because the signal would go to sleep,
and so the loop would continue on. The simplest way to fix this is to
match other shells and not use job control in cmdsubs.
Related is #1362
* commandline: Add --is-valid option to query whether it's syntactically complete
This means querying when the commandline is in a state that it could
be executed. Because our `execute` bind function also inserts a
newline if it isn't.
One case that's not handled right now: `execute` also expands
abbreviations, those can technically make the commandline invalid
again.
Unfortunately we have no real way to *check* without doing the
replacement.
Also, since abbreviations are only available in command position, when
you _execute_ them the commandline will most likely be valid.
This is enough to make transient prompts work:
```fish
function reset-transient --on-event fish_postexec
set -g TRANSIENT 0
end
function maybe_execute
if commandline --is-valid
set -g TRANSIENT 1
commandline -f repaint
else
set -g TRANSIENT 0
end
commandline -f execute
end
bind \r maybe_execute
```
and then in `fish_prompt` react to $TRANSIENT being set to 1.
Because we are, ultimately, interested in how many cells a string
occupies, we *have* to handle carriage return (`\r`) and line
feed (`\n`).
A carriage return sets the current tally to 0, and only the longest
tally is kept. The idea here is that the last position is the same as
the last position of the longest string. So:
abcdef\r123
ends up looking like
123def
which is the same width as abcdef, 6.
A line feed meanwhile means we flush the current tally and start a new
one. Every line is printed separately, even if it's given as one.
That's because, well, counting the width over multiple lines
doesn't *help*.
As a sidenote: This is necessarily imperfect, because, while we may
know the width of the terminal ($COLUMNS), we don't know the current
cursor position. So we can only give the width, and the user can then
figure something out on their own.
But for the common case of figuring out how wide the prompt is, this
should do.
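Assuming this is the measurement used by `string length --visible`, a small example:
```fish
string length --visible (printf 'abcdef\r123')
# 6, because the carriage return rewinds to column 0 and the longest tally wins
```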
* Add `set --function`
This makes the function's scope available, even inside of blocks. Outside of blocks it's the toplevel local scope.
This removes the need to declare variables locally before use, and will probably end up being the main way variables get set.
E.g.:
```fish
set -l thing
if condition
set thing one
else
set thing two
end
```
could be written as
```fish
if condition
set -f thing one
else
set -f thing two
end
```
Note: Many scripts shipped with fish use workarounds like `and`/`or`
instead of `if`, so it isn't easy to find good examples.
Also, if there isn't an else-branch in that above, just with
```fish
if condition
set -f thing one
end
```
that means something different from setting it before! Now, if
`condition` isn't true, it would use a global (or universal) variable of
the same name!
Some more interesting parts:
Because it *is* a local scope, setting a variable `-f` and
`-l` in the toplevel of a function ends up the same:
```fish
function foo2
set -l foo bar
set -f foo baz # modifies the *same* variable!
end
```
but setting it locally inside a block creates a new local variable
that shadows the function-scoped variable:
```fish
function foo3
set -f foo bar
begin
set -l foo banana
# $foo is banana
end
# $foo is bar again
end
```
This is how local variables already work. "Local" is actually "block-scoped".
Also `set --show` will only show the closest local scope, so it won't
show a shadowed function-level variable. Again, this is how local
variables already work, and could be done as a separate change.
As a fun tidbit, functions with --no-scope-shadowing can now use this to set variables in the calling function. That's probably okay given that it's already an escape hatch (but to be clear: if it turns out to problematic I reserve the right to remove it).
Fixes #565
Fixes some regressions from 35ca42413 ("Simplify some parse_util functions").
The tmux tests are not beautiful but I find them easy to write.
Probably a pexpect test would also be enough here?
for PWD in foo; true; end
prints:
>..src/parse_execution.cpp:461: end_execution_reason_t parse_execution_context_t::run_for_statement(const ast::for_header_t&, const ast::job_list_t&): Assertion `retval == ENV_OK' failed.
because this used the wrong way to see if something is read-only.
This allows us to test that `test` takes numbers with decimal point even in comma-using locales,
to stop those pesky americans from breaking everything again.
(and yes, we use french to keep myself honest)
Through a mechanism I don't entirely understand, $PWD is sometimes
writable (so that `cd` can change it) and sometimes not.
In this case we ended up with it writable, which is wrong.
See #8179.
This didn't do all the syntax checks, so something like
fish -c 'echo foo; and $status'
complained of a missing command `0` (i.e. $status), and
fish -c 'echo foo | exec grep'
hit an assert!
So we do what read_ni does, parse each command into an ast, run
parse_util_detect_errors on it if it worked and then eval the ast.
It is possible to do this neater by modifying parser::eval, but I
can't find where.
This is slightly unclean. Even tho it would otherwise be syntactically
valid, using $status as a command is very very very likely to be an
error, like
if not $status
We have reports of this surprisingly regularly, including #2773.
Because $status can only ever be a value from 0 to 255, it is also
very unlikely to be an actual command, and that command is very
unlikely to do what you want.
So we simply point the user towards the "conditions" help section,
that should explain things.
This is opt-in through a new feature flag "ampersand-nobg-in-token".
When this flag and "qmark-noglob" are enabled, this command no longer
needs quoting:
curl https://example.com/thing?foo=bar&duran=duran
Compared to the previous approach e1570a4 ("Let '&' only separate as
the first char of a word"), this has some advantages:
1. "&&" and "&>" are no longer affected. They are still special, even
if used between tokens without spaces, like "echo bar&>foo".
Maybe this is not really *better*, but it avoids risking to annoy
users by breaking the old variant.
2. "&" is still special if at the end of a token, like in "sleep 1&".
Word movement is not affected by the semantics change, so Alt-F and
friends still stop at every "&".
Currently, if a "return" is given outside of a function, we'd just
throw an error.
That always struck me as a bit weird, given that scripts can also
return a value.
So simply let "return" outside also exit the script, kinda like "exit"
does.
However, unlike "exit" it doesn't quit an interactive shell - it seems
weird to have "return" do that as well. It sets $status, so it can be
used to quickly set that, in case you want to test something.
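A tiny sketch of what this allows (the file name is hypothetical):
```fish
# demo.fish
echo doing things
return 42
echo never reached
```
Running `fish demo.fish; echo $status` would then print 42.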
In the variable handler, we just go through the entire thing and keep
every element once.
If there's a duplicate, we set it again, which calls the handler
again.
This takes a bit of time, to be paid on each startup. On my system,
with 100 already deduplicated elements, that's about 4ms (compared to
~17ms for adding them to $PATH).
It's also semantically more complicated - now this variable
specifically is deduplicated? Do we just want "unique" variables that
can't have duplicates?
However: This entirely removes the pathological case of appending to
$fish_user_paths in config.fish (which should be an FAQ entry!), and the implementation is quite simple.
This adds a hack to the parser. Given a command
echo "x$()y z"
we virtually insert double quotes before and after the command
substitution, so the command internally looks like
echo "x"$()"y z"
This hack allows to reuse the existing logic for handling (recursive)
command substitutions.
This makes the quoting syntax more complex; external highlighters
should consider adding this if possible.
The upside (more Bash compatibility) seems worth it.
Closes #159
In some setups (e.g. MacPorts), $tmpdir can expand to more than
100 characters and tests fail with 'socket file name too long'
errors.
Using relative path to socket file fixes the issue.
* string: Allow `collect --no-empty` to avoid empty elision
Currently we still have that issue where
test -n (thing | string collect)
can return true if `thing` doesn't print anything, because the
collected argument will still be removed.
So, what we do is allow `--no-empty` to be used, in which case we
print one empty argument.
This means
test -n (thing | string collect -n)
can now be safely used.
"no-empty" isn't the best name for this flag, but string's design
really incentivizes reusing names, and it's not *terrible*.
* Switch to `--allow-empty`
`--no-empty` does the exact opposite for `string split` and split0.
Since `-a`/`--allow-empty` already exists, use it.
The tmux-prompt test was failing when run more than once, because
XDG_DATA_HOME has a leading double-dot, causing the uvars file to
leak across sessions. Descend more deeply into our tmpdir to isolate
our XDG_DATA_HOME.
This reverts commit b56b230076.
which somehow made us miss repaints on uvar notifications.
The commit was a workaround for a polling bug which was later properly
fixed by 7c5b8b855 ("Use the uvar notifier pipe timestamp to avoid
excessive polling"), so it's no longer necessary.
Add a system test. If I had a better understanding of the bug I could
probably write a better test.
Fixes #8088
We used to warn about PATH and CDPATH that are not valid directories,
but only if they contain colons.
However, the warning was a false positive because we would split
those values by colons anyway. So there is nothing left we want to
warn about.
Fixes #8095
sigint2 would hang (probably because of different semantics in signal
delivery?)
wcstod isn't implemented correctly, so math can't do hex numbers.
OpenBSD only passes the filename as argv[0] and doesn't give us another feature I know of, so status fish-path can't work.
* Try to set LC_CTYPE to something UTF-8 capable
When fish is started with LC_CTYPE=C (even just effectively, often via
LC_ALL=C!), it's basically broken. There's no way to handle non-ASCII
characters with a C locale unless we want to write our own
locale-independent replacements for all of the system functions.
Since we're not going to do that, let's try to find *some locale* for
LC_CTYPE.
We already do that in __fish_setlocale, but that's
- a bit of a weird thing that reads unstandardized system
configuration files
- allows setting locale to C explicitly
So it's still easily possible to end up in a broken configuration.
Now, the issue with this is that there is (AFAICT) no portable way to
get a list of all allowed locales and C.UTF-8 is not standardized, so
we have no one locale to fall back on and are forced to try a few. The
list we have here is quite arbitrary, but it's a start.
Python does something similar and only tries C.UTF-8, C.utf8 and
"UTF-8".
Once C.UTF-8 is (hopefully) standardized, that will just start
working (tm).
Note that we do not *export* the fixed LC_CTYPE variable, so external
programs still have to deal with the C locale, but we have no real
business messing with the user's environment.
To turn it off: $fish_allow_singlebyte_locale, if set to something true (like "1"),
will re-run the locale initialization and skip the bit where we force
LC_CTYPE to be utf8-capable.
This is mainly used in our tests, but might also be useful if people
are trying to do something weird.
The hope is that the noshebang test was fixed on old glibc
through e74b9d53df. Revert the previous optimistic attempts to
fix these through adding sleeps and subshells.
This reverts commit b3da0bd5a2.
This reverts commit 8a86d3452f.
This is an attempt to solve the test failures on Launchpad's CI.
I'm assuming when we do a redirection like
foo > file
and then try to execute `file` immediately afterwards, we either
haven't written it soon enough or closed the file, so we get a "text
file busy" error.
So, when we do that in a new fish the file should be closed once it
quits.
See #8021.
When you try to execute a file directly after you've written to it,
you might, on some systems, get a "text file busy" error.
So we unfortunately have to sleep to avoid it.
See #8021 for where this was added,
537b3f6cb1 for the same problem.
Now that `$last_pid` is never fish's pid, we no longer need to force
jobs to run in their own pgroup. Restore the job control behavior to
what it was prior, so that signals may be delivered properly in
non-interactive mode.
This reverts commit 3255999794
Prior to this change, a function with an on-job-exit event handler must be
added with the pgid of the job. But sometimes the pgid of the job is fish
itself (if job control is disabled) and the previous commit made last_pid
an actual pid from the job, instead of its pgroup.
Switch on-job-exit to accept any pid from the job (except fish itself).
This allows it to be used directly with $last_pid, except that it now
works if job control is off. This is implemented by "resolving" the pid to
the internal job id at the point the event handler is added.
Also switch to passing the last pid of the job, rather than its pgroup.
This aligns better with $last_pid.
It is possible to run a function when a process exits via `function
--on-process-exit`, or when a job exits via `function --on-job-exit`.
Internally these were distinguished by the pid in the event: if it was
positive, then it was a process exit. If negative, it represents a pgid
and is a job exit. If zero, it fires for both jobs and processes, which is
pretty weird.
Switch to tracking these explicitly. Separate out the --on-process-exit
and --on-job-exit event types into separate types. Stop negating pgids as
well.
This switches builtin_wait from waiting on jobs in the active job list, to
waiting on the wait handles. The wait handles may be either derived from
the job list itself, or from saved wait handles from jobs that exited in
the background.
Fixes #7210
Prior to this fix, an escaped character like \x41 (hex for ascii A)
was interpreted the same was as A, so that $\x41 would be the same
as $A. Fix this by inserting an INTERNAL_SEPARATOR before these escapes,
so that we no longer treat it as part of the variable name.
This also affects brackets; don't treat echo $foo\1331\135 the same as
echo $foo[1].
Fixes #7969
Just add some extra sleep time so it hopefully also works when the
CI system is overloaded. This succeeded >60 times in the CI, without
a single failure.
In case it legitimately fails again, we should provide simple steps
to reproduce the failure interactively (using "tmux attach").
The uvar issue only triggered because two fish are started - one is
running the tmux-complete script, the other one is running inside tmux.
We could reduce the complexity of this test by writing it in a
different language, like sh or python.
Reproducible at least on Linux, where the "named pipe" universal
variable notifier is used:
rm -rf build/test/xdg_config
XDG_CONFIG_HOME=build/test/xdg_config ./build/fish -c "xterm -e ./build/fish"
The child fish reacts to keyboard input with a noticeable initial
delay. This is because the universal variable file is polled over
a million times, even when I immediately press Control-D. This polling
prevents readb() from handling keyboard input.
Before commit 939aba02d ("Refactor input_common.cpp:readb"), readb()
reacted to keyboard input even when there were universal variable
notifications. Restore this behavior, but make sure to call the
universal variable notifier after the new "prepare_to_select" logic.
Maybe the problem is in the notifier but the old behavior was sane.
Fixes the problems described in
7a556ec6f2 (commitcomment-49773677)
Adding "-d uvars-file" to the reproducer shows that we are checking
the uvar file repeatedly:
uvar-file: universal log sync
uvar-file: universal log sync elided based on fast stat()
uvar-file: universal log no modifications
This only uses the functions fish ships with, but still doesn't allow
any *customization*, which is the point of no-config.
This makes it a lot more usable, given that the actual normal prompt
and things are there.
This still doesn't set any colors, because we don't run
__fish_config_interactive because we don't read config.fish (any
config.fish), because that would run the snippets.
In many cases we currently discard escaped newlines, since they
are often unnecessary (when used around &|;). Escaped newlines
are useful for structuring argument lists. Allow them for variable
assignments since they are similar.
Closes #7955