All usages of `mktemp` must go through the (fish-only) `mktemp` test function
that abstracts over the differences across multiple platforms/flavors.
Tests can easily be run individually via `ninja -C build test_xxx`, and there
isn't a good reason to manually override $HOME and $XDG_CONFIG_HOME for
a test here and a test there.
If it's absolutely necessary, littlecheck.py should be extended to support a
`%temp` variable initialized to a temporary directory and that can be used
instead of calling out to the platform-provided `mktemp` via a subshell.
For unknown reasons, the i686 Launchpad builders fail on this date,
but apparently not the others.
Let's just remove it; we've already tested dates older than the epoch, so this is
slightly redundant.
As discussed in #9221, a bug in the autocomplete that was fixed in 66391922
caused completions to be incorrectly suppressed. The dropped test/check was
inadvertently relying on the buggy behavior and expected a git invocation to
generate no completions but there are, in fact, completions now that the bug has
been resolved.
cc @faho: I'm not sure if you want to replace this with a different check that
actually doesn't yield any completions or if you're happy with it just being
dropped.
This fails on old Ubuntu with:
> touch: invalid date format ‘190112112040.39’
Because we don't actually need the seconds here, we just use minute
resolution. It's fine.
Also use `path mtime`, because that's a portable way to get the mtime.
I forgot `stat` is non-portable. There's no great way to portably get a
machine-readable representation of stat(2) for a file. I don't want to ship our
own lstat(2) wrapper executable just for this test and don't want to fork out to
python or perl for this either - I just wanted to get the tests to pass under
WSL :'(
Anyway, just give up and make it skip just for WSL. If another OS fails this
test in the future, the comments and existing workaround will make it easy to
figure out what the problem is and what needs to be done. We'll cross that
bridge when we get there.
It turns out that not all systems print an unsigned integer as the output of
`stat -c %Y xxx` and the leading `-` can be misinterpreted as a parameter to
`string match`.
There's no guarantee (nor requirement) that the filesystem support pre-epoch
modification dates. If it doesn't, the `test` tests were failing to get the
expected results.
Skip the test if it seems the fs doesn't support pre-epoch timestamps
(determined by the pre-epoch mtime of `oldest` evaluating to 0 or the Unix epoch).
* Replace ";" with "\n" in alias-generated functions
This can let us add a "#" in our aliases to make
them ignore additional arguments.
* Update changelog about aliases that ignore arguments
* Update test for alias.fish
This is now compliant with the aliases that can
ignore arguments.
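For illustration, a small sketch of the trick this now enables (the alias name is arbitrary):
```fish
# the trailing '#' comments out the $argv that alias appends to the function body,
# so extra arguments are ignored
alias banana 'echo yes #'
banana these arguments are ignored    # prints just "yes"
```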
When fish runs with job control enabled, it transfers ownership of the
tty to a child process, and then reclaims the tty after the process
exits. If job control is disabled then fish does not transfer or reclaim
the tty.
It may happen that the child process creates a pgroup and then transfers
the tty to it. In that case fish will not attempt to reclaim the tty, as
fish did not transfer it. Then when fish reads from stdin it will
receive SIGTTIN instead of data.
Fix this by unconditionally claiming the tty in readline().
Fixes #9181
This errored out *later* because the result was infinite or NaN, but
it didn't actually stop evaluation.
I'm not sure if there is a way to get floating point math to turn an
infinity back into something that doesn't depend on a literal
infinity, but division by zero conceptually isn't a thing we can
support.
There are entire branches of maths dedicated to figuring out what
dividing by "basically zero" means and we don't have to get into it.
This is essentially the inverse of `string pad`.
Where that adds characters to get up to the specified width,
this adds an ellipsis to a string if it goes over a specific maximum width.
The char can be given, but defaults to our ellipsis string.
("…" if the locale can handle it and "..." otherwise)
If the ellipsis string is empty, it just truncates.
For arguments given via argv, it goes line-by-line,
because otherwise length makes no sense.
If "--no-newline" is given, it adds an ellipsis instead and removes all subsequent lines.
Like pad and `length --visible`, it goes by visible width,
skipping recognized escape sequences, as those have no influence on width.
The default target width is the shortest of the given widths that is non-zero.
If the ellipsis is already wider than the target width,
we truncate instead. This is safer overall, so we don't e.g. move into a new line.
This is especially important given our default ellipsis might be width 3.
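A minimal usage sketch - assuming the subcommand and option spellings `string shorten`, `--max` and `--char`, which aren't spelled out above:
```fish
# shorten to at most 10 visible cells, with an explicit ellipsis string
string shorten --char '...' --max 10 'call me ishmael'
# call me...
```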
When selecting items in the pager, only the latest of those items is kept
in the edit history, as so-called transient edit. Each new transient edit
evicts any old transient edit (via undo).
If the pager is closed by a command that performs another transient edit
(like history-token-search-backward) we thus inadvertently undo (= remove)
the token inserted by the pager. Fix this by closing a transient edit
session when closing the pager. Token search will start its own session.
Fixes #9160
This starts two sleep processes and expects them to be killed on
SIGHUP.
Unfortunately, if this ever fails the second run will also fail
because it'll see the old sleep still lying around (because it'll run
for 130 seconds).
So, what we do is:
1. Keep the pids for these specific sleeps
2. Check if any of them are still running (and only fail for them)
3. Kill them from python
Fixes #9152
This checked specifically for "| and" and "a=b" and then just gave the
error without a caret at all.
E.g. for a /tmp/broken.fish that contains
```fish
echo foo
echo foo | and cat
```
This would print:
```
/tmp/broken.fish (line 3): The 'and' command can not be used in a pipeline
warning: Error while reading file /tmp/broken.fish
```
without any indication other than the line number as to the location
of the error.
Now we do
```
/tmp/broken.fish (line 3): The 'and' command can not be used in a pipeline
echo foo | and cat
           ^~^
warning: Error while reading file /tmp/broken.fish
```
Another nice one:
```
fish --no-config -c 'echo notprinted; echo foo; a=b'
```
failed to give the error message!
(Note: Is it really a "warning" if we failed to read the one file we
wer told to?)
We should consider either centralizing these error messages
completely, or always passing them and removing this "code" system, because
it's only used in some cases.
This skipped printing a "^" line if the start or length of the error
was longer than the source.
That seems like the correct thing at first glance; however, it means
that the caret line isn't skipped *if the file goes on*.
So, for example
```fish
echo "$abc["
```
by itself, in a file or via `fish -c`, would not print an error, but
```fish
echo "$abc["
true
```
would. That's not a great way to print errors.
So instead we just... imagine the start was at most at the end.
The underlying issue why `echo "$abc["` causes this is that `wcstol`
didn't move the end pointer for the index value (because there is no
number there). I'd fix this, but apparently some of
our recursive variable calls absolutely rely on this position value.
This stops us from loading the completions for e.g. `./foo` if there
is no `foo` in path.
This is because the completion scripts will call an unqualified `foo`,
and then error out.
This of course means if the script would work because it never calls
the command, we still don't load it.
Pathed completions via `complete --path` should be unaffected because
they aren't autoloaded anyway.
Workaround for #3117
Fixes #9133
This was misguidedly "fixed" in
9e08609f85, which made printf error out
with any "-"-prefixed words as the first argument.
Note: This means currently `printf --help` doesn't print the help.
This also matches `echo`, and we currently don't have anything to make
a literal `--help` execute a builtin help except for keywords. Oh well.
Fixes #9132
This used to be kept, so e.g. testing it with
fish_read_limit=5 echo (string repeat -n 10 a)
would cause the prompt and such to error as well.
Also there was no good way to get back to the default value
afterwards.
* string repeat: Don't allocate repeated string all at once
This used to allocate one string and fill it with the necessary
repetitions, which could be a very very large string.
Now, it instead uses one buffer and fills it to a chunk size,
and then writes that.
This fixes:
1. We no longer crash with too large max/count values. Before they
caused a bad_alloc because we tried to fill all RAM.
2. We no longer fill all RAM if given a big-but-not-too-big value. You
could've caused fish to eat *most* of your RAM here.
3. It can start writing almost immediately, instead of waiting
potentially minutes to start.
Performance is about the same to slightly faster overall.
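As a rough sketch of point 3, the user-visible effect:
```fish
# output is now produced in chunks, so this starts printing immediately instead
# of first building a ~100 MB string in memory
string repeat -n 100000000 x | head -c 10
```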
The previous fix was reverted because it broke another scenario. Add tests
for both scenarios.
The first test exposes another problem: autosuggestions are sometimes not
recomputed after selecting the first completion with Tab Tab. Fix that too.
Since the fix for #3892, this escaping style escapes
\n to \\n
as well as
\\ to \\\\
\' to \\'
I believe these two are the only printable characters that are escaped with
ESCAPE_NO_PRINTABLES.
The rationale is probably to keep the encoding unambiguous and reversible.
However that doesn't justify escaping the single quote. Probably this was
an accident, so let's revert that part.
This has the nice effect that single quotes will no longer be escaped
when rendered in the completion pager (which is consistent with other
special characters). Try it:
complete : -a "aaa\'\; aaaa\'\;" -f
Also this makes the error output of builtin bind consistent:
$ bind -e --preset \;
$ bind -e --preset \'
$ bind \;
bind: No binding found for sequence “;”
$ bind \'
bind: No binding found for sequence “'”
the last line is clearly better than the old version:
bind: No binding found for sequence “\'”
In general, the fact that ESCAPE_NO_PRINTABLES escapes the (printable)
backslash is weird but I guess it's fine because it looks more consistent to
users, even though the result is an undocumented subset of the fish language.
command_line_has_transient_edit tracks the actual command line, not the
pager search field. We accidentally reset it after modifying the search field
which causes unexpected behavior - the commandline added by the completion
pager remains even after I press Escape.
This was an inadvertent change from
cc632d6ae9.
Because we used wgetcwd directly before, we always got the "physical"
resolved $PWD.
There's an argument to be made to use the logical $PWD here as well
but I prefer not to make changes like that in a random commit without
good reason.
This can be used to print the modification time, like `stat` with some
options.
The reason is that `stat` has caused us a number of portability
headaches:
1. It's not available everywhere by default
2. The versions are quite different
For instance, with GNU stat it's `stat -c '%Y'`, with macOS it's `stat
-f %m`.
So now checking a cache file can be done just with builtins.
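A minimal sketch of such a cache check - assuming `path mtime` prints seconds since the epoch, with placeholder paths:
```fish
# rebuild if the cache file is older than its source ($cache and $source are placeholders)
if test (path mtime $cache) -lt (path mtime $source)
    echo "cache is stale"
end
```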
This adds a line to `set --show`s output like
```
$PATH: originally inherited as |/home/alfa/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/flatpak/exports/bin|
```
to help with debugging.
Note that this means keeping an additional copy of the original
environment around. At most this would be one ARG_MAX's worth, which
is about 2M.
This is sort of slow because it's called hundreds of times.
We used to have a cache, introduced in ad9b4290e, but it was removed
in fee5a9125a because it had
false-positives.
So, because the issue is that this is called hundreds of
times per commandline, we cache it keyed on the commandline.
This speeds up `complete -C'git sta'` by a factor of 2.3x.
It's still useful without, for instance to implement a command that
takes no options, or to check min-args or max-args.
(technically no optspecs, no min/max args and --ignore-unknown does
nothing, but that's a very specific error that we don't need to forbid)
Fixes #9006
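A minimal sketch of what this now allows - no optspecs, just argument-count checks:
```fish
function greet
    # no option specs at all; argparse only validates the argument count
    argparse --min-args 1 --max-args 1 -- $argv
    or return
    echo "hello, $argv[1]"
end
```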
Commit ad9b4290e optimized git completions by adding a completion that would
run on every completion request, which allows to precompute data used by
other completion entries. Unfortunately, the completion entry is not run
when the commandline contains a flag like `git -C`. If we didn't
already load git.fish, we'd error. Additionally, we got false positive
completions for `git diff -c`.
So this hack was a very bad idea. We should optimize in another way.
This is simply an error in test setup. There's a limit to how far we
can isolate them from the system.
(it's possible new cmake versions close fds automatically since I
can't reproduce the original issue via `ninja test` or `make test`)
Fixes #9017
This lacks the tmux-256color terminfo entry, leading to spurious
warnings like
warning: Could not set up terminal. <= no check matches
warning: TERM environment variable set to \'tmux-256color\'. <= no check matches
warning: Check that this terminal type is supported on this system. <= no check matches
warning: Using fallback terminal type \'ansi\'. <= no check matches
Git's pathspec system is kind of annoying:
> A pathspec that begins with a colon : has special meaning. In the short form, the leading colon : is followed by zero or more "magic signature" letters (which optionally is terminated by another colon :), and the remainder is the pattern to match against the path. The "magic signature" consists of ASCII symbols that are neither alphanumeric, glob, regex special characters nor colon. The optional colon that terminates the "magic signature" can be omitted if the pattern begins with a character that does not belong to "magic signature" symbol set and is not a colon.
So if we complete `:/foo`, that "works" because "f" is alphanumeric
and so the "/" is the only magic character here.
If, however the filename starts with a magic character, that's used as
a magic signature.
So we do what the docs say and terminate the magic signature after the
"/" (which means "from the repo root").
Fixes #9004
This makes it so
1. The informative status can work without showing untracked
files (previously it was disabled if bash.showUntrackedFiles was
false)
2. If untrackedfiles isn't explicitly enabled, we use -uno, so git
doesn't have to scan all the files.
In a large repository (like the FreeBSD ports repo), this can improve
performance by a factor of 5 or more.
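For reference, a hedged sketch of opting back in to untracked-file scanning (the fish variable spelling is an assumption, not taken from the message above):
```fish
# per repository, via git's own config key
git config --local bash.showUntrackedFiles true
# or for the prompt globally (assumed variable name)
set -g __fish_git_prompt_showuntrackedfiles 1
```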
In b0084c3fc4, we refactored how event handlers get removed. But this
also caused us to remove "one-shot" handlers even if they have not yet
been fired. Fix this.
When switching this to use `git status`, I neglected to use the
correct definition of what a "dirty" and a "staged" change is.
So this now showed already staged files still as "dirty".
Fixes #8986
This makes it so `complete -c foo -n test1 -n test2` registers *both*
conditions, and when it comes time to check the candidate, tries both,
in that order. If any fails it stops; if all succeed, the completion is offered.
The reason for this is that it helps with caching - we have a
condition cache, but conditions like
```fish
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length
test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] sub
```
defeats it pretty easily, because the cache only looks at the entire
script as a string - it can't tell that the first `test` is the same
in both.
So this means we separate it into
```fish
complete -f -c string -n "test (count (commandline -opc)) -ge 2; and contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
+complete -f -c string -n "test (count (commandline -opc)) -ge 2" -n "contains -- (commandline -opc)[2] length" -s V -l visible -d "Use the visible width, excluding escape sequences"
```
which allows the `test` to be cached.
In tests, this improves performance for the string completions by 30%
by reducing all the redundant `test` calls.
The `git` completions can also greatly benefit from this.
This adds a path builtin to deal with paths.
It offers the following subcommands:
- filter, to go through a list of paths and only print the ones that pass some filter - exist, are a directory, have read permission, ...
- is, as a shortcut for filter -q to only return true if one of the paths passed the filter
- basename, dirname and extension, to print certain parts of the path
- change-extension, to change the extension to a different one (as a string operation)
- normalize and resolve, to canonicalize the paths in various flavors
- sort, to sort paths, also only using the basename or dirname as a key
The definition of "extension" here was carefully considered and should line up with how extensions are actually used - ~/.bashrc doesn't have an extension, but ~/.conf.d does (".d").
These subcommands all compose well - they can read from arguments or stdin (like string), they can use null-delimited input or output (input is autodetected - if a NULL happens in the first PATH_MAX bytes it switches automatically).
It is both a failglob exception (so, like `set`, if a glob passed to it fails it just doesn't get any arguments for it instead of triggering an error), and it passes output to command substitution buffers explicitly split (like `string split0`), so newlines are easy to handle.
This would still remove non-existent paths, which isn't a strict
inversion and contradicts the docs.
Currently, to only allow paths that exist but don't pass a type check,
you'd have to filter twice:
path filter -Z foo bar | path filter -vfz
If a shortcut for this becomes necessary we can add it later.
This is now added to the two commands that definitely deal with
relative paths.
It doesn't work for e.g. `path basename`, because after removing the
dirname prepending a "./" doesn't refer to the same file, and the
basename is also expected to not contain any slashes.
Because we now count the extension including the ".", we print an
empty entry.
This makes e.g.
```fish
set -l base (path change-extension '' $somefile)
set -l ext (path extension $somefile)
echo $base$ext
```
reconstruct the filename, and makes it easier to deal with files with
no extension.
This means "../" components are cancelled out even after non-existent
paths or files.
(the alternative is to error out, but being able to say `path resolve
/path/to/file/../../` over `path resolve (path dirname
/path/to/file)/../../` seems worth it?)
Yeah, the macOS tests fail because it's started in /private/var... with a
$PWD of /var.... So resolve canonicalizes the path, which makes it no
longer match $PWD.
Simply use pwd -P
This just goes back until it finds an existent path, resolves that,
and adds the normalized rest on top.
So if you try
/bin/foo/bar////../baz
and /bin exists as a symlink to /usr/bin, it would resolve that, and
normalize the rest, giving
/usr/bin/foo/baz
(note: We might want to add this to realpath as well?)
This includes the "." in what `path extension` prints.
This allows distinguishing between an empty extension (just `.`) and a
non-existent extension (no `.` at all).
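A small sketch of the resulting distinction:
```fish
path extension foo.txt    # .txt
path extension foo.       # . (an empty extension)
path extension foo        # no output - no extension at all
```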
This adds a "path" builtin that can handle paths.
Implemented so far:
- "path filter PATHS", filters paths according to existence and optionally type and permissions
- "path base" and "path dir", run basename and dirname, respectively
- "path extension PATHS", prints the extension, if any
- "path strip-extension", prints the path without the extension
- "path normalize PATHS", normalizes paths - removing "/./" components
- and such.
- "path real", does realpath - i.e. normalizing *and* link resolution.
Some of these - base, dir, {strip-,}extension and normalize operate on the paths only as strings, so they handle nonexistent paths. filter and real ignore any nonexistent paths.
All output is split explicitly, so paths with newlines in them are
handled correctly. Alternatively, all subcommands have a "--null-input"/"-z" and "--null-output"/"-Z" option to handle null-terminated input and create null-terminated output. So
find . -print0 | path base -z
prints the basename of all files in the current directory,
recursively.
With "-Z" it also prints it null-separated.
(if stdout is going to a command substitution, we probably want to
skip this)
All subcommands also have a "-q"/"--quiet" flag that tells them to skip output. They return true "when something happened". For match/filter that's when a file passed, for "base"/"dir"/"extension"/"strip-extension" that's when something about the path *changed*.
Filtering
---------
`filter` supports all the file*types* `test` has - "dir", "file", "link", "block"..., as well as the permissions - "read", "write", "exec" and things like "suid".
It is missing the tty check and the check for the file being non-empty. The former is best done via `isatty`, the latter I don't think I've ever seen used.
There currently is no way to only get "real" files, i.e. ignore links pointing to files.
Examples
--------
> path real /bin///sh
/usr/bin/bash
> path extension foo.mp4
mp4
> path extension ~/.config
(nothing, because ".config" isn't an extension.)
This teaches `--on-signal SIGINT` (and by extension `trap cmd SIGINT`)
to work properly in scripts, not just interactively. Note any such
function will suppress the default behavior of exiting. Do this for
SIGTERM as well.
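A minimal sketch of what now works inside a non-interactive script:
```fish
# a handler defined in a script now fires on SIGINT; note it replaces the
# default behavior of exiting on the signal
function handle_sigint --on-signal SIGINT
    echo "caught SIGINT, cleaning up"
end
# the trap form is equivalent:
# trap 'echo "caught SIGINT, cleaning up"' SIGINT
```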
Like `set` and `read` before it, `eval` can be used to set variables,
and so it can't be shadowed by a function without loss of
functionality.
So this forbids it.
Incidentally, this means we will no longer try to autoload an
`eval.fish` file that's left over from an old version, which would
have helped with #8963.
Previously, running `fish_add_path /foo /foo` would result in /foo
being added to $PATH twice.
Now we check that it hasn't already been given, so we skip the
second (and any further) occurrence.
This *might* be a bit faster running under TSAN, otherwise it takes >
400 seconds on Github Actions.
If this doesn't work we need to disable it for TSAN.
Curses variables like `enter_italics_mode` are secretly defined to
dereference through the `cur_term` variable. Be sure we do not read or
write these curses variables if cur_term is NULL. See #8873, #8875.
Add a regression test.
To recap, this means `&` in the middle of a word no longer
backgrounds.
So:
```fish
echo foo&bar # prints foo&bar
echo foo& bar # backgrounds an echo that prints "foo" and runs "bar"
```
This can no longer be changed. If "no-stderr-nocaret" is in
$fish_features it will simply be ignored.
The "^" redirection that was deprecated in fish 3.0 is now gone for good.
Note: For testing reasons, it can still be set _internally_ by running
"feature_flags_t::set". We simply shouldn't do that.
Prior to this change, if you tab-completed a token with a wildcard (glob), we
would invoke ordinary completions. Instead, expand the wildcard, replacing
the wildcard with the result of expansions. If the wildcard fails to expand,
flash the command line to signal an error and do not modify it.
Example:
> touch file(seq 4)
> echo file*<tab>
becomes:
> echo file1 file2 file3 file4
whereas before the tab would have just added a space.
Some things to note:
1. If the expansion would produce more than 256 items, we flash the command
line and do nothing, since it would make the commandline overfull.
2. The wildcard token can be brought back through Undo (ctrl-Z).
3. This only kicks in if the wildcard is in the "path component
containing the cursor." If the wildcard is in a previous component,
we continue using completions as normal.
Fixes #954.
When you do
```fish
set foo-bar baz
```
"foo-baz" isn't usable as a variable *name*. When you just say the
"variable" is invalid that could also be interpreted to be a special
type of variable or something.
String tokens are subdivided by command substitutions. Some syntax errors
can occur in the gap between two command substitutions. Make the caret point
to the start of that gap, instead of the token start.
When expanding command substitutions, we use a naïve way of detecting whether
the cmdsub has the optional leading dollar. We check if the last character was
a dollar, which breaks if it's an escaped dollar. We wrongly expand
\$(echo "") to the empty string. Fix this by checking if the dollar was escaped.
The parse_util_* functions have a bunch of output parameters. We should
return a parameter bag instead (I think I tried once and failed).
Given
set var a
echo "$var$(echo b)"
the double-quoted string is expanded right-to-left, so we construct an
intermediate "$varb". Since the variable "varb" is undefined, this wrongly
expands to the empty string (should be "ab"). Fix this by isolating the
expanded command substitution internally. We do the same when handling
unquoted command substitutions.
Fixes #8849
The read test is now failing on GitHub actions even though it passes on
my Mac. It may be due to differences in dd between these two
environments. Stop using dd and just use head.
The read.fish check has a test where it limits the amount of data passed to
`read` to 8192 bytes, and verifies that fish reads exactly that amount.
This check occasionally fails on the OBS builds; it's very hard to repro a
failure locally, but I finally did it.
The amount of data written is limited via `yes` and `dd`:
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024")
The bug is that `dd` outputs a fixed number of "blocks" where a block
corresponds to a single read. As `yes` and `dd` are running concurrently,
it may happen that `dd` performs a short read; this then counts as a single
block. So `dd` may output less than the desired amount of data.
This can be verified by removing the 2>/dev/null redirection; on a
successful run dd reports `8+0 records out`, on a failed run it reports
`7+1 records out` because one of the records was short.
Fix this by using `fullblock` so that dd will no longer count a short read
as a single block. `head` would probably be a simpler tool to use but we'll
do this for now.
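For reference, the fixed invocation - assuming the GNU dd spelling `iflag=fullblock`:
```fish
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024") iflag=fullblock 2>/dev/null
```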
Happily it's not a fish bug. No need to relnote it.
This was already apparently supposed to work, but didn't because we
just overrode errno again.
This now means that, if a correctly named candidate exists, we don't
start the command-not-found handler.
See #8804
This used to call exec_subshell, which has two issues:
1. It creates a command substitution block which shows up in a stack
trace
2. It does much more work than necessary
This removes a useless "in command substitution" from an error message
in an autoloaded file, and it speeds up autoloading a bit (not
measurable in actual benchmarks, but microbenchmarks are 2x).
Cancellation groups were meant to reflect the following idea: if you ran a
simple block:
begin
cmd1
cmd2
end
then under job control, cmd1 and cmd2 would get separate groups; however if
either exits due to SIGINT or SIGQUIT we also want to propagate that to the
outer block. So the outermost block and its interior jobs would share a
cancellation group. However this is more complex than necessary; it's
sufficient for the execution context to just store an int internally.
This ought not to affect anything user-visible.
* Implement fish_wcstod_underscores
* Add fish_wcstod_underscores unit tests
* Switch to using fish_wcstod_underscores in tinyexpr
* Add tests for math builtin underscore separator functionality
* Add documentation for underscore separators for math builtin
* Add a changelog entry for underscore numeric separators
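A small usage sketch:
```fish
# underscores are purely visual digit separators
math 1_000_000 + 2_500
# 1002500
```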
We can't always read in chunks because we often can't bear to
overread:
```fish
echo foo\nbar | begin
read -l foo
read -l bar
end
```
needs to have the first read read `foo` and the second read `bar`. So
here we can only read one byte at a time.
However, when we are directly redirected:
```fish
echo foo | read foo
```
we can, because the data is only for us anyway. The stream will be
closed after, so anything not read just goes away. Nobody else is
there to read.
This dramatically speeds up `read` of long lines through a pipe. How
much depends on the length of the line.
With lines of 5000 characters it's about 15x, with lines of 50
characters about 2x, lines of 5 characters about 1.07x.
See #8542.
This is the simple fix - if we have no valid digit, we have nothing to
return. So instead of returning a NULL, we return an error.
This is already the case for invalid octal escapes (like `\777`).
Fixes #8545
This reverts commits:
2d9e51b43ed1d9f147ec346ce8081b
The box drawing change is included because it's entangled with the rest and we don't
currently use this anywhere I know of. Nor was it gated on terminfo,
so it could have broken things, for subjectively little gain.
Fixes #8727.
A history search ends when you move the cursor, but the commandline inserted by
history search is still marked as transient. This means that the next history
search will clear the transient commandline. This means we are dropping an undo
point, for example:
echo 11
echo 1
echo autosuggestion
echo^P # commandline is "echo 1"
^A # stop history search
^P # commandline is "echo 11"
^Z # Bug: commandline goes back to "echo", but it should be "echo 1"
In the worst case, we are switching from line-search to token-search (see
the attached test case). Clearing the transient edit means the line is gone
and only the token is left on the command line.
Say the user has a multi-char binding (typically an escape sequence), and a
signal arrives partway through the binding. The signal has an event handler
which enques some readline event, for example, `repaint`. Prior to this
change, the readline event would cause the multi-char binding to fail. This
would cause bits of the escape sequence to be printed to the screen.
Fix this by noticing when a sequence was "interrupted" by a non-char event,
and then rotating a sequence of such interruptions to the front of the
queue.
Fixes #8628
Today, a command like "var=val status " has custom completions
because we skip over the var=val variable override when detecting
the command token.
However if the custom completions read the commandline state (via
"commandline -opc") they do see they variable override, which breaks
them, most likely. Try "a=b git ".
For completions of wrapped commands, we already set a transient
commandline. Do the same for commands with leading variable overrides;
then git completions for "a=b git " will think the commandline is
"git ".
`read` allows specifying the initial command line text. This text
was accidentally ignored starting in a32248277f. Fix this
regression and add a test.
Fixes #8633
Previously, when we got an unknown option with --ignore-unknown, we
would increment woptind but still try to read the same contents.
This means in e.g.
```
argparse -i h -- -ooo -h
```
The `-h` would also be skipped as an option, because after the first
`-o` getopt reads the other two `-o` and skips that many options.
This could be handled more extensively in wgetopt, but the simpler fix
is to just skip to the next argv entry once we have an unknown option
- there's nothing more we can do with it anyway!
Additionally, document this and clearly explain that we currently
don't transform the option.
Fixes #8637
fish_git_prompt may run certain git commands which may invoke certain
external programs as specified in `.git/config`. Prevent this by suppressing
certain git config options.
This affects the caret position. In an expression like
123 456
we previously reported:
123 456
^ missing operator
Now we do:
123 456
   ^ missing operator
We do it on the first space, which should be acceptable.
(no need for a changelog entry, we have already ignored #8511)
Only show the shebang warning for .fish commands.
Use the phrase "interpreter directive" as the formal name for the
shebang.
Switch from windows to Windows for the operating system.
"not not return 34" exits with 34, not 1. This behavior is pretty
surprising but benign. I think it's very unlikely that anyone relies
on the opposite behavior, because using two "not" decorators in one
job is weird, and code that compares not's raw exit code is rare.
The behavior doesn't match our docs, but it's not worth changing the
docs because that would confuse newcomers. Add a test to cement the
behavior and a comment to explain this is intentional.
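The cemented behavior, as a runnable sketch (wrapped in a function so `return` has something to return from):
```fish
function f
    not not return 34
end
f
echo $status    # prints 34, not 1
```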
I considered adding the comment at
parse_execution_context_t::populate_not_process where this behavior
is implemented, but the field definition seems even better, because I
expect programmers to read that first.
Closes #8377
Commit e40eba358 (Treat text following quoted command substitution
as quoted) made parse_util_locate_cmdsubst_range() aware of quoted
command substitutions, by skipping surrounding text via quote_end().
However, it was not quite right. We fail to properly parse
two consecutive command substitutions in the same string,
because we don't maintain the quoting context across calls to
parse_util_locate_cmdsubst_range(). Let's track that bit in a
parameter. This allows us to get rid of the quote_end() hack.
Also apply this to the other place where we call
parse_util_locate_cmdsubst_range() in a loop (highlighting).
Fixes #8500
This fixes a regression about where we report errors:
echo error(here
old: ^
fixed: ^
Commit 0c22f67bd (Remove the old parser bits, 2020-07-02) removed
uses of "error_offset_within_token" so we always report errors at
token start. Add it back, hopefully restoring the 3.1.2 behavior.
Note that for cases like
echo "$("
we report "unbalanced quotes" because we treat the $( as double
quote. Giving a better error seems hard because of the ambiguity -
we don't know if quote is meant to be inside or outside the command
substitution.
If you make a script called `foo` somewhere in $PATH, and did not give
it a shebang, this would end up calling
sh foo
instead of
sh /usr/bin/foo
which might not match up.
Especially if the path is e.g. `--version` or `-` that would end up
being misinterpreted *by sh*.
So instead we simply pass the actual_cmd to sh; we have it anyway,
since it's what we already tried (and failed) to execute.
For some reason, the window dimension parameters are ignored by tmux.
Not even an extra "resize-pane -x 80 -y 10" helps. So let's just drop
that assumption from our tests.
When the completion pager fills up all lines of the screen, we subtract
from the pager size the number of lines occupied by the prompt +
command line buffer (typically 1), so the command line is always
visible. However, we only subtract the number of lines *before* the
cursor, so on some multiline commandlines we draw a pager that is
too large for our screen, clobbering the commandline rendering.
Fix this by counting all lines.
Fixes #8509
Possibly fixes #8405
A command like "printf nonewline | sed s/x/y/" does not print a
concluding newline, whereas "printf nnl | string replace x y" does.
This is an edge case -- usually the user input does have a newline at
the end -- but it seems still better for this command to just forward
the user's data.
Teach most string subcommands to check if stdin is missing the trailing
newline, and stop adding one in that case.
This does not apply when input is read from commandline arguments.
* Most subcommands stop adding the final newline, because they don't
really care about newlines, so besides their normal processing,
they just want to preserve user input. They are:
* string collect
* string escape/unescape
* string join¹
* string lower/upper
* string pad
* string replace
* string repeat
* string sub
* string trim
* string match keeps adding the newline, following "grep". Additionally,
for string match --regex, it's important to output capture groups
separated by newlines, resulting in multiple output lines for an
input line. So it is not obvious where to leave out the newline.
* string split/split0 keep adding the newline for the same reason --
they are meant to output multiple elements for a single input line.
¹) string join0 is not changed because it already printed a trailing
zero byte instead of the trailing newline. This is consistent
with other tools like "find -print0".
Closes #3847
A «complete -C '~/fish-shell/build/fish '» fails to load custom
completions because we do not expand the ~, so
complete_param_for_command() thinks that this command is invalid.
Expand command tokens before loading custom completions.
Fixes #8442
Currently,
set -q --unpath PATH
simply ignores the "--unpath" bit (and same for "--path").
This changes it, so just like exportedness you can check pathness.
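A short sketch of the new query:
```fish
# succeeds only if PATH is set *and* marked as a path variable
set -q --path PATH; and echo PATH is a path variable
# --unpath checks for the opposite kind; this prints nothing, since PATH is a path variable
set -q --unpath PATH; and echo PATH is not a path variable
```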
* fish_key_reader: Simplify default output
It now only prints the bind statement. Timing information and such is
relegated to a separate "verbose" mode.
* Adjust fish_key_reader docs
* Adjust tests
This finds the first broken component, to help people figure out where
they misspelt something.
E.g.
```
echo foo >/usr/lob/systemd/system/machines.target.wants/var-lib-machines.mount
```
will now show:
```
warning: Path '/usr/lob' does not exist
```
which would help with seeing that it should be "/usr/lib".
On a commandline like "ls arg" (cursor at end) we do not expand
abbreviations on enter. OTOH, on "ls " we do expand. This can be
frustrating because it means that the two obvious ways to suppress
abbreviation expansion (C-Space or post-expansion C-Z) cannot be used to
suppress expansion of a command without arguments. (One workaround is
"ls #".)
Only expand-on-execute if the cursor is at the command name (no space
in between).
This is a strict improvement for realistic scenarios, because if there
is a space, the user has already expressed the intent to not expand
the abbreviation. (I hope no one is using recursive abbreviations.)
Closes #8423
This was supposed to act like `type -q` or `command -q`, in that it
returns 0 if at least 1 exists.
But because it used the wrong variable it didn't.
Fixes #8431.
This allows rebinding escape in the user list without breaking e.g.
arrow keys (which send escape and then `[A` and similar, so escape is
a prefix of them).
Fixes #8428.
This fixes printing octal and hex values that are negative or larger
than UINT_MAX.
Negative values get a leading -, like:
> math --base hex -10
-0xa
Fixes #8417.
Commit ec3d3a481 (Support "$(cmd)" command substitution without line
splitting, 2021-07-02) started treating an input string like
"a$()b" as if it were "a"$()"b". Yet, we do not actually insert the
virtual quotes. Instead we just adapted the definition of when quotes
are closed - hence the changes to quote_end().
parse_util_locate_cmdsubst_range() is aware
of the changes to quote_end() but some of its
callers like parse_util_detect_errors_in_argument() and
highlighter_t::color_as_argument() are not. They split strings at
command substitution boundaries without handling the special quoting
rules. (Only the expansion logic did it right.)
Fix this by handling the special quoting rules inside
parse_util_locate_cmdsubst_range(). This is a bit hacky since it
makes it harder for callers to process some substrings in between
command substitutions, but that's okay because current callers only
care about what's inside the command substitutions.
Fixes #8394
Since #4376, for-loops would set the loop variable outside, so it
stays valid.
They did this by doing the equivalent of
```fish
set -l foo $foo
for foo in 1 2 3
```
And that first imaginary `set -l` would also fire a set-event.
Since there's no use for it and the variable isn't actually set, we
remove it.
Fixes #8384.
widechar_width no longer classifies U+1F41F as widened-in-9, so the
width no longer changes.
Since we're interested in testing the change here, we need a different
emoji.
Just use 🥁, which was introduced in 9 as wide, and therefore widened
in 9.
fish might use XDG_RUNTIME_DIR for the uvar notifier fifo, so this
makes sure that tests are isolated.
Also set permissions to comply with the XDG basedir spec.
Like the $status commit, this would add the offset to already existing
errors, so
```fish
(foo)
(bar)
something
```
would see the "(foo)" error, store the correct error location, then
see the "(bar)" error, and *add the offset of (bar)* to the "(foo)"
error location.
Solve this by making a new error list and appending it to the existing
ones.
There's a few other ways to solve this, including:
- Stopping after the first error (we only display the first anyway, I
think?)
- Making it so the source location has an "absolute" flag that shows
the offset has already been added (but do we ever need to add two offsets?)
I went with the simpler fix.
This would break the location of any prior errors without doing
anything of value.
E.g.
```fish
echo foo | exec grep # this exec is not allowed!
$status
somethingelse # The error might be found here!
```
Would apply the offset of `$status` to the offset of `exec`, locating
the error for `exec` somewhere after $status!
Prior to this change, tmux based tests would call 'isolated-tmux' which would
initialize tmux on first call, an admitted "evil hack." Switch to requiring
an explicit call to 'isolated-tmux-start' which then defines 'isolated-tmux'
and other functions. Add some loop-until-prompt logic into
'isolated-tmux-start'. This improves reliability of the tmux tests on systems
under load; at least it makes the tests pass in the background on my Mac.
Remove the '$sleep' variable, to be replaced with 'tmux-sleep'.
This makes it so we treat backspaces as width -1, but never go below a
0 total width when talking about *lines*, like in screen or string
length --visible.
Fixes #8277.
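A small sketch of the resulting accounting, via `string length --visible` (using printf to produce a literal backspace):
```fish
string length --visible (printf 'ab\bc')
# 2  (a: 1, b: 1, backspace: -1, c: 1)
```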
When cd is passed a broken symlink, this changes the error message from
"no such directory" to "broken symbolic link". This scenario probably
won't happen very often since completion won't suggest broken symlinks
but it can't hurt to give a good error.
Fish used to do this until 7ac5932. This logic used to be in
path_get_cdpath, however, that is only used for highlighting, so we
don't need error messages there. Changing cd is enough.
Reword from "rotten" to "broken" since that's what file(1) uses.
Clean-up leftovers from old "rotten" code (nomen est omen).
See #8264
This currently changes builtin realpath with the "-s" option:
builtin realpath -s ///tmp
previously would print "///tmp", now it prints "/tmp".
The only thing "allow_leading_double_slashes" does is allow *two*
slashes.
This is important for `path match`, to be introduced in #8265.
This lets us run non-fish targets (such as `fish_tests`) under a clean
test environment without running into the fish-specific payload
configuration now carried out by `test_driver.sh` which expects a
`.fish` payload that it will run under a deterministically configured
instance of fish, running in an environment initialized by
`test_env.sh`.
This should fix the problem with in-tree builds leaving detritus behind
after a `make test` when `fish_tests` would be executed without
`test_driver.sh` - it is now executed under `test_env.sh` instead.
The tmux-prompt test would sometimes fail because the first call was:
isolated-tmux capture-pane -p
this would run a capture-pane which would race with starting fish
itself; occasionally the pane would be empty since fish has not yet
drawn a prompt. Add a loop to give fish time to draw the prompt.
On macOS, the tests would often fail because calls to `pkill` would "leak"
across tests: kill processes run by other tests. This is because on macOS,
the -P argument to pkill must come before the process name. On Linux it
doesn't matter.
This improves test reliability on Mac.
For littlecheck/pexpect this just unconditionally enables color.
I have no idea what happens if you run cmake outside of a terminal,
but the worst that can happen is that *errors* have color
escapes in them.
If someone figures out how to get cmake to tell us if it's running in
a terminal, we can add a check.
This used the *logical* $PWD, but realpath would operate on the
physical $PWD if given ".", even with -s. This makes this test fail if the $PWD is
logically different from physical.
This was long overdue since the setup logic is much more complex than
the actual tests.
tmux-prompt.fish had extra logic to protect against XDG_CONFIG_HOME
with leading double double-dot. I believe this is no longer necessary
with the new test driver.
We still use our own temp dir because we want to be able to run this
independently of the test driver. This can be useful for debugging
tests. For example we can insert a "$tmux attach" command in a test,
and then run
build/fish -C 'source tests/test_functions/isolated-tmux.fish' tests/checks/tmux-bind.fish
This allows to inspect the state of the test and debug interactively.
Attaching to the terminal doesn't work when running inside littlecheck
because littlecheck consumes our output and doesn't give us a terminal.
(Maybe there's an easy way to fix that?)
On request of a team member, this patches `basic.fish` to no longer
depend on being invoked by the test driver and started up in a $PWD that
points to a clean temporary directory.
This was requested by a team member who would like for some tests to
remain invokable (in their own $HOME) directly via littlecheck without
relying on the test driver to prep the environment.
A comment explaining the rationale is also added so this doesn't get
passed down as folklore "you need to include this for tests to run" even
though no one understands why.
Tests are now executed in a test-specific temporary directory, so test
output on failure should be reproducible/reusable as-is without needing
to have TMPDIR defined (as it only exists by default under macOS).
Instead of trying to assert that there are no zombies when the test
starts (which often fails) and to prevent conflating existing or
irrelevant zombies with the ones we are interested in checking for,
have `ps` also emit the parent process id and filter its output to
include only children of the current fish instance.
Aside from the fact that the shared state could cause problems, tests
were randomly assuming it would be created where that wasn't the case.
In particular, `redirect.fish` and `basic.fish` were failing on only
macOS because `../test/temp` didn't exist yet - it would be created by
other tests later.
Even though we are using CMake's ctest for testing, we still define our
own `make test` target rather than use its default for many reasons:
* CMake doesn't run tests in-proc or even add each tests as an
individual node in the ninja dependency tree, instead it just bundles
all tests into a target called `test` that always just shells out to
`ctest`, so there are no build-related benefits to not doing that
ourselves.
* CMake devs insist that it is appropriate for `make test` to never
depend on `make all`, i.e. running `make test` does not require any
of the binaries to be built before testing.
* The only way to have a test depend on a binary is to add a fake test
with a name like "build_fish" that executes CMake recursively to
build the `fish` target.
* It is not possible to set top-level CTest options/settings such as
CTEST_PARALLEL_LEVEL from within the CMake configuration file.
* Circling back to the point about individual tests not being actual
Makefile targets, CMake does not offer any way to execute a named
test via the `make`/`ninja`/whatever interface; the only way to
manually invoke test `foo` is to manually run `ctest` and specify
a regex matching `foo` as an argument, e.g. `ctest -R ^foo$`... which
is really crazy.
With this patch, it is now possible to execute any single test by name,
by invoking the build directly, e.g. to run the `universal.fish` check:
`cmake --build build --target universal.fish` or
`ninja -C build universal.fish`. Unfortunately, this is not integrated
into the Makefile wrapper, so `make universal.fish` won't work (although
this can potentially be hacked around).
Fixes #8232.
Note that this needed to have expect_prompt used in the pexpect test -
we might want to add a "catchup" there so you can just ignore the
prompt counter for a bit and pick it back up later.
This disables job control inside command substitutions. Prior to this
change, a cmdsub might get its own process group. This caused it to fail
to cancel loops properly. For example:
while true ; echo (sleep 5) ; end
could not be control-C cancelled, because the signal would go to sleep,
and so the loop would continue on. The simplest way to fix this is to
match other shells and not use job control in cmdsubs.
Related is #1362
* commandline: Add --is-valid option to query whether it's syntactically complete
This means querying when the commandline is in a state that it could
be executed. Because our `execute` bind function also inserts a
newline if it isn't.
One case that's not handled right now: `execute` also expands
abbreviations, those can technically make the commandline invalid
again.
Unfortunately we have no real way to *check* without doing the
replacement.
Also, since abbreviations are only available in command position,
when you _execute_ them the commandline will most likely be valid.
This is enough to make transient prompts work:
```fish
function reset-transient --on-event fish_postexec
set -g TRANSIENT 0
end
function maybe_execute
if commandline --is-valid
set -g TRANSIENT 1
commandline -f repaint
else
set -g TRANSIENT 0
end
commandline -f execute
end
bind \r maybe_execute
```
and then in `fish_prompt` react to $TRANSIENT being set to 1.
Because we are, ultimately, interested in how many cells a string
occupies, we *have* to handle carriage return (`\r`) and line
feed (`\n`).
A carriage return sets the current tally to 0, and only the longest
tally is kept. The idea here is that the last position is the same as
the last position of the longest string. So:
abcdef\r123
ends up looking like
123def
which is the same width as abcdef, 6.
A line feed meanwhile means we flush the current tally and start a new
one. Every line is printed separately, even if it's given as one.
That's because, well, counting the width over multiple lines
doesn't *help*.
As a sidenote: This is necessarily imperfect, because, while we may
know the width of the terminal ($COLUMNS), we don't know the current
cursor position. So we can only give the width, and the user can then
figure something out on their own.
But for the common case of figuring out how wide the prompt is, this
should do.
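The example above, as an executable sketch via `string length --visible`:
```fish
string length --visible (printf 'abcdef\r123')
# 6
```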
* Add `set --function`
This makes the function's scope available, even inside of blocks. Outside of blocks it's the toplevel local scope.
This removes the need to declare variables locally before use, and will probably end up being the main way variables get set.
E.g.:
```fish
set -l thing
if condition
set thing one
else
set thing two
end
```
could be written as
```fish
if condition
set -f thing one
else
set -f thing two
end
```
Note: Many scripts shipped with fish use workarounds like `and`/`or`
instead of `if`, so it isn't easy to find good examples.
Also, if there isn't an else-branch in that above, just with
```fish
if condition
set -f thing one
end
```
that means something different from setting it before! Now, if
`condition` isn't true, it would use a global (or universal) variable of
the same name!
Some more interesting parts:
Because it *is* a local scope, setting a variable `-f` and
`-l` in the toplevel of a function ends up the same:
```fish
function foo2
set -l foo bar
set -f foo baz # modifies the *same* variable!
end
```
but setting it locally inside a block creates a new local variable
that shadows the function-scoped variable:
```fish
function foo3
set -f foo bar
begin
set -l foo banana
# $foo is banana
end
# $foo is bar again
end
```
This is how local variables already work. "Local" is actually "block-scoped".
Also `set --show` will only show the closest local scope, so it won't
show a shadowed function-level variable. Again, this is how local
variables already work, and could be done as a separate change.
As a fun tidbit, functions with --no-scope-shadowing can now use this to set variables in the calling function. That's probably okay given that it's already an escape hatch (but to be clear: if it turns out to problematic I reserve the right to remove it).
Fixes #565
Fixes some regressions from 35ca42413 ("Simplify some parse_util functions").
The tmux tests are not beautiful but I find them easy to write.
Probably a pexpect test would also be enough here?
for PWD in foo; true; end
prints:
>..src/parse_execution.cpp:461: end_execution_reason_t parse_execution_context_t::run_for_statement(const ast::for_header_t&, const ast::job_list_t&): Assertion `retval == ENV_OK' failed.
because this used the wrong way to see if something is read-only.
This allows us to test that `test` takes numbers with decimal point even in comma-using locales,
to stop those pesky americans from breaking everything again.
(and yes, we use french to keep myself honest)
Through a mechanism I don't entirely understand, $PWD is sometimes
writable (so that `cd` can change it) and sometimes not.
In this case we ended up with it writable, which is wrong.
See #8179.
This didn't do all the syntax checks, so something like
fish -c 'echo foo; and $status'
complained of a missing command `0` (i.e. $status), and
fish -c 'echo foo | exec grep'
hit an assert!
So we do what read_ni does, parse each command into an ast, run
parse_util_detect_errors on it if it worked and then eval the ast.
It is possible to do this neater by modifying parser::eval, but I
can't find where.
This is slightly unclean. Even though it would otherwise be syntactically
valid, using $status as a command is very very very likely to be an
error, like
if not $status
We have reports of this surprisingly regularly, including #2773.
Because $status can only ever be a value from 0 to 255, it is also
very unlikely to be an actual command, and that command is very
unlikely to do what you want.
So we simply point the user towards the "conditions" help section,
that should explain things.
This is opt-in through a new feature flag "ampersand-nobg-in-token".
When this flag and "qmark-noglob" are enabled, this command no longer
needs quoting:
curl https://example.com/thing?foo=bar&duran=duran
Compared to the previous approach e1570a4 ("Let '&' only separate as
the first char of a word"), this has some advantages:
1. "&&" and "&>" are no longer affected. They are still special, even
if used between tokens without spaces, like "echo bar&>foo".
Maybe this is not really *better*, but it avoids risking to annoy
users by breaking the old variant.
2. "&" is still special if at the end of a token, like in "sleep 1&".
Word movement is not affected by the semantics change, so Alt-F and
friends still stop at every "&".
Currently, if a "return" is given outside of a function, we'd just
throw an error.
That always struck me as a bit weird, given that scripts can also
return a value.
So simply let "return" outside also exit the script, kinda like "exit"
does.
However, unlike "exit" it doesn't quit an interactive shell - it seems
weird to have "return" do that as well. It sets $status, so it can be
used to quickly set that, in case you want to test something.
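A minimal sketch of the new behavior in a script file:
```fish
# in a script (not a function), return now just ends the script with that status
if not command -q git
    echo "git is required" >&2
    return 127
end
echo "git found"
```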
Today the reader exposes its internals directly, e.g. to the commandline
builtin. This is of course not thread safe. For example in concurrent
execution, running `commandline` twice in separate threads would cause a
race and likely a crash.
Fix this by factoring all the commandline state into a new type
'commandline_state_t'. Make it a singleton (there is only one command line,
after all) and protect it with a lock.
No user visible change here.
In the variable handler, we just go through the entire thing and keep
every element once.
If there's a duplicate, we set it again, which calls the handler
again.
This takes a bit of time, to be paid on each startup. On my system,
with 100 already deduplicated elements, that's about 4ms (compared to
~17ms for adding them to $PATH).
It's also semantically more complicated - now this variable
specifically is deduplicated? Do we just want "unique" variables that
can't have duplicates?
However: This entirely removes the pathological case of appending to
$fish_user_paths in config.fish (which should be an FAQ entry!), and the implementation is quite simple.
This adds a hack to the parser. Given a command
echo "x$()y z"
we virtually insert double quotes before and after the command
substitution, so the command internally looks like
echo "x"$()"y z"
This hack allows to reuse the existing logic for handling (recursive)
command substitutions.
This makes the quoting syntax more complex; external highlighters
should consider adding this if possible.
The upside (more Bash compatibility) seems worth it.
Closes #159
This apparently doesn't work at all under Github Actions with tsan, so let's skip it.
If anyone feels the need to dig deeper into this, have at it. I find
this distracting.
When the user presses control-C, fish marks a cancellation signal which
prevents fish script from running, allowing it to properly unwind.
Prior to this commit, the signal was cleared in the reader. However this
missed the case where a binding would set $fish_bind_mode, which would
trigger event handlers: the event handlers would be skipped because the
cancellation flag was still set. This is similar to #6937.
Let's clear the flag earlier, as soon as it's set, in inputter_t.
Fixes #8125.
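The kind of handler affected is an ordinary variable event handler on $fish_bind_mode, e.g. (hypothetical function name):
```fish
function notify_mode --on-variable fish_bind_mode
    echo "bind mode is now $fish_bind_mode"
end
```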
In some setups (e.g. macports) $tmpdir can expand to more than
100 characters, and tests fail with 'socket file name too long'
errors.
Using relative path to socket file fixes the issue.
* string: Allow `collect --no-empty` to avoid empty elision
Currently we still have that issue where
test -n (thing | string collect)
can return true if `thing` doesn't print anything, because the
collected argument will still be removed.
So, what we do is allow `--no-empty` to be used, in which case we
print one empty argument.
This means
test -n (thing | string collect -n)
can now be safely used.
"no-empty" isn't the best name for this flag, but string's design
really incentivizes reusing names, and it's not *terrible*.
* Switch to `--allow-empty`
`--no-empty` does the exact opposite for `string split` and split0.
Since `-a`/`--allow-empty` already exists, use it.
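A quick sketch of the difference (my own example; `true` stands in for a command that prints nothing):
```fish
# Without --allow-empty the empty result is elided, so `test -n`
# receives no argument at all and reports success.
test -n (true | string collect); echo $status                # 0 (surprising)
test -n (true | string collect --allow-empty); echo $status  # 1 (expected)
```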
The tmux-prompt test was failing when run more than once, because
XDG_DATA_HOME has a leading double-dot, causing the uvars file to
leak across sessions. Descend more deeply into our tmpdir to isolate
our XDG_DATA_HOME.
This reverts commit b56b230076, which somehow made us miss repaints
on uvar notifications.
The commit was a workaround for a polling bug which was later properly
fixed by 7c5b8b855 ("Use the uvar notifier pipe timestamp to avoid
excessive polling"), so it's no longer necessary.
Add a system test. If I had a better understanding of the bug I could
probably write a better test.
Fixes #8088
We used to warn about PATH and CDPATH that are not valid directories,
but only if they contain colons.
However, the warning was a false positive because we would split
those values by colons anyway. So there is nothing left we want to
warn about.
Fixes #8095
sigint2 would hang (probably because of different semantics in signal
delivery?)
wcstod isn't implemented correctly, so math can't do hex numbers.
OpenBSD only passes the filename as argv[0] and doesn't offer any other way I know of to find the executable, so status fish-path can't work.
* Try to set LC_CTYPE to something UTF-8 capable
When fish is started with LC_CTYPE=C (even just effectively, often via
LC_ALL=C!), it's basically broken. There's no way to handle non-ASCII
characters with a C locale unless we want to write our own
locale-independent replacements for all of the system functions.
Since we're not going to do that, let's try to find *some locale* for
LC_CTYPE.
We already do that in __fish_setlocale, but that's
- a bit of a weird thing that reads unstandardized system
configuration files
- allows setting locale to C explicitly
So it's still easily possible to end up in a broken configuration.
Now, the issue with this is that there is (AFAICT) no portable way to
get a list of all allowed locales and C.UTF-8 is not standardized, so
we have no one locale to fall back on and are forced to try a few. The
list we have here is quite arbitrary, but it's a start.
Python does something similar and only tries C.UTF-8, C.utf8 and
"UTF-8".
Once C.UTF-8 is (hopefully) standardized, that will just start
working (tm).
Note that we do not *export* the fixed LC_CTYPE variable, so external
programs still have to deal with the C locale, but we have no real
business messing with the user's environment.
To turn it off: setting $fish_allow_singlebyte_locale to something
true (like "1") will re-run the locale initialization and skip the bit
where we force LC_CTYPE to be UTF-8-capable.
This is mainly used in our tests, but might also be useful if people
are trying to do something weird.
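So, to opt out (the variable scope here is just illustrative):
```fish
# Skip the forced UTF-8 LC_CTYPE and redo the locale setup as-is
set -g fish_allow_singlebyte_locale 1
```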
The hope is that the noshebang test was fixed on old glibc
through e74b9d53df. Revert the previous optimistic attempts to
fix these through adding sleeps and subshells.
This reverts commit b3da0bd5a2.
This reverts commit 8a86d3452f.
This is an attempt to solve the test failures on Launchpad's CI.
I'm assuming when we do a redirection like
foo > file
and then try to execute `file` immediately afterwards, we either
haven't finished writing it or haven't closed it yet, so we get a "text
file busy" error.
So, when we do that in a new fish the file should be closed once it
quits.
See #8021.
When you try to execute a file directly after you've written to it,
you might, on some systems, get a "text file busy" error.
So we unfortunately have to sleep to avoid it.
See #8021 for where this was added,
537b3f6cb1 for the same problem.
Now that `$last_pid` is never fish's pid, we no longer need to force
jobs to run in their own pgroup. Restore the job control behavior to
what it was prior, so that signals may be delivered properly in
non-interactive mode.
This reverts commit 3255999794
Prior to this change, a function with an on-job-exit event handler must be
added with the pgid of the job. But sometimes the pgid of the job is fish
itself (if job control is disabled) and the previous commit made last_pid
an actual pid from the job, instead of its pgroup.
Switch on-job-exit to accept any pid from the job (except fish itself).
This allows it to be used directly with $last_pid, except that it now
works if job control is off. This is implemented by "resolving" the pid to
the internal job id at the point the event handler is added.
Also switch to passing the last pid of the job, rather than its pgroup.
This aligns better with $last_pid.
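So the intended usage pattern now looks roughly like this (handler name is mine):
```fish
sleep 5 &
function report_done --on-job-exit $last_pid
    echo "background job finished"
end
```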
It is possible to run a function when a process exits via `function
--on-process-exit`, or when a job exits via `function --on-job-exit`.
Internally these were distinguished by the pid in the event: if it was
positive, then it was a process exit. If negative, it represents a pgid
and is a job exit. If zero, it fires for both jobs and processes, which is
pretty weird.
Switch to tracking these explicitly. Separate out the --on-process-exit
and --on-job-exit event types into separate types. Stop negating pgids as
well.
This switches builtin_wait from waiting on jobs in the active job list, to
waiting on the wait handles. The wait handles may be either derived from
the job list itself, or from saved wait handles from jobs that exited in
the background.
Fixes #7210
Prior to this fix, an escaped character like \x41 (hex for ascii A)
was interpreted the same way as A, so that $\x41 would be the same
as $A. Fix this by inserting an INTERNAL_SEPARATOR before these escapes,
so that we no longer treat it as part of the variable name.
This also affects brackets; don't treat echo $foo\1331\135 the same as
echo $foo[1].
Fixes #7969
This is the last time I'm doing this before I rip these particular
tests out.
As far as I know there is no actual *problem* here, this is just
failing through a combination of macOS and Github Actions being slow
as molasses.
So it is wasting our time and therefore worse than not having these
tests at all, especially since they very rarely fail for good reasons.
We would leave some escape delay tests intact with generous timeouts, which would provide 90%
of the coverage with 10% of the hassle.
This simply checks if the parser requested exit after running any
binding scripts (in read_normal_chars).
I think this means we no longer need the `exit` bind function.
Fixes #7967.
Just add some extra sleep time so it hopefully also works when the
CI system is overloaded. This succeeded >60 times in the CI, without
a single failure.
In case it legitimately fails again, we should provide simple steps
to reproduce the failure interactively (using "tmux attach").
The uvar issue only triggered because two fish are started - one is
running the tmux-complete script, the other one is running inside tmux.
We could reduce the complexity of this test by writing it in a
different language, like sh or python.
Reproducible at least on Linux, where the "named pipe" universal
variable notifier is used:
rm -rf build/test/xdg_config
XDG_CONFIG_HOME=build/test/xdg_config ./build/fish -c "xterm -e ./build/fish"
The child fish reacts to keyboard input with a noticeable initial
delay. This is because the universal variable file is polled over
a million times, even when I immediately press Control-D. This polling
prevents readb() from handling keyboard input.
Before commit 939aba02d ("Refactor input_common.cpp:readb"), readb()
reacted to keyboard input even when there were universal variable
notifications. Restore this behavior, but make sure to call the
universal variable notifier after the new "prepare_to_select" logic.
Maybe the problem is in the notifier but the old behavior was sane.
Fixes the problems described in
7a556ec6f2 (commitcomment-49773677)
Adding "-d uvars-file" to the reproducesr shows that we are checking
the uvar file repeatedly:
uvar-file: universal log sync
uvar-file: universal log sync elided based on fast stat()
uvar-file: universal log no modifications
This only uses the functions fish ships with, but still doesn't allow
any *customization*, which is the point of no-config.
This makes it a lot more usable, given that the actual normal prompt
and things are there.
This still doesn't set any colors, because we don't run
__fish_config_interactive because we don't read config.fish (any
config.fish), because that would run the snippets.
In many cases we currently discard escaped newlines, since they
are often unnecessary (when used around &|;). Escaped newlines
are useful for structuring argument lists. Allow them for variable
assignments since they are similar.
Closes #7955
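If I read this right, the newly allowed form is an escaped newline between a command's leading variable overrides, something like (paths and variables are illustrative):
```fish
# Escaped newlines between the leading variable assignments are now kept
GIT_DIR=/tmp/repo.git \
GIT_WORK_TREE=/tmp/worktree \
    git status
```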
This would print the default "Argument is invalid" error string, which
is *true* but not super obvious, because `test` doesn't always perform
numeric conversion, and that's the bit that failed here.
This refactors the behavior of string match with capture groups to
correctly handle multiple arguments. Now the variable capture applies to
the first match, as documented. Fixes #7938.
string match is documented as setting an unset variable if a capture group
is unmatched in an otherwise matched regex, and if the `--all` flag is not
provided. However prior to this fix, it instead set a variable containing
the empty string as a single value. Correct the implementation to match
the documentation.
Note that if the `--all` flag is provided we continue to set empty
strings, which is documented.
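As an illustration of the documented behavior (my example, using a named capture group):
```fish
string match -r '(?<digit>\d)' 'a1' 'b2' >/dev/null
echo $digit    # 1 — only the first match populates the capture variable
```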
This removes the relative XDG paths, which could have potentially
confused tmux, and also starts the window with the correct size
instead of adjusting the size afterwards.
Make it an ordinary struct wrapping a vector, instead of a template.
This is in preparation for using it more widely, for matching bindings
as well as mouse CSI sequences.
Also add some mouse-disabling tests.
This runs in 100ms increments, so there's not a lot of harm in trying
longer - it should take the same time everywhere it succeeded before.
But I've reproduced failures on FreeBSD 13 on sr.ht, so there's at
least one platform where a total time of 1 second isn't enough.
Now we do 50 tries, which is 5 seconds.
This could have been one iteration off, e.g.
```fish
function on-winch --on-signal winch
echo $LINES
end
```
Resize the terminal, it'll print e.g.
24
then run `echo $LINES` interactively, it might have a different answer.
This isn't beautiful, but it works. A better solution might be to make
the termsize vars electric and just always update them on read?
Fixes #7926.
Also switches the default status order for non-informative to the informative one:
stagedstate invalidstate dirtystate untrackedfiles stashstate
instead of
dirty staged stash untracked
Things like
```fish
complete command -n '__fish_seen_subcommand_from subcommand' --force-files
```
would not be obeyed because we only checked force-files when there was
an option.
Fixes #7920.
When fish starts, it notices which pgroup owns the tty, and then it
restores that pgroup's tty ownership when it exits. However if fish does
not own the tty, then (on Mac at least) the tcsetpgrp call triggers a
SIGSTOP and fish will hang while trying to exit.
The first change is to ignore SIGTTOU instead of defaulting it. This
prevents the hang; however it risks re-introducing #7060.
The second change somewhat mitigates the risk of the first: only do the
restore if the initial pgroup is different than fish's pgroup. This
prevents some useless calls which might potentially steal the tty from
another process (e.g. in #7060).
This correctly sets $status when a builtin succeeds but its output fails;
for example if the output is redirected to a file and that write fails.
Fixes #7857
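One way to see this, assuming a Linux-style /dev/full (the exact status value is not something I'm asserting):
```fish
echo hi > /dev/full    # the builtin itself succeeds, but the write fails
echo $status           # now non-zero instead of 0
```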
* math: Make function parentheses optional
It's a bit annoying to use parentheses here because that requires
quoting or escaping.
This allows the parens to be omitted, so
math sin pi
is the same as
math 'sin(pi)'
Function calls have the lowest precedence, so
math sin 2 + 6
is the same as
math 'sin(2 + 6)'
* Add more tests
* Add a note to the docs
* even moar docs
Moar docca
* moar tests
Call me Nikola Testla
It's not super clear what $SHLVL is useful for, but the current
definition is essentially
"number of shells in the parent processes + 1"
which isn't *super useful*?
Bash's behavior here is a bit weird in that it increments $SHLVL
basically always, but since it auto-execs the last process it will
decrement it again, so in practice it's often not incremented.
E.g.
```
> echo $SHLVL
1
> bash -c 'echo $SHLVL; bash'
2
>> echo $SHLVL
2
```
Both bashes here end up having the same $SHLVL because this is
equivalent to `echo $SHLVL; exec bash`. Running `echo $SHLVL` and then
`bash -c 'echo $SHLVL'` in an interactive bash will have a different
result (1 and 2) because that doesn't *exec* the inner bash.
That's not something we want to get into, so what we do is increment
$SHLVL in every interactive fish. Non-interactive fish will simply
import the existing value.
That means if you had e.g. a bash that runs a fish script that ends up
opening a new fish session, you would have a $SHLVL of *2* - one for the
bash, and one for the inner fish.
We key this off is_interactive_session() (which can also be enabled
via `fish -i`) because it's easy and because `fish -i` is asking for
fish to be, in some form, "interactive".
That means most of the time $SHLVL will be "how many shells am I deep,
how often do I have to `exit`", except for when you specifically asked
for a fish to be "interactive". If that's a problem, we can rethink it.
Fixes #7864.
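Under this scheme, the values I'd expect (illustrative, not re-verified) are:
```fish
echo $SHLVL               # e.g. 1 in the outer interactive shell
fish -c 'echo $SHLVL'     # still 1: non-interactive, just imports the value
fish -i -c 'echo $SHLVL'  # 2: interactive (forced via -i) increments it
```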
- Check for special characters *before* attempting to parse
- Also ignore lines with `{` and `*`
- Also skip lines with `<<` because that might be a heredoc (or a `<<<` herestring)
Fixes #7874.
This cleans up some exit code processing. Previously a failed exec
would produce exit code 125 unconditionally, while a failed posix_spawn
would produce exit code 1 (!).
With this change, fish reports exit code 126 for not-executable, and 127
for file-not-found. This matches bash.
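Concretely (paths are illustrative):
```fish
touch /tmp/not-executable
/tmp/not-executable; echo $status   # 126: found, but not executable
/no/such/command; echo $status      # 127: file not found
```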
We have no idea why this was even a thing. For now simply set it to
"all"/"full" (why these two names? no idea) at startup and allow
changing it later.
Setting it *immediately* when defining the variable sets it too soon
because we don't have the interactive signal handlers
enabled (including the one for SIGTTOU), so let's first settle for
this little piece of awkwardness.
This needs widespread testing, so we merge it early, immediately after
the release.
Fixes #5036, #5832, #7721
(and probably numerous others)
I believe they are both equivalent for our particular purpose, since we
only care about enforcing the size fish sees.
`resize-window` was only introduced in tmux 2.9, which isn't available
at least on Ubuntu 18.04 LTS (currently using tmux 2.6) and probably
many others.
(Clever idea to use tmux here!)
Consider
$ complete -c foo -a 'aab aaB' -f
$ foo A<TAB>
since 28d67c8 we would insert the common prefix AND show the pager.
Due to case-insensitive comparison, "b/B" was considered to be part
of the prefix. Since the prefix is added to each pager item [1]
we get wrong results. Fix this by removing the insensitive comparison
between completions - I don't think it was of much use anyway.
Commandline tokens are still matched case-insensitively, this is
just about completions.
Test this by running interactive fish inside tmux (pexpect's terminal
emulation does not have enough capabilities). Also add tests for recent
interactive regressions #7526 and #7738.
Closes #3978
[1]: b38a23a would solve this differently by giving every pager item
its own prefix, but was reverted since it needs more fixes.
This should cover most cases - the user didn't install the docs and is
trying to view the man page via __fish_print_help, so we don't have a
way to show anything.
But `help thing` will fall back to the online version of the docs,
which should work if there's an internet connection.
See #7824.