This adds a "path" builtin that can handle paths.
Implemented so far:
- "path filter PATHS", filters paths according to existence and optionally type and permissions
- "path base" and "path dir", run basename and dirname, respectively
- "path extension PATHS", prints the extension, if any
- "path strip-extension", prints the path without the extension
- "path normalize PATHS", normalizes paths - removing "/./" components
- and such.
- "path real", does realpath - i.e. normalizing *and* link resolution.
Some of these - base, dir, {strip-,}extension and normalize - operate on the paths only as strings, so they also handle nonexistent paths. filter and real ignore any nonexistent paths.
All output is split explicitly, so paths with newlines in them are
handled correctly. Alternatively, all subcommands have "--null-input"/"-z" and "--null-output"/"-Z" options to read null-terminated input and produce null-terminated output. So, for example,
find . -print0 | path base -z
prints the basename of all files in the current directory,
recursively.
With "-Z" it prints them null-separated as well.
(if stdout is going to a command substitution, we probably want to
skip this)
All subcommands also have a "-q"/"--quiet" flag that tells them to skip output. They return true "when something happened": for match/filter that's when a file passed; for "base"/"dir"/"extension"/"strip-extension" it's when something about the path *changed*.
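As a quick sketch of how the quiet flag might be used in a script (assuming, as described above, that `filter` with no extra checks simply tests for existence):

```fish
# Succeeds only if at least one of the given paths exists and passes the filter
if path filter -q ~/.config/fish/config.fish
    echo "user config exists"
end
```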
Filtering
---------
`filter` supports all the file*types* `test` has - "dir", "file", "link", "block"..., as well as the permissions - "read", "write", "exec" and things like "suid".
It is missing the tty check and the check for the file being non-empty. The former is best done via `isatty`, the latter I don't think I've ever seen used.
There currently is no way to only get "real" files, i.e. ignore links pointing to files.
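The exact flag syntax isn't spelled out here, but assuming hypothetical --type and --perm options, a filter call might look like:

```fish
# Hypothetical flag names; keep only the arguments that are writable directories
path filter --type dir --perm write ~/.config ~/.cache /root/secret
```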
Examples
--------
> path real /bin///sh
/usr/bin/bash
> path extension foo.mp4
mp4
> path extension ~/.config
(nothing, because ".config" isn't an extension.)
Like `set` and `read` before it, `eval` can be used to set variables,
and so it can't be shadowed by a function without loss of
functionality.
So this forbids it.
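A minimal sketch of what is now rejected (the exact error wording may differ):

```fish
# Defining a function named eval now fails, just like "function set"
# or "function read" already did:
function eval
    echo "my eval: $argv"
end
```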
Incidentally, this means we will no longer try to autoload an
`eval.fish` file that's left over from an old version, which would
have helped with #8963.
Previously, running `fish_add_path /foo /foo` would result in /foo
being added to $PATH twice.
Now we check that it hasn't already been given, so we skip the
second (and any further) occurrence.
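For example, a sketch of the new behavior:

```fish
fish_add_path /foo /foo
# Previously this appended /foo twice; now the duplicate argument is skipped
# and /foo ends up in $PATH exactly once.
```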
This *might* be a bit faster when running under TSAN; otherwise it takes
over 400 seconds on GitHub Actions.
If this doesn't work we need to disable it for TSAN.
To recap, this means `&` in the middle of a word no longer
backgrounds.
So:
```fish
echo foo&bar # prints foo&bar
echo foo& bar # backgrounds an echo that prints "foo" and runs "bar"
```
This can no longer be changed. If "no-stderr-nocaret" is in
$fish_features it will simply be ignored.
The "^" redirection that was deprecated in fish 3.0 is now gone for good.
Note: For testing reasons, it can still be set _internally_ by running
"feature_flags_t::set". We simply shouldn't do that.
When you do
```fish
set foo-bar baz
```
"foo-baz" isn't usable as a variable *name*. When you just say the
"variable" is invalid that could also be interpreted to be a special
type of variable or something.
String tokens are subdivided by command substitutions. Some syntax errors
can occur in the gap between two command substitutions. Make the caret point
to the start of that gap, instead of the token start.
When expanding command substitutions, we use a naïve way of detecting whether
the cmdsub has the optional leading dollar. We check if the last character was
a dollar, which breaks if it's an escaped dollar. We wrongly expand
\$(echo "") to the empty string. Fix this by checking if the dollar was escaped.
The parse_util_* functions have a bunch of output parameters. We should
return a parameter bag instead (I think I tried once and failed).
Given
set var a
echo "$var$(echo b)"
the double-quoted string is expanded right-to-left, so we construct an
intermediate "$varb". Since the variable "varb" is undefined, this wrongly
expands to the empty string (should be "ab"). Fix this by isolating the
expanded command substitution internally. We do the same when handling
unquoted command substitutions.
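With the fix, the example above behaves as intended:

```fish
set var a
echo "$var$(echo b)"  # now prints: ab
```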
Fixes #8849
The read test is now failing on GitHub Actions even though it passes on
my Mac. It may be due to differences in dd between these two
environments. Stop using dd and just use head.
The read.fish check has a test where it limits the amount of data passed to
`read` to 8192 bytes, and verifies that fish reads exactly that amount.
This check occasionally fails on the OBS builds; it's very hard to repro a
failure locally, but I finally did it.
The amount of data written is limited via `yes` and `dd`:
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024")
The bug is that `dd` outputs a fixed number of "blocks" where a block
corresponds to a single read. As `yes` and `dd` are running concurrently,
it may happen that `dd` performs a short read; this then counts as a single
block. So `dd` may output less than the desired amount of data.
This can be verified by removing the 2>/dev/null redirection; on a
successful run dd reports `8+0 records out`, on a failed run it reports
`7+1 records out` because one of the records was short.
Fix this by using `fullblock` so that dd will no longer count a short read
as a single block. `head` would probably be a simpler tool to use but we'll
do this for now.
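Assuming this refers to dd's iflag=fullblock (GNU dd), the fixed invocation would look something like:

```fish
# fullblock makes dd keep reading until each 1024-byte block is complete,
# so a short read from the pipe no longer counts as a whole record
yes $line | dd bs=1024 count=(math "$fish_read_limit / 1024") iflag=fullblock 2>/dev/null
```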
Happily it's not a fish bug. No need to relnote it.
This was already apparently supposed to work, but didn't because we
just overwrote errno again.
This now means that, if a correctly named candidate exists, we don't
start the command-not-found handler.
See #8804
This used to call exec_subshell, which has two issues:
1. It creates a command substitution block which shows up in a stack
trace
2. It does much more work than necessary
This removes a useless "in command substitution" from an error message
in an autoloaded file, and it speeds up autoloading a bit (not
measurable in real-world benchmarks, but microbenchmarks show about a 2x improvement).
* Implement fish_wcstod_underscores
* Add fish_wcstod_underscores unit tests
* Switch to using fish_wcstod_underscores in tinyexpr
* Add tests for math builtin underscore separator functionality
* Add documentation for underscore separators for the math builtin (see the example below)
* Add a changelog entry for underscore numeric separators
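A quick sketch of the new separator support:

```fish
# Underscores in numeric literals are ignored, so digits can be grouped
math 5_000_000 + 37    # prints 5000037
```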
We can't always read in chunks because we often can't afford to
overread:
```fish
echo foo\nbar | begin
read -l foo
read -l bar
end
```
needs the first `read` to consume `foo` and the second to consume `bar`. So
here we can only read one byte at a time.
However, when we are directly redirected:
```fish
echo foo | read foo
```
we can, because the data is only for us anyway. The stream will be
closed afterwards, so anything not read just goes away; nobody else is
there to read it.
This dramatically speeds up `read` of long lines through a pipe. How
much depends on the length of the line.
With lines of 5000 characters it's about 15x, with lines of 50
characters about 2x, lines of 5 characters about 1.07x.
See #8542.
This is the simple fix - if we have no valid digit, we have nothing to
return. So instead of returning a NULL, we return an error.
This is already the case for invalid octal escapes (like `\777`).
Fixes #8545
This reverts commits:
2d9e51b43ed1d9f147ec346ce8081b
The box-drawing change is included because it's entangled with the rest, and we
don't currently use it anywhere I know of. Nor was it gated on terminfo,
so it could have broken things, for subjectively little gain.
Fixes#8727.
A history search ends when you move the cursor, but the commandline inserted by
history search is still marked as transient. As a result, the next history
search will clear the transient commandline, which means we drop an undo
point, for example:
echo 11
echo 1
echo autosuggestion
echo^P # commandline is "echo 1"
^A # stop history search
^P # commandline is "echo 11"
^Z # Bug: commandline goes back to "echo", but it should be "echo 1"
In the worst case, we are switching from line-search to token-search (see
the attached test case). Clearing the transient edit means the line is gone
and only the token is left on the command line.
Today, a command like "var=val status " has custom completions
because we skip over the var=val variable override when detecting
the command token.
However, if the custom completions read the commandline state (via
"commandline -opc") they do see the variable override, which most
likely breaks them. Try "a=b git ".
For completions of wrapped commands, we already set a transient
commandline. Do the same for commands with leading variable overrides;
then git completions for "a=b git " will think the commandline is
"git ".
Previously, when we encountered an unknown option with --ignore-unknown, we
would increment woptind but still try to read the same contents.
This meant that in e.g.
```fish
argparse -i h -- -ooo -h
```
The `-h` would also be skipped as an option, because after the first
`-o` getopt reads the other two `-o` and skips that many options.
This could be handled more extensively in wgetopt, but the simpler fix
is to just skip to the next argv entry once we have an unknown option
- there's nothing more we can do with it anyway!
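A sketch of the expected behavior after the fix:

```fish
argparse -i h -- -ooo -h
# -h is now recognized, so $_flag_h is set, and the unknown -ooo is
# passed through untouched:
echo $argv    # prints: -ooo
```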
Additionally, document this and clearly explain that we currently
don't transform the option.
Fixes #8637
fish_git_prompt may run git commands that can invoke external
programs, as specified in `.git/config`. Prevent this by suppressing
the relevant git config options.
This affects the caret position. In an expression like
123 456
we previously reported:
123 456
^ missing operator
Now we do:
123 456
   ^ missing operator
We do it on the first space, which should be acceptable.
(no need for a changelog entry, we have already ignored #8511)