This means "../" components are cancelled out even after non-existent
paths or files.
(the alternative is to error out, but being able to say `path resolve
/path/to/file/../../` over `path resolve (path dirname
/path/to/file)/../../` seems worth it?)
This sorts paths by basename, dirname or full path - in future
possibly size or age.
It takes --invert to invert the sort and "--what=basename|dirname|..."
to specify what to sort by.
This can be used to implement better conf.d sorting, with something
like
```fish
set -l sourcelist
for file in (path sort --what=basename $__fish_config_dir/conf.d/*.fish $__fish_sysconf_dir/conf.d/*.fish $vendor_confdirs/*.fish)
```
which will iterate over the files by their basename. Then we keep a
list of their basenames to skip over anything that was already
sourced, like before.
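A hedged sketch of what that dedup loop could look like (assuming the `path base` subcommand described further down, plus the usual `contains`/`source` builtins):
```fish
set -l sourcelist
for file in (path sort --what=basename $__fish_config_dir/conf.d/*.fish $__fish_sysconf_dir/conf.d/*.fish $vendor_confdirs/*.fish)
    set -l base (path base $file)
    # skip anything whose basename was already sourced
    contains -- $base $sourcelist; and continue
    set -a sourcelist $base
    source $file
end
```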
The recent change to skip the newline for `string` changed this, and
it also hit builtin path (which is in development separately, so it's
not like it broke master).
Let's pick a good default here.
This just goes back until it finds an existent path, resolves that,
and adds the normalized rest on top.
So if you try
/bin/foo/bar////../baz
and /bin exists as a symlink to /usr/bin, it would resolve that, and
normalize the rest, giving
/usr/bin/foo/baz
(note: We might want to add this to realpath as well?)
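As a hedged illustration (assuming /bin really is a symlink to /usr/bin, and using the `path resolve` name from above):
```fish
> path resolve /bin/foo/bar////../baz
/usr/bin/foo/baz
```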
This includes the "." in what `path extension` prints.
This allows distinguishing between an empty extension (just `.`) and a
non-existent extension (no `.` at all).
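A quick hypothetical illustration of the difference:
```fish
> path extension foo.mp4
.mp4
> path extension foo.
.
> path extension foo
# no output at all: there is no extension
```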
These are short flags for "--perm=read" and "--type=link" and such.
Not every type or permission has a shorthand - we don't want "-s" for
"suid". So just the big three each get one.
This is needed because you might feasibly give e.g. `path filter`
globs to further match, and those globs might already produce no results.
It's also well-handled since path simply does nothing if given no paths.
These were officially called "--null-input", but I just used
"--null-in" everywhere, which worked because getopt allows unambiguous abbreviations.
But since *I* couldn't keep it straight and the "put" is just
superfluous, let's remove it.
This is theoretically sound, because a path can only be PATH_MAX - 1
bytes long, so if the input really is null-separated, a NULL has to show
up within the first PATH_MAX bytes.
The one case this could break is when something has a NULL-output mode
but doesn't bother printing the NULL for only one path, and that path
contains a newline. So we leave --null-in there, to force it on.
This adds a "path" builtin that can handle paths.
Implemented so far:
- "path filter PATHS", filters paths according to existence and optionally type and permissions
- "path base" and "path dir", run basename and dirname, respectively
- "path extension PATHS", prints the extension, if any
- "path strip-extension", prints the path without the extension
- "path normalize PATHS", normalizes paths - removing "/./" components
- and such.
- "path real", does realpath - i.e. normalizing *and* link resolution.
Some of these - base, dir, {strip-,}extension and normalize - operate on the paths purely as strings, so they handle nonexistent paths just fine. filter and real ignore any nonexistent paths.
All output is split explicitly, so paths with newlines in them are
handled correctly. Alternatively, all subcommands have a "--null-input"/"-z" and "--null-output"/"-Z" option to handle null-terminated input and create null-terminated output. So
find . -print0 | path base -z
prints the basename of all files in the current directory,
recursively.
With "-Z" it also prints it null-separated.
(if stdout is going to a command substitution, we probably want to
skip this)
All subcommands also have a "-q"/"--quiet" flag that tells them to skip output. They return true "when something happened". For match/filter that's when a file passed, for "base"/"dir"/"extension"/"strip-extension" that's when something about the path *changed*.
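For instance, a hypothetical quiet check could look like:
```fish
# true if at least one of these paths actually exists
if path filter -q ~/.config/fish/conf.d/*.fish
    echo "there are snippets to source"
end
```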
Filtering
---------
`filter` supports all the file*types* `test` has - "dir", "file", "link", "block"..., as well as the permissions - "read", "write", "exec" and things like "suid".
It is missing the tty check and the check for the file being non-empty. The former is best done via `isatty`, the latter I don't think I've ever seen used.
There currently is no way to only get "real" files, i.e. ignore links pointing to files.
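For example (illustrative; the option spelling follows the --type=/--perm= form above):
```fish
# everything directly under /usr/bin that has the setuid bit set
path filter --perm=suid /usr/bin/*
```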
Examples
--------
```fish
> path real /bin///sh
/usr/bin/bash
> path extension foo.mp4
mp4
> path extension ~/.config
```
(nothing, because ".config" isn't an extension.)
This teaches `--on-signal SIGINT` (and by extension `trap cmd SIGINT`)
to work properly in scripts, not just interactively. Note any such
function will suppress the default behavior of exiting. Do this for
SIGTERM as well.
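A minimal sketch of what now works in a non-interactive script (note the handler has to exit itself if that's what you want, since the default exit is suppressed):
```fish
#!/usr/bin/env fish
function on_interrupt --on-signal SIGINT
    echo "caught SIGINT, cleaning up"
    exit 1
end
sleep 100
```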
s_observed_signals is used to inform the signal handler which signals may
have --on-signal functions attached to them, as an optimization. Prior to
this change it was latched: once we started observing a signal we assume we
will keep observing that signal. Make it properly increment and decrement,
in preparation for making trap work non-interactively.
Like `set` and `read` before it, `eval` can be used to set variables,
and so it can't be shadowed by a function without loss of
functionality.
So this forbids it.
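So, just like redefining `set` or `read`, this is now rejected (sketch; the exact error message isn't the point):
```fish
> function eval; echo "my own eval"; end
# now an error: eval is a reserved name, like set and read
```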
Incidentally, this means we will no longer try to autoload an
`eval.fish` file that's left over from an old version, which would
have helped with #8963.
This concerns what happens if one event handler removes another, when
both are responding to the same event. Previously we had a "double lock"
where we would traverse the list twice. Now track directly in the
handler when it is removed; this simplifies the code a lot. No
functional changes expected here.
Hitting tab on "echo **" will often result in more than 256 matches.
Commit 143757e8c (Expand wildcards on tab, 2021-11-27) describes this scenario
> If the expansion would produce more than 256 items, we flash the command
> line and do nothing, since it would make the commandline overfull.
Yet we actually erase the "**" token, which seems wrong since we already
flash the command line. Fix this, at the cost of making the code a bit uglier.
I tried to write a test in tests/pexpects/wildcard_tab.py but that doesn't
seem to work because pexpect provides only a "dumb" terminal. I wonder if we
can test what we write to the screen without depending on a terminal emulator.
c4fb857dac (in 3.4.1) introduced a regression where process_exit
events would only fire once the job itself is complete. Allow
process_exit events to fire before that. Fixes #8914.
This is after we've tried to find the interpreter, so we would already
have complained about e.g. /usr/bin/pthyon not existing.
Realistically the most common case here is things that don't start
with a shebang, like ELFs. Writing special extraction code here is
overkill, and I can't see a good function to do it for us.
But this should point you in the right direction.
Fixes #8938
This gets the passwd entry for $USER (if it is set). If that gives the
same uid that geteuid() gives us, we assume the data is correct.
If not, we reset $USER (and $HOME if it's empty) from the passwd value for our UID.
This allows using $USER in a prompt even if you've `su`d. Bash gets around this by having a special escape in its $PS1 DSL that checks passwd instead.
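So a prompt can simply use the variable, even after `su` (minimal sketch):
```fish
function fish_prompt
    # $USER now reliably matches the effective user
    echo -n "$USER > "
end
```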
Fixes #8583
This reverts commit ccb6cb1abe.
CI fails with
/home/runner/work/fish-shell/fish-shell/src/autoload.cpp:148:1: error: function ‘autoload_t::autoload_t(autoload_t&&)’ defaulted on its redeclaration with an exception-specification that differs from the implicit exception-specification ‘’
148 | autoload_t::autoload_t(autoload_t &&) noexcept = default;
| ^~~~~~~~~~
make[2]: *** [CMakeFiles/fishlib.dir/build.make:96: CMakeFiles/fishlib.dir/src/autoload.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:369: CMakeFiles/fishlib.dir/all] Error 2
make: *** [Makefile:139: all] Error 2
Not sure what's wrong - it compiles fine on my machine. Will check later.
Even though we disable exceptions, we use noexcept in some
places to enable certain optimizations in std::vector, see
https://en.cppreference.com/w/cpp/utility/move_if_noexcept.
Some methods have noexcept only at their declaration (or only at the
definition). This will be an error when compiling with "g++ -std=c++17". Make
both signatures match.
This cleans up the path_get_path function which is used to resolve a
command name against $PATH, by removing the dependence on errno and
being explicit about which error is returned.
Should be no user-visible change here.
Curses variables like `enter_italics_mode` are secretly defined to
dereference through the `cur_term` variable. Be sure we do not read or
write these curses variables if cur_term is NULL. See #8873, #8875.
Add a regression test.
Apple's terminfo is missing support for enter_italics_mode,
exit_italics_mode, and enter_dim_mode. Previously we would hack in such
support in set_color; migrate that to init_curses so we do it up-front
instead of opportunistically.
To recap, this means `&` in the middle of a word no longer
backgrounds.
So:
```fish
echo foo&bar # prints foo&bar
echo foo& bar # backgrounds an echo that prints "foo" and runs "bar"
```
This can no longer be changed. If "no-stderr-nocaret" is in
$fish_features it will simply be ignored.
The "^" redirection that was deprecated in fish 3.0 is now gone for good.
Note: For testing reasons, it can still be set _internally_ by running
"feature_flags_t::set". We simply shouldn't do that.
If we get an E2BIG while executing a process, we check how large the
exported variables are. We already did this, but then immediately
added it to the total.
So now we keep a separate tally just for the variables, and if it's
over half of ARG_MAX (which would be an unusual amount, given that a
typical system has an ARG_MAX of 2MB), we mention that in the error.
Figuring out which variable is too big (in case it's just one) is probably too complicated,
but we can at least complain if things seem suspect.
Untested because I don't know *how* to do so portably.
Prior to this change, if you tab-completed a token with a wildcard (glob), we
would invoke ordinary completions. Instead, expand the wildcard, replacing
the wildcard with the result of expansions. If the wildcard fails to expand,
flash the command line to signal an error and do not modify it.
Example:
> touch file(seq 4)
> echo file*<tab>
becomes:
> echo file1 file2 file3 file4
whereas before the tab would have just added a space.
Some things to note:
1. If the expansion would produce more than 256 items, we flash the command
line and do nothing, since it would make the commandline overfull.
2. The wildcard token can be brought back through Undo (ctrl-Z).
3. This only kicks in if the wildcard is in the "path component
containing the cursor." If the wildcard is in a previous component,
we continue using completions as normal.
Fixes #954.
When fish expands a string that starts with a tilde, like `~/stuff/*`, it
first must resolve the tilde (e.g. to the user's home directory) before
passing it to wildcard expansion. The wildcard expansion will produce full
paths like `/home/user/stuff/file`. fish then "unexpands" the home directory
back to a tilde.
Previously this was only used during completions, but in the next commit
we plan to use it for string expansions as well.
Rationalize this behavior by adding an explicit flag to request it and
explain some subtleties about completions.
This commit was problematic for a few reasons:
1. It silently changed the behavior of argparse, by switching which
characters were replaced with `_` from non-alphanumeric to punctuation.
This is a potentially breaking change and there doesn't appear to be any
justification for it.
2. It combines a one-line if with a multi-line else which we should try
to avoid.
This reverts commit 63bd4eda55.
This reverts commit 4f835a0f0f.
These macros were historically used only in internal error messages which
should never happen! Now we are able to enforce they never happen at
compile time so we can remove them.
No functional change here.
If we ever need any of these... they're in this commit:
fish_wcswidth_visible()
status_cmd_opts_t::feature_name
completion_t::is_naturally_less_than()
parser_t::set_empty_var_and_fire()
parser_t::get_block_desc()
parser_keywords_skip_arguments()
parser_keywords_is_block()
job_t::has_internal_proc()
When you do
```fish
set foo-bar baz
```
"foo-baz" isn't usable as a variable *name*. When you just say the
"variable" is invalid that could also be interpreted to be a special
type of variable or something.
String tokens are subdivided by command substitutions. Some syntax errors
can occur in the gap between two command substitutions. Make the caret point
to the start of that gap, instead of the token start.
When expanding command substitutions, we use a naïve way of detecting whether
the cmdsub has the optional leading dollar. We check if the last character was
a dollar, which breaks if it's an escaped dollar. We wrongly expand
\$(echo "") to the empty string. Fix this by checking if the dollar was escaped.
The parse_util_* functions have a bunch of output parameters. We should
return a parameter bag instead (I think I tried once and failed).
Given
```fish
set var a
echo "$var$(echo b)"
```
the double-quoted string is expanded right-to-left, so we construct an
intermediate "$varb". Since the variable "varb" is undefined, this wrongly
expands to the empty string (should be "ab"). Fix this by isolating the
expanded command substitution internally. We do the same when handling
unquoted command substitutions.
Fixes #8849
1. Bravely use a real enum for has_arg, despite the warnings.
2. Use some C++11 initializers so we don't have to pass an int for this
parameter.
No functional change expected here.
If the history file is larger than 4GB on a 32 bit system, fish will
refuse to read it. However the check was incorrect because it cast the
file size to size_t, which may be 32 bit. Switch to using uint64.
fd_monitor is used when an external command pipes into a buffer, e.g. for
command substitutions. It monitors the read end of the external command's
pipe in the background, and fills the buffer as data arrives. fd_monitor is
multiplexed, so multiple buffers can be monitored at once by a single
thread.
It may happen that there's no active buffer fill; in this case fd_monitor
wants to keep its thread alive for a little bit in case a new one arrives.
This is useful for e.g. handling loops where you run the same command
multiple times.
However there was a bug due to a refactoring which caused fd_monitor to
exit too aggressively. This didn't affect correctness but it meant more
thread creation and teardown.
Fix this; this improves the aliases.fish benchmark by about 20 msec.
No need to changelog this IMO.
This was already apparently supposed to work, but didn't because we
just overrode errno again.
This now means that, if a correctly named candidate exists, we don't
start the command-not-found handler.
See #8804
This used to call exec_subshell, which has two issues:
1. It creates a command substitution block which shows up in a stack
trace
2. It does much more work than necessary
This removes a useless "in command substitution" from an error message
in an autoloaded file, and it speeds up autoloading a bit (not
measurable in actual benchmarks, but microbenchmarks are 2x).
We need special handling when reporting backtraces for commands run
during startup, i.e. config.fish. Previously we had a global variable;
make it local to the parser to eliminate a global.
No functional change here.
Cancellation groups were meant to reflect the following idea: if you ran a
simple block:
```fish
begin
    cmd1
    cmd2
end
```
then under job control, cmd1 and cmd2 would get separate groups; however if
either exits due to SIGINT or SIGQUIT we also want to propagate that to the
outer block. So the outermost block and its interior jobs would share a
cancellation group. However this is more complex than necessary; it's
sufficient for the execution context to just store an int internally.
This ought not to affect anything user-visible.
Currently, when a variable like $fish_color_command is set but empty:
set -g fish_color_command
what happens is that highlight parses it and ends up with a "normal"
color.
Change it so instead it sees that the variable is empty and goes
on to check the fallback variable, e.g. fish_color_normal.
That makes it easier to make themes that override variables.
This means that older themes that expect an empty variable to be
"normal" need to be updated to set it to "normal".
Following from this, we could make writing .theme files easier by no
longer requiring them to list all variables with specific values.
Either the theme reader could be updated to implicitly set known color
variables to empty, or the themes could feature empty values.
See #8787.
fish reads the tty modes at startup, and tries to restore them to the
original values on exit, to be polite. However this causes problems when
fish is run in a pipeline with another process which also messes with the
tty modes. Example:
fish -c 'echo foo' | vim -
Here vim's manipulation of the tty would race with fish, and often vim
would end up with broken modes.
Only restore the tty if we are interactive. Fixes #8705.
This is a big cleanup to how tty transfer works. Recall that when job
control is active, we transfer the tty to jobs via tcsetpgrp().
Previously, transferring was done "as needed" in continue_job. That is, if
we are running a job, and the job wants the terminal and does not have it,
we will transfer the tty at that point.
This got pretty weird when running mixed pipelines. For example:
cmd1 | func1 | cmd2
Here we would run `func1` before calling continue_job. Thus the tty
would be transferred by the nested function invocation, and also restored
by that invocation, potentially racing with tty manipulation from cmd1 or
cmd2.
In the new model, migrate the tty transfer responsibility outside of
continue_job. The caller of continue_job is then responsible for setting up
the tty. There's two places where this gets done:
1. In `exec_job`, where we run a job for the first time.
2. In `builtin_fg` where we continue a stopped job in the foreground.
Fixes #8699
This is a cleanup of job groups, rationalizing a bunch of stuff. Some
notable changes (none user-visible hopefully):
1. Previously, if a job group wanted a pgid, then we would assign it to the
first process to run in the job group. Now we deliberately mark which
process will own the pgroup, via a new `leads_pgrp` flag in process_t. This
eliminates a source of ambiguity.
2. Previously, if a job were run inside fish's pgroup, we would set fish's
pgroup as the group of the job. But this meant we had to check if the job
had fish's pgroup in lots of places, for example when calling tcsetpgrp.
Now a job group only has a pgrp if that pgrp is external (i.e. the job is
under job control).
* Turn on default bindings for --no-config mode
The fallback bindings are super awkward to use.
This was called out specifically in #7921, I'm going for the targeted
fix for now.
* Only change keybindings when interactive
That's also when we'd source them normally.
These were changed in fish 3.0 in December 2018.
This means upgrading from fish 2.7.1 or earlier to the next fish
version will require users to set their universal variables again.
This just defines a constant to whichever tparm implementation we're
using (either the actual, working one the system provides, or our
kludge to paper over Solaris' inadequacies).
This means that there won't be so much ping-ponging of what "tparm"
stands for. "tparm" is the system's function. Only we don't use it,
just like we don't use wcstod directly.
Fixes #8780
* New -n flag for string join command.
This is an argument that excludes empty result items. Fixes #8351
* New documentation for string-join.
The new argument --no-empty was added to the string-join manpage.
* New completions for the new -n flag for string join.
* Remove the documentation of the new -n flag of string join0
The reason to remove this new argument from join0 is that the flag basically makes no difference for join0.
* Refactor the validation for the string join.
The string join command was using the length of the argument; this commit changes the validation to use the empty function.
* Revert #4b56ab452
The reason for the revert is that the build broke on Ubuntu in the GitHub Actions.
* Revert #e72e239a1
The reason the compilation on GitHub broke is that the test was weird - it didn't even run it. Common CI systems are typically very resource-constrained.
* Resolve conflicts in the string-join.rst.
* Resolve conflicts in the "string-join.rst".
commit #1242d0fd7 did not fix all conflicts.
This is supposed to detect color escape sequences, to figure out how
long an escape sequence is, for use in width calculations.
However, the typical color sequences are already taken care of by
is_csi_style_escape_seq because they look like a csi sequence starting
with `\e[` and ending in `m`.
In the entire terminfo database shipped with ncurses 6.3, these are
the terminals that have non-csi color sequences:
at-color
atari-color
atari_st-color
d220-dg
d230-dg
d230c-dg
d430-dg
d430-unix
d430-unix-25
d430-unix-s
d430-unix-sr
d430-unix-w
d430c-dg
d430c-unix
d430c-unix-25
d430c-unix-s
d430c-unix-sr
d430c-unix-w
d470-dg
d470c-dg
dg+fixed
dgmode+color
dgmode+color8
dgunix+fixed
emu
fbterm
i3164
ibm3164
linux-m1b
linux-m2
minitel1
minitel1b
putty-m1b
putty-m2
st52-color
tt52
tw52
tw52-color
xterm-8bit
Most of these were discontinued in the 90s and their manufacturers no
longer exist (like Data General, which went defunct in 1999). The last one is a special mode for xterm that is
fundamentally UTF-8 incompatible because it encodes a CSI as \x9b.
The linux/putty m1b and m2 entries (also for minitel) don't support
color to begin with and the sequences they have in their terminfo
entries are control characters anyway, so the calculation would still
add up.
In turn, what we gain from this is much faster width calculations with
unrecognized escapes -
e.g. `string length -V \efoo` is sped up by a factor of 20.
An alternative would be to skip this if max_colors is > 16 as that is
the most any of these entries can do. The runtime scales linearly with
the number of colors so on those systems it would be reasonably quick anyway.
But given just *how* outdated these are I believe it is okay to just
remove support outright. I do not believe anyone has ever run fish on
any of these.
* Implement fish_wcstod_underscores
* Add fish_wcstod_underscores unit tests
* Switch to using fish_wcstod_underscores in tinyexpr
* Add tests for math builtin underscore separator functionality
* Add documentation for underscore separators for math builtin
* Add a changelog entry for underscore numeric separators
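E.g. (illustrative):
```fish
> math 1_000_000 / 1_000
1000
```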
We can't always read in chunks because we often can't bear to
overread:
```fish
echo foo\nbar | begin
read -l foo
read -l bar
end
```
needs the first `read` to get `foo` and the second to get `bar`. So
here we can only read one byte at a time.
However, when we are directly redirected:
```fish
echo foo | read foo
```
we can, because the data is only for us anyway. The stream will be
closed after, so anything not read just goes away. Nobody else is
there to read.
This dramatically speeds up `read` of long lines through a pipe. How
much depends on the length of the line.
With lines of 5000 characters it's about 15x, with lines of 50
characters about 2x, and with lines of 5 characters about 1.07x.
See #8542.
This is the simple fix - if we have no valid digit, we have nothing to
return. So instead of returning a NULL, we return an error.
This is already the case for invalid octal escapes (like `\777`).
Fixes #8545
This reverts commits:
2d9e51b43ed1d9f147ec346ce8081b
The box drawing because it's entangled with the rest and we don't
currently use this anywhere I know of. Nor was it gated on terminfo,
so it could have broken things, for subjectively little gain.
Fixes #8727.
This updates widechar_width.h to one generated from
15e782aa3df9dfef436516f66f745a90b421329.
The change here is a rationalization of doublewide vs widened-in-9.
Many emoji have been moved to widened-in-9 because we now use the
correct version (this uses the *emoji* version, and emoji version 3.0
corresponds to Unicode 9).
The new --escape option means that -C is not necessarily the last option,
so we can end up producing a bogus error:
$ fish -c 'complete -C --escape'
complete: --escape: option requires an argument
--escape doesn't take arguments, so let the error message say -C.