`cargo search` can be used to quickly get crates matching a search string, so
for first-arg completions of `cargo add` and `cargo install` we can pass the
current token to `cargo search` to look up matches.
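A minimal sketch of that idea, assuming we parse `cargo search`'s usual
`name = "version"    # description` output (the function name here is made up,
not the actual completion script):
```fish
# Sketch: feed the current token to `cargo search` and keep only the crate names.
function __fish_cargo_search_crates
    set -l token (commandline --current-token)
    cargo search $token 2>/dev/null | string replace --regex --filter '^([\w-]+) = .*' '$1'
end
complete -c cargo -n '__fish_seen_subcommand_from add install' -xa '(__fish_cargo_search_crates)'
```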
`cargo search` doesn't restrict itself to (nor prioritize) prefix matches,
while fish will only display prefix matches for dynamically generated
completions. So it's perfectly possible for `cargo search foo` to return 20
results, none of which produce a completion, but for a further-narrowed
completion of `cargo install foob^I` to then produce completions, because
`cargo search` happened to return a prefix match for `foob` when it didn't
for `foo`.
The only other out-of-the-box cargo subcommand that takes a crate name (one
that isn't the name of a crate specified in `Cargo.toml`) is `cargo search`
itself, but there's no point in providing completions for that... I think
(it's possible to search for crate "foo" in order to get its latest version
number rather than its name, but I'm not sure that's worth supporting).
This expands completions of `cargo^I` to list any commands named `cargo-xxx`
as cargo subcommands invokable as `cargo xxx`, in addition to the
out-of-the-box subcommands cargo ships with.
(This is very similar to how git allows users to shim their own subcommands.)
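Roughly, the mechanism looks like this (a sketch, not the exact completion
code; the helper name is made up):
```fish
# Collect external cargo-* commands from $PATH and offer them, minus the
# "cargo-" prefix, as cargo subcommands.
function __fish_cargo_external_subcommands
    set -l cmds $PATH/cargo-*    # `set` tolerates globs that match nothing
    set -q cmds[1]; or return
    path basename $cmds | string replace -r '^cargo-' ''
end
complete -c cargo -n __fish_use_subcommand -a '(__fish_cargo_external_subcommands)'
```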
NOTE: This would remain useful even after cargo someday moves to clap and
generates, or even ships/installs, an official machine-generated `cargo.fish`
completions script.
The old way of generating cargo completions no longer works, so we need to
maintain the completions manually until clap supports generating them[1].
[1]: https://github.com/clap-rs/clap/issues/3166
When selecting items in the pager, only the latest of those items is kept
in the edit history, as a so-called transient edit. Each new transient edit
evicts any old transient edit (via undo).
If the pager is closed by a command that performs another transient edit
(like history-token-search-backward), we thus inadvertently undo (= remove)
the token inserted by the pager. Fix this by closing the transient edit
session when closing the pager. Token search will start its own session.
Fixes #9160
strncpy will fill the entire buffer with NUL.
In this case we have a 128 byte buffer and write "empty" - 5 bytes -
into it.
So instead of writing 6 bytes (the string plus its terminating NUL) it'll
write 128 bytes. That's especially wasteful because we already did a memset
before.
This fixes a crash when you open the history pager and then do
history-token-search-backward (e.g. alt+. or alt-up).
It would sometimes crash because `colors.at(i)` was an
out-of-bounds access.
Note: This might still leave the highlighting offset in some
cases (not quite sure why), but at least it doesn't *crash*, and the
search generally *works*.
This used `realpath -eq`, which for GNU realpath:
1. Suppresses "most error messages" (-q)
2. Requires that all parts exist (-e), rather than allowing the last one
not to
Since we don't actually need a real path here, just filter for existence
instead.
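Something like this, where `$dirs` is just a stand-in for whatever list of
paths we're checking:
```fish
# `path filter` with no further options keeps only the paths that exist,
# which is all we needed from `realpath -eq` here.
set -l existing (path filter $dirs)
```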
Fixes #9099
* added completions for sad and added a note in the changelog
* ran fish_indent on the completion file
* split -h and --help into two distinct completion options
This was written while we changed how our synopses are formatted, so
we missed adding a "synopsis" marker to it.
The tokenizer here is a bit cheesy, so we can't mark continuation
lines with a "\", and we also can't mark the general options with a
":=". Tbh that's not a big deal.
Fixes #9154
This starts two sleep processes and expects them to be killed on
SIGHUP.
Unfortunately, if this ever fails, the second run will also fail because
it'll see the old sleeps still lying around (they run for 130 seconds).
So, what we do is:
1. Keep the pids for these specific sleeps
2. Check if any of them are still running (and only fail for those)
3. Kill them from python
Fixes #9152
This reverts commit 3e556b984c.
Revert "Further fix the issue and add the assert that'd have prevented it."
This reverts commit 056502001e.
Revert "Fix actual issue with allow_use_posix_spawn."
This reverts commit 85b9f3c71f.
Revert "Stop using posix_spawn when it is not allowed"
This reverts commit 9c896e1990.
Revert "don't even set up a fish_use_posix_spawn handler if unsupported"
This reverts commit 8b14ac4a9c.
Commit 8b14ac4a9c started using
posix_spawn even if allow_use_posix_spawn() returns false. Stop doing
that.
This may be reproduced with:
./docker/docker_run_tests.sh ./docker/centos7.Dockerfile
as centos7 has a too-old glibc.
Let's hope this doesn't cause build failures for e.g. musl: I just
know it's good on macOS and our Linux CI.
It's been a long time.
One fix this brings is that I discovered we #include assert.h or cassert
in a lot of places. If those ever happen to be in a file that doesn't
include common.h, or come before common.h gets included, we're
unwittingly working with the system 'assert' macro again, which
may get disabled depending on the build or at least has different
behavior on crash. We undef 'assert' and redefine it in common.h.
Those were all eliminated, except in one catch-22 spot for
maybe.h: it can't include common.h. A fix might be to
make a fish_assert.h that common.h *usually* exports.
Fixed a line or two that tripped IWYU asserts about visibility
when doing e.g. a private -> public mapping where the visibility
it came up with was identical. Like the <iosfwd> to <string>
mapping: it was defined as private -> public, but they're both
"public".
Added a whole bunch of lines necessary to get sane/correct
recommendations from current IWYU on clang 10 on macOS Ventura.
I incrementally added these by hand, as needed, while going through
each line change IWYU wanted in each file.
This is a *tiny* commit code-wise, but the explanation is a bit
longer.
When I made string read in chunks, I picked a chunk size from bash's
read, under the assumption that they had picked a good one.
It turns out, on the (linux) systems I've tested, that's simply not
true.
My tests show that a bigger chunk size of up to 4096 is better *across
the board*:
- It's better with very large inputs
- It's equal-to-slightly-better with small inputs
- It's equal-to-slightly-better even if we quit early
My test setup:
0. Create various fish builds with various sizes for
STRING_CHUNK_SIZE, name them "fish-$CHUNKSIZE".
1. Download the npm package names from
https://github.com/nice-registry/all-the-package-names/blob/master/names.json (I
used commit 87451ea77562a0b1b32550124e3ab4a657bf166c, so it's 46.8MB)
2. Extract the names so we get a line-based version:
```fish
jq '.[]' names.json | string trim -c '"' >/tmp/all
```
3. Create various sizes of random extracts:
```fish
for f in 10000 1000 500 50
shuf /tmp/all | head -n $f > /tmp/$f
end
```
(the idea here is to defeat any form of pattern in the input).
4. Run benchmarks:
```fish
hyperfine -w 3 ./fish-{128,512,1024,2048,4096}" -c 'for i in (seq 1000)
    string match -re foot < $f
end; true'"
```
(reduce the seq size for the larger files so you don't have to wait
for hours - the idea here is to have some time running string and not
just fish startup time)
This shows results pretty much like
```
Summary
'./fish-2048 -c 'for i in (seq 1000)
string match -re foot < /tmp/500
end; true'' ran
1.01 ± 0.02 times faster than './fish-4096 -c 'for i in (seq 1000)
string match -re foot < /tmp/500
end; true''
1.02 ± 0.03 times faster than './fish-1024 -c 'for i in (seq 1000)
string match -re foot < /tmp/500
end; true''
1.08 ± 0.03 times faster than './fish-512 -c 'for i in (seq 1000)
string match -re foot < /tmp/500
end; true''
1.47 ± 0.07 times faster than './fish-128 -c 'for i in (seq 1000)
string match -re foot < /tmp/500
end; true''
```
So we see that up to 1024 there's a real difference, and after that the
returns are marginal. We stick with 1024 because of the memory
trade-off.
----
Fun extra:
Comparisons with `grep` (GNU grep 3.7) are *weird*, because you get
both
```
'./fish-4096 -c 'for i in (seq 100); string match -re foot < /tmp/500; end; true'' ran
11.65 ± 0.23 times faster than 'fish -c 'for i in (seq 100); command grep foot /tmp/500; end''
```
and
```
'fish -c 'for i in (seq 2); command grep foot /tmp/all; end'' ran
66.34 ± 3.00 times faster than './fish-4096 -c 'for i in (seq 2);
string match -re foot < /tmp/all; end; true''
100.05 ± 4.31 times faster than './fish-128 -c 'for i in (seq 2);
string match -re foot < /tmp/all; end; true''
```
Basically, if you *can* give grep a lot of work at once (~40MB in this
case), it'll churn through it like butter. But if you have to call it
a lot, string beats it by virtue of cheating.
clang-format (since version 10) can output diagnostics which indicate the
lines needing formatting with --dry-run and -Werror: the exit
code indicates whether a file is correctly formatted or not.
We used to copy each .cpp file, run clang_format on the duplicate
and then `cmp` to see if there were changes made, before just
printing a line with the filename and moving the new file on top of
the original.
Now we show the clang-format diagnostics that indicate which
lines will be changed, prompt for confirmation, and then let
clang-format modify the files in-place, without the juggling.
Looks like this: https://user-images.githubusercontent.com/291142/184561633-c16754c8-179e-426b-ba15-345ba65b9cf9.png
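In fish terms the new flow is roughly this (a sketch, not the actual
formatting script; the file list and prompt are made up):
```fish
for file in src/*.cpp
    # --dry-run prints the would-be changes as diagnostics; with -Werror the
    # exit status is nonzero when the file isn't formatted correctly.
    if not clang-format --dry-run -Werror $file
        read --prompt-str "Reformat $file? [y/N] " answer
        test "$answer" = y; and clang-format -i $file
    end
end
```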
Rephrase this to more explicitly indicate that the uvar actually
was successfully set. I believe the prior phrasing can leave some
ambiguity as to whether set just failed with an error, or whether it
did anything at all.
Now uses the same macro other builtins use for a missing -e arg,
and the error message shows the short or long option as it was used.
e.g. before
$ set -e
set: Erase needs a variable name
after
$ set --erase
set: --erase: option requires an argument
$ set -e
set: -e: option requires an argument
This moves the stuff that creates skeleton/boilerplate files to
the same place we initialize uvars for the first time or on upgrade.
Being a bit less aggressive here theoretically makes launch a little
lighter, but really I personally just found it weird that I couldn't
delete my empty config.fish file without it getting recreated
and sourced every launch.