We have a *lot* of color sequences to try, and tparm is slow when you
call it thousands of times.
So let's just check colors last, which makes everything else (which is
comparatively cheap) faster, while barely impacting
colors (benchmarking shows no measurable difference).
Fixes #8253.
* Fix ls.fish: add -l option to GNU ls
* Sort the options alphabetically and remove --lcontext and --scontext (what are these?) from the shared and GNU parts.
* Revert the removal of the --lcontext and --scontext options.
Unfortunately, we now need to know which .html file contains which
sections in order to link to the correct one in help.fish.
So this script helps extract the sections from pre-built docs. It's
not supposed to be run at build time because
1. These change rarely.
2. We should link to the correct document even if the user doesn't
have the docs built.
And before anyone mentions it: This does *not* parse html with regex.
This "parses" the restricted subset of "class followed by href without
embedded quotes" that sphinx uses here in practice.
This should add all the sections that aren't linked internally,
including "identifiers".
(Also give up on the line breaking, because it makes it annoying to do
automatically.)
Fixes #8245.
Fixes #8232.
Note that this required using expect_prompt in the pexpect test -
we might want to add a "catchup" there so you can just ignore the
prompt counter for a bit and pick it back up later.
This was semi-automated with
```fish
for file in $argv
    # Build a variable name like $faqpages from the file name.
    set -l varname (string replace -r '.*/(.*).html' '$1' -- $file | string escape --style=var)pages
    # Collect every anchor from sphinx's "headerlink" links in the page.
    set -l sections (string replace -rf '.*class="headerlink" href="#([^"]*)".*' '$1' <$file)
    echo set -l $varname $sections
end
```
(where $argv contains the paths to faq.html, fish_for_bash_users.html,
interactive.html, language.html and tutorial.html)
Building help.fish at compile time would work, but only for users who
build the docs.
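For reference, a rough sketch of how the generated lists might be
consumed; the function name and structure here are hypothetical, only
the $faqpages/$tutorialpages names come from the script above:
```fish
# Hypothetical sketch, not the actual help.fish code: pick the page whose
# section list contains the requested topic.
function __help_page_for --argument-names topic
    if contains -- $topic $faqpages
        echo "faq.html#$topic"
    else if contains -- $topic $tutorialpages
        echo "tutorial.html#$topic"
    else
        echo "index.html"
    end
end
```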
* Remove safe_strerror, safe_perror and safe_append
This no longer works on new glibcs because they removed sys_errlist.
So just hardcode the relevant errno messages (and phrase them better).
Fixes #4183.
Co-authored-by: Johannes Altmanninger <aclopte@gmail.com>
The clang-tidy warning for pending_signals_t was about the operator=
return type being wrong (misc-unconventional-assign-operator).
Signed-off-by: Rosen Penev <rosenp@gmail.com>
We don't want to convert the input to a "wcstring &" because
"stage_variables" needs to have the same type as other stages, so we
can use it in a loop. Communicate that to clang-tidy.
We also don't want to take "wcstring &&". As the Google style guide
states, it's not really beneficial here, and it potentially hurts
readability because it's a relatively obscure feature.
The rest of our code contains a bunch of && parameters. We might
want to get rid of some of them.
Closes #8227
clang-tidy sees a std::move to a const-ref parameter and wrongly
believes it to be pointless. The copy constructor, however, is deleted.
Signed-off-by: Rosen Penev <rosenp@gmail.com>
This disables job control inside command substitutions. Prior to this
change, a cmdsub might get its own process group. This caused it to fail
to cancel loops properly. For example:
while true ; echo (sleep 5) ; end
could not be control-C cancelled, because the signal would go to sleep,
and so the loop would continue on. The simplest way to fix this is to
match other shells and not use job control in cmdsubs.
Related: #1362
The same hack that is used for `pkg remove <foo>` is required here, too.
Due to the massive number of results, we use `head -n 250` to prevent
the completion from hanging or the shell from being overwhelmed by
too many possible completions. However, that would only generate matches
from the first 250 packages, rather than printing the first 250
packages that match.
[ci skip]
The previous layout confused me for a minute, as it suggested it was
possible for `pipe_next_read` to be moved twice (once in the first
conditional block, then again when the deferred-process conditional
called `continue` - if and only if the deferred process *was* the last
process in the job). This patch clarifies that can't be the case.
`pipe_next_read` is moved in the body of the loop and not
re-initialized on the last iteration. However, we call
`pipe_next_read.close()` after the loop, which is undefined behavior (as
it's been moved from).
In the best case, the compiler passed the address of our copy of the
struct to `exec_process_in_job` and beyond, it went out of scope there,
the value of `fd` was set to closed (minus one), and our explicit call
to `.close()` does nothing.
In the worst case, the compiler reuses the storage of the moved-from
struct for something else, and our call to `.close()` ends up closing
some other `fd` (valid or invalid) and things break.
Aside from the fact that we obviously don't need to close it since it's
not assigned for the last process in the job, it's a RAII object so we
don't have to worry about manually closing it in the first place.
`escape_code_length()` was converted from returning a `size_t` to
returning a `maybe_t<size_t>` but that subtly broke all existing call
sites by forcing all input to go through the slow path of assuming a
zero-length escape sequence was found.
This is because all callers predicated their next action on what amounts
to `if (escape_code_length(...))`. That correctly skipped the slow
path when `escape_code_length` returned zero, but after the conversion
to `maybe_t` the result was not `maybe_t::none()` but rather
`maybe_t::some(0)`, due to coercion of the `size_t` local
`esc_seq_len` to the `maybe_t<size_t>` return value - and
`maybe_t::some(0)`, when coerced to a boolean, is *true* rather than
false.
The regression was introduced in 7ad855a844
and did not ship in any released version, so no harm, no foul.
This is required to use placement new. It's not an issue for fish
itself, as it gets picked up from elsewhere, but this way one can use it
directly in a C++ test.
Now that functions know where they were loaded from, there is no point
in marking them as loaded from a temporary file that has since been
removed. Source the function via a redirect instead.
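A rough sketch of the difference; $tmpfile here is just an illustrative
stand-in for whatever temporary file held the definition:
```fish
source $tmpfile     # records $tmpfile as the file the function was loaded from
source < $tmpfile   # reads the definition from stdin, so no stale path is recorded
```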
This uses a bit of javascript to add copy buttons, so you can directly
copy all the code in a given block to the clipboard!
For code blocks without prompts, it just copies all the code; for
blocks with prompts, it copies all the lines after prompts, under the
assumption that that's the code to be executed.
That gives you *all* the command lines at once, so they aren't
interleaved with output like they are in the html, but good enough.
The buttons appear on hover, so they aren't usable on phones; but you
won't really have a clipboard on a phone anyway, and I have no idea how
to keep them from always sitting in front of the text otherwise, so: Eh.
I'm not in love with the javascript here, but it'll do.
* commandline: Add --is-valid option to query whether it's syntactically complete
This means querying whether the commandline is in a state where it
could be executed, because our `execute` bind function also inserts a
newline if it isn't.
One case that's not handled right now: `execute` also expands
abbreviations, and those can technically make the commandline invalid
again.
Unfortunately we have no real way to *check* that without doing the
replacement.
Also, since abbreviations are only available in command position, when
you _execute_ them the commandline will most likely be valid.
This is enough to make transient prompts work:
```fish
function reset-transient --on-event fish_postexec
    set -g TRANSIENT 0
end

function maybe_execute
    if commandline --is-valid
        set -g TRANSIENT 1
        commandline -f repaint
    else
        set -g TRANSIENT 0
    end
    commandline -f execute
end

bind \r maybe_execute
```
and then in `fish_prompt` react to $TRANSIENT being set to 1.
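For example, a hypothetical minimal prompt (the actual prompt text is up
to you):
```fish
# Hypothetical prompt reacting to the $TRANSIENT flag set by maybe_execute above.
function fish_prompt
    if test "$TRANSIENT" = 1
        printf '> '
    else
        printf '%s %s> ' (whoami) (prompt_pwd)
    end
end
```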