Prior to this fix, autoloads (such as function and completion autoloads)
would check their path variable (like fish_function_path) on every
autoload request. Switch to invalidating the cached paths in response to
the variable changing.
This improves time on a microbenchmark:
    for i in (seq 50000)
        setenv test_env val$i
    end
from ~11 seconds to ~6.5 seconds.
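A minimal C++ sketch of the idea (the class and member names here are
illustrative, not the actual fish API): the cached paths are only rebuilt
after a change notification marks them stale, so the common case is a
cheap flag check instead of re-reading the variable.

    #include <string>
    #include <vector>

    // Illustrative sketch: rebuild the cached search paths only when the
    // backing variable (e.g. fish_function_path) is reported as changed.
    class autoload_t {
        std::vector<std::wstring> cached_paths;  // last-seen value of the path variable
        bool paths_are_stale = true;             // set by the variable-change hook

       public:
        // Called whenever the path variable changes.
        void invalidate_paths() { paths_are_stale = true; }

        // Called on every autoload request; cheap unless an invalidation happened.
        const std::vector<std::wstring> &get_paths(
            const std::vector<std::wstring> &current_value) {
            if (paths_are_stale) {
                cached_paths = current_value;  // any expensive recomputation goes here
                paths_are_stale = false;
            }
            return cached_paths;
        }
    };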
The job control functions were a bit messy; in particular,
`set_child_group`'s name implies that all it does is set the child
group, but in reality it set the child group (via `setpgid`),
set the job's pgrp if it had not yet been set, and possibly assigned
control of the terminal to the newly created job.
These have been split into separate functions. Now `set_child_group`
does just (and only) that, `maybe_assign_terminal` might assign the
terminal to the new pgrp, and `on_process_created` is used to set the
job properties the first time an external process is created. This might
also speed things up (but probably not noticeably) as there are no more
repeated calls to `getpgrp()` if JOB_CONTROL is not set.
Additionally, this closes #4715 by no longer unconditionally calling
`setpgid` on all new processes, including those created by `posix_spawn`,
which does not need this since the child's pgrp is set in the
arguments to that API call.
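A rough sketch of the resulting split; the three function names come from
this change, but the bodies, parameters, and job fields below are
simplified assumptions rather than the actual fish code:

    #include <unistd.h>

    // Simplified sketch of the three helpers after the split.
    struct job_t {
        pid_t pgid = -1;             // process group for the job
        bool wants_terminal = false;
    };

    // Record job properties the first time an external process is created.
    static void on_process_created(job_t *j, pid_t child_pid) {
        if (j->pgid == -1) j->pgid = child_pid;  // first process leads the group
    }

    // Place the child in the job's process group - and nothing else.
    static bool set_child_group(const job_t *j, pid_t child_pid) {
        return setpgid(child_pid, j->pgid) == 0;
    }

    // Possibly hand the terminal to the job's process group.
    static void maybe_assign_terminal(const job_t *j) {
        if (j->wants_terminal) tcsetpgrp(STDIN_FILENO, j->pgid);
    }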
This merges a set of changes that switch functions from executing source
to executing an already parsed tree (the same tree used when the function
is defined). This speeds up function execution, reduces memory usage, and
avoids annoying double parsing.
A simple microbenchmark of function execution:
    for i in (seq 10000)
        setenv test_env val$i
    end
time improves from 1.63 to 1.32 seconds.
This switches function execution from the function's source code to
its stored node and pstree. This means we no longer have to re-parse
the function every time we execute it.
The idea is that we can return the shared pointer directly, avoiding
lots of annoying little getter functions that each need to take locks.
It also helps to pull together the data structures used to initialize
functions versus store them.
This concerns block nodes with redirections, like
    begin ... end | grep ...
Prior to this fix, we passed in a pointer to the node. Switch to passing
in the tnode and parsed source ref. This improves type safety and better
aligns with the function-node plans.
Prior to this fix, functions stored a string representation of their
contents. Switch them to storing a parsed source reference and the
tnode of the contents. This is part of an effort to avoid reparsing
a function's contents every time it executes.
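A simplified sketch of the storage change (the struct and field names here
are illustrative stand-ins; the real fish types are parsed_source_ref_t and
a tnode over the function body):

    #include <memory>

    // Illustrative stand-ins for fish's parsed source and tnode types.
    struct parsed_source_t { /* parse tree plus the source text it came from */ };
    using parsed_source_ref_t = std::shared_ptr<const parsed_source_t>;
    struct body_node_t { /* node for the function body inside the tree */ };

    struct function_info_t {
        // Before: std::wstring definition;   // re-parsed on every call
        parsed_source_ref_t parsed_source;    // parsed once, when the function is defined
        const body_node_t *body = nullptr;    // points into *parsed_source
    };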
Add a fish-specific wrapper around std::mutex that records whether it is
locked in a bool. This is to make ASSERT_IS_LOCKED() simpler (it can just
check the boolean instead of relying on try_lock), which will make Coverity
Scan happier.
Some details: Coverity Scan was complaining about an apparent double-unlock
because it is unaware of the semantics of try_lock(). Specifically, fish
asserts that a lock is locked by asserting that try_lock fails; if it
succeeds, fish prints an error and then unlocks the lock (so as not to leave
it locked). This unlock is of course correct, but it confused Coverity Scan.
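A minimal sketch of such a wrapper (the real fish type has a few more
members and is used together with scoped lock guards):

    #include <cassert>
    #include <mutex>

    // Sketch: a mutex that remembers whether it is currently held, so the
    // assertion can check a flag instead of abusing try_lock().
    class fish_mutex_t {
        std::mutex lock_;
        bool is_locked_ = false;

       public:
        void lock() {
            lock_.lock();
            is_locked_ = true;
        }
        void unlock() {
            is_locked_ = false;
            lock_.unlock();
        }
        // no try_lock(); callers use lock()/unlock() or a scoped guard
        bool is_locked() const { return is_locked_; }
    };

    #define ASSERT_IS_LOCKED(m) assert((m).is_locked())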
Use wcstring/string instead of a character array. The variable
`term_env` was not being freed before the function exited.
Fixes defect 7520324 in Coverity Scan.
This merges a sequence of commits that undoes the SIGCONT orchestration
used for WSL compatibility. The essential problem is this: on Unix and
Linux, an exited process remains valid until it is reaped; you can, say,
make an exited process a group leader. But in Windows, when a process
exits, it is gone, and most syscalls (other than, say, waitpid) fail for
it. This is known as the WSL Rick Grimes problem.
This manifests as various race conditions in WSL between a parent operating
on a child, and the child exiting. Prior to this merge, these were
addressed by having the child wait for the parent to send it a SIGCONT.
This resolved the race.
This merge removes this approach and replaces it with a simpler mechanism
that leverages the existing keepalive machinery. A keepalive process is
created for all platforms when we have a pipeline that contains a builtin.
This is necessary to keep the whole process group alive. The fix is, on
WSL, we always create a keepalive and make it the group leader. Because the
keepalive does not call exec and its lifetime is bound to a C++ stack
frame, it is easy to resolve the race.
This improves performance a bit (except on WSL), since child processes no
longer have to synchronize with the parent process, but the big win is
simplicity. This removes the notion of the single global stopped child, of
which there could only be one, and which had to be resumed at the right
time(s), of which there were several.
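The keepalive itself is conceptually tiny; a rough sketch (not the actual
fish implementation, which lives inside its process-execution machinery):

    #include <signal.h>
    #include <unistd.h>

    // Rough sketch: fork a child that only waits to be killed.  Its pid can
    // safely serve as the process group leader, even on WSL, because it never
    // exits on its own, and its lifetime is tied to a C++ stack frame.
    struct keepalive_t {
        pid_t pid = -1;

        void create() {
            pid = fork();
            if (pid == 0) {
                while (true) pause();  // child: idle until the parent kills us
            }
        }

        ~keepalive_t() {
            if (pid > 0) kill(pid, SIGKILL);  // parent: tear the child down
        }
    };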
Keepalive processes are typically killed by the main shell process.
However, if the main shell exits first, the keepalive may linger. On WSL,
keepalives are used more often, and the lingering keepalives both leak
and prevent the tests from finishing.
Have keepalives poll for their parent process ID and exit when it
changes, so they can clean themselves up. The polling frequency can be
low.
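The child's loop could look roughly like this (illustrative only; the
interval and the exact check are not taken from the fish source):

    #include <unistd.h>

    // Illustrative child loop: exit once the original parent goes away.
    // getppid() changes when the parent dies and the child is reparented.
    // A coarse interval is fine; this only exists to avoid leaking keepalives.
    static void keepalive_child_loop(pid_t original_parent) {
        while (getppid() == original_parent) {
            usleep(250000);  // 250 ms between checks
        }
        _exit(0);
    }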
Have WSL use a keepalive whenever the first process is external.
This works around the fact that WSL prohibits setting an exited
process as the group leader.
* 🚀
* prepare to merge into fish-shell
* split into different files
* remove deprecated option
* capitalize descriptions
* shorten the description for ansible
* update ansible-playbook (and ansible for consistency)
* update version on vault and galaxy
When the pager wants to use the full screen to show many options, it reserves
space at the top so the command stays visible. Previously it pretended the
command was a prompt and engaged the prompt layout mechanism to compute these
lines. Instead, let's just count newlines, since escape sequences within
commands are very rare.
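Counting the reserved lines then reduces to something like the following
(a sketch with a hypothetical helper name, not the exact fish code):

    #include <algorithm>
    #include <string>

    // Sketch: reserve one line plus one per embedded newline; good enough
    // because escape sequences inside commands are rare.
    static size_t lines_for_command(const std::wstring &cmd) {
        return 1 + static_cast<size_t>(std::count(cmd.begin(), cmd.end(), L'\n'));
    }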