This is the opposite of the usual "pipe into grep-the-function" case:
here my `pbpaste` emitted enough content to exceed the OS pipe buffer.
The `block_on_fg` condition was just `send_sigcont` in the original job
control rewrite, and substituting WAIT_BY_PROCESS for it on its own was
incorrect.
However, this requires always blocking when select_try returns an
interrupted/incomplete read; otherwise fish doesn't block and stays
running in a tight loop in the background (incorrectly writing to a
terminal it doesn't own under higher debug levels), which I *think* is OK.
Instantiate the std::locale instance used within the character comparison
callback outside the lambda and take a reference to it, instead of
creating a locale object for each character in the sequence.
This is part of a very tight loop with lots of inputs during the
evaluation of fuzzy string matches for completions/autosuggestions and
is worth optimizing.
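
A minimal sketch of the pattern (illustrative, not fish's actual code):
the `std::locale` is constructed once and captured by reference, rather
than inside the per-character lambda.

```cpp
#include <algorithm>
#include <locale>
#include <string>

bool icase_equal(const std::wstring &a, const std::wstring &b) {
    const std::locale &loc = std::locale();  // constructed once, not per character
    return a.size() == b.size() &&
           std::equal(a.begin(), a.end(), b.begin(),
                      [&loc](wchar_t x, wchar_t y) {
                          return std::tolower(x, loc) == std::tolower(y, loc);
                      });
}
```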
This was introduced in 1b1bc28c0a but did
not cause any problems until the job control refactor, which caused it
to attempt to signal the calling `exec` builtin's own (invalid) pgrp
with SIGHUP.
Also improve debugging for `j->signal()` failures by printing the
signal we tried to send in case of error, rename the function to
`hup_background_jobs`, and move it from `reader.h`/`reader.cpp` to
`proc.h`/`proc.cpp`.
When a function is encountered by exec_job, a new context is created for
its execution from the ground up, with a new job and all, ultimately
resulting in a recursive call to exec_job from the same (main) thread.
Each time exec_job encounters a new job with external commands that
needs terminal control, it creates a new pgrp and gives it control of
the terminal (tcsetpgrp & co); this effectively takes control away from
the previously spawned external commands, which may be (and likely are)
expecting to still have terminal access.
This commit attempts to detect when such a situation arises by handling
recursive calls to exec_job (which can only happen if the pipeline
included a function) by borrowing the pgrp from the (necessarily still
active) parent job and spawning new external commands into it.
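
A sketch of the idea, with assumed names (`parent_job_pgid` is not
necessarily fish's real identifier): a nested exec_job joins the parent
job's process group instead of creating its own.

```cpp
#include <sys/types.h>

pid_t pgrp_for_new_process(pid_t parent_job_pgid, pid_t child_pid) {
    if (parent_job_pgid > 0) {
        // Recursive case: borrow the still-active parent pgrp so terminal
        // ownership (tcsetpgrp) is never taken away from it.
        return parent_job_pgid;
    }
    // Top-level case: the first spawned process leads a fresh pgrp.
    return child_pid;
}
```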
When a parent job spawns new jobs during the evaluation of a function
(which shouldn't be the case in the first place), we end up with two
distinct jobs sharing one pgrp (to fix #3952). This can lead to early
termination of the pgrp if the parent job's finished children are reaped
before further processes, in either the parent job or the child jobs,
can join it.
While the parent job is under construction, require that waitpid(2)
calls for the child job be done by process id and not job pgrp.
Closes #3952.
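
A sketch of the distinction (assumed names, not fish's actual code):
waiting by pgrp can reap a child belonging to the sibling job sharing
the group, while waiting by pid targets exactly one process.

```cpp
#include <sys/types.h>
#include <sys/wait.h>

bool try_reap_by_pid(pid_t child_pid, int *status) {
    // waitpid(-pgid, ...) would reap *any* member of the shared group,
    // possibly a process from the not-yet-constructed sibling job;
    // waiting by pid only ever reaps this one child.
    return waitpid(child_pid, status, WNOHANG) == child_pid;
}
```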
Use SIGCHLD to determine whether or not waitpid(2) calls can be elided,
but only with extreme caution. If we receive SIGCHLD but are not able to
reap all jobs, we need to iterate through them again.
For this to work, we need to make sure that we reap all children that we
can reap after a SIGCHLD, i.e. it's not OK to just reap the first and
return or else we can never clear the dirty state flag.
In all cases, as expensive as a call to waitpid() may be, if a child
process is available for reaping it is always cheaper to wait on it and
reap it than to call select_try() and end up timing out.
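
A sketch of the pattern (assumed names; fish's real bookkeeping
differs): a SIGCHLD handler sets a dirty flag, and the reaper must drain
every reapable child before the flag can be considered clear.

```cpp
#include <signal.h>
#include <sys/wait.h>

static volatile sig_atomic_t s_child_dirty = 0;

// Installed for SIGCHLD via sigaction() (installation omitted here).
static void sigchld_handler(int) { s_child_dirty = 1; }

void reap_all_reapable(void) {
    if (!s_child_dirty) return;  // no SIGCHLD since last pass: elide waitpid()
    s_child_dirty = 0;           // a SIGCHLD arriving mid-loop re-dirties it
    int status;
    // Reap *all* available children: stopping after the first could leave
    // an exited child behind with no future SIGCHLD to re-flag it.
    while (waitpid(-1, &status, WNOHANG) > 0) {
    }
}
```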
The old code was rather haphazard with regard to error handling, and
would mutate state ahead of operations that could fail, with no viable
way to recover.
Convert `select_try()` to return a well-defined enum describing its
state, and handle each of the three possible cases with clear reasons
why we are blocking or not blocking in each subsequent call to
`process_mark_finished_children()`.
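
A sketch of such an enum (the case names here are assumed, not
necessarily fish's): each state dictates whether the next call to
`process_mark_finished_children()` should block.

```cpp
enum class select_try_t {
    DATA_READY,     // output was read: check for finished processes, don't block
    TIMEOUT,        // nothing became readable: safe to block on the foreground job
    IOCHAIN_EMPTY,  // no file descriptors to select on at all
};
```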
* Use the newly-introduced signal_block_t RAII wrapper (see the sketch
  after this list)
* Remove EINTR loops as all signals are blocked
* Clean up control flow thanks to RAII wrappers
* Rename parameter to clarify what it does and update docs accordingly
* Update outdated comments referencing SIGSTOP code that was removed a
long time ago.
* Remove no-op CHECK_BLOCK() call
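
A sketch of what such an RAII wrapper looks like (assuming the name
signal_block_t from the list above; fish's real one may differ):

```cpp
#include <signal.h>

class signal_block_t {
    sigset_t saved_{};

   public:
    signal_block_t() {
        sigset_t all;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &saved_);  // block all signals
    }
    // Restored on destruction, so early returns and exceptions
    // can't leave signals blocked.
    ~signal_block_t() { sigprocmask(SIG_SETMASK, &saved_, nullptr); }
    signal_block_t(const signal_block_t &) = delete;
    signal_block_t &operator=(const signal_block_t &) = delete;
};
```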
* Convert JOB_* enums to scoped enums
* Convert standalone job_is_* functions to member functions
* Convert standalone job_{promote, signal, continue} to member functions
* Convert standalone job_get{,_from_pid} to `job_t` static functions
* Reduce usage of JOB_* enums outside of proc.cpp by using new
  `job_t::is_foo()` const helper methods instead (a sketch of the
  resulting shape follows below).
This patch is only a refactor and should not change any functionality or
behavior (both observed and unobserved).
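
A sketch of the resulting shape (illustrative names only; fish's real
`job_t` has more members):

```cpp
#include <sys/types.h>

enum class job_flag_t { FOREGROUND, CONSTRUCTED };  // formerly JOB_* defines

class job_t {
   public:
    bool is_foreground() const;          // was job_is_foreground(j)
    bool is_constructed() const;         // replaces raw JOB_* flag checks
    void promote();                      // was job_promote(j)
    bool signal(int sig);                // was job_signal(j, sig)
    static job_t *from_pid(pid_t pid);   // was job_get_from_pid(pid)
};
```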
* Debug level 3: describe all commands being executed (this is, after all,
  a shell, and one can argue that this is the most important debug
  information available)
* Debug level 4: details of execution, mainly fork vs no-fork and io
handling
Also introduce `j->preview()` to print a short descriptor of the job
based on the head of the first process, both so we don't overwhelm the
output with needless repetition and so we don't have to rely on
distinguishing between repeated, non-unique/non-monotonic job ids that
are often recycled within a single "execution cycle" (pressing enter
once).
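
A minimal sketch of what such a preview might look like (hypothetical
implementation, not fish's actual code): take the first word of the
job's command text.

```cpp
#include <string>

std::wstring job_preview(const std::wstring &command_line) {
    // e.g. "echo $PATH | grep bin" previews as just "echo"
    return command_line.substr(0, command_line.find(L' '));
}
```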
Per @ridiculousfish's suggestions in #5219,
`process_mark_finished_children()` has been updated to work in an easier-
to-follow manner. Its behavior is now straightforward: it always checks
for finished processes, but only blocks if `block_on_fg` is true
(sketched below).
We're not using the SIGCHLD count in s_sigchld_generation_cnt for
anything any more, as it's not actually a reliable metric since we can
experience one SIGCHLD as a result of two processes exiting (see #1768),
but only reap one of them if the other is in a not-fully-constructed job
(see #5219), a state we cannot possibly detect without calling
`waitpid()` on all child processes, which we are explicitly avoiding.
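
A sketch of the resulting logic (assumed names): always poll for
finished children, and only omit WNOHANG (i.e. block) when waiting on
the foreground job.

```cpp
#include <sys/types.h>
#include <sys/wait.h>

pid_t check_children(pid_t pgid, bool block_on_fg, int *status) {
    int options = WUNTRACED;
    if (!block_on_fg) options |= WNOHANG;  // poll without blocking
    return waitpid(-pgid, status, options);
}
```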
We never insert elements into the middle of a job list, only move
elements to the top. While that can be done "efficiently" with a list, it
can be done faster with a deque, which also won't thrash the cache when
enumerating over jobs.
This speeds up enumeration in the critical path in
`process_mark_finished_children()`.
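
A sketch of the access pattern (illustrative, generic over the job
type): jobs are only ever moved to the front, never inserted mid-list,
so `std::deque` beats `std::list` and keeps iteration cache-friendly.

```cpp
#include <algorithm>
#include <deque>

template <typename Job>
void promote_job(std::deque<Job> &jobs, const Job &j) {
    auto it = std::find(jobs.begin(), jobs.end(), j);
    if (it != jobs.end()) {
        Job tmp = *it;
        jobs.erase(it);
        jobs.push_front(tmp);  // to the top; middle insertions never happen
    }
}
```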
* Instead of reaping all child processes when we receive a SIGCHLD, try
  reaping only processes belonging to process groups from fully-
  constructed jobs, which should eliminate the need for the keepalive
  process entirely (WSL's lack of zombies notwithstanding), as completed
  processes are now not reaped until the job has been fully constructed
  (i.e. all processes launched), which means their process group is
  still around for new processes to join.
* When `tcgetpgrp()` calls return 0, attempt to `tcsetpgrp()` before
  invoking failure handling code (see the sketch below).
* When forking a builtin and not running interactively, do not bail if
unable to set/restore terminal attributes.
Fixes #4178. Fixes #3805. Fixes #5210.
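
A sketch of the tcgetpgrp() == 0 handling (assumed control flow, not
fish's exact code): when no process group owns the terminal, try to
claim it before giving up.

```cpp
#include <sys/types.h>
#include <unistd.h>

bool ensure_terminal_owner(int tty_fd, pid_t our_pgrp) {
    pid_t owner = tcgetpgrp(tty_fd);
    if (owner == 0) {
        // No foreground pgrp: attempt to take ownership ourselves
        // before running any failure handling code.
        return tcsetpgrp(tty_fd, our_pgrp) == 0;
    }
    return owner == our_pgrp;
}
```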
This is to avoid development versions of fish 3.0 freaking out when the
file format is changed. We now have better support for future universal
variable formats, so it's unlikely we'll have to change the file name again.
We do a bunch of escaping beforehand to make `eval` work, and that needs to be removed as well or fragment URLs don't work.
This reverts commit e9568069a7.
Use clang/clang++'s own autocompletion support to complete arguments.
This is rather convoluted because clang generates autocompletions for a
portion of the current token rather than the entire token: e.g. while
`--st` will autocomplete to `--std=` (which is fine by fish), `--std=g`
will autocomplete to `gnu...` without the leading `--std=`, which breaks
fish's support for the completion.
Additionally, on systems where clang/clang++ is the system compiler
(such as FreeBSD), it is very common for users to invoke a newer version
of clang/clang++ installed as clang[++]-NN instead of clang. We use a
monkey-patched version of `complete -p` to support that without breaking
(future) completions for commands like `clang-format`.
Closes #4174.
In private mode, access to previous history is blocked and new history
does not persist and is only available for the duration of the current
session.
This mode can be used when it is not desirable for commandline history
to leak into a session, e.g. via autocomplete or when it is desirable to
test the behavior of fish in the absence of history items without
permanently clearing the history.
I'm sure there are a lot more features that can be incorporated into
private mode, such as restricting access to certain user-specific
configuration files, etc.
This addresses a lot of the concerns raised in #1363 (which was later
changed to track mosh-specific problems). See also #102.
When we discard output because there's been too much, we print a
warning, but subsequent uses of the same buffer still discard.
Now we explicitly reset the flag, so we warn once and everything works
normally afterwards.
Fixes #5267.
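
A sketch of the fix (hypothetical names, not fish's actual buffer API):
reset the discard flag after warning so later output is kept again.

```cpp
struct output_buffer_t {
    bool discarded = false;  // set when the size limit was exceeded
};

void warn_and_reset(output_buffer_t &buf) {
    if (buf.discarded) {
        // Warn once about the truncated output here, then clear the
        // flag so subsequent writes to the same buffer aren't dropped.
        buf.discarded = false;
    }
}
```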