More ugliness with types that cxx bridge can't recognize as being POD. Use
pointers to get/set `termios` values, with an assert to make sure we're using
identical definitions on both sides (in C++ from the system headers, in Rust
from the libc crate as exported).
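A minimal sketch of the shape of this, assuming a hypothetical
`rust_termios_size()` helper exported from the Rust side:

    #include <termios.h>
    #include <cassert>
    #include <cstddef>
    #include <cstring>

    extern "C" size_t rust_termios_size();  // hypothetical: reported by the Rust side

    void set_termios_from_rust(termios *dst, const termios *src_from_rust) {
        // A raw copy is only sound if both definitions are identical.
        assert(sizeof(termios) == rust_termios_size());
        std::memcpy(dst, src_from_rust, sizeof(termios));
    }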
I don't know why cxx bridge doesn't allow `SharedPtr<OpaqueRustType>`, but we
can work around it in C++ by converting a `Box<T>` to a `shared_ptr<T>`, then
converting it back when it needs to be destructed. I can't find a clean way of
doing this from the cxx bridge wrapper, so for now it needs to be done manually
in the C++ code.
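Roughly, the manual workaround looks like this; the bridge function names here
are illustrative, not the real ones:

    #include <memory>

    struct OpaqueRustType;  // opaque to C++; only Rust knows the layout

    extern "C" OpaqueRustType *rust_new_boxed();        // Box::into_raw on the Rust side
    extern "C" void rust_drop_boxed(OpaqueRustType *);  // Box::from_raw, then drop

    std::shared_ptr<OpaqueRustType> make_shared_rust() {
        // The custom deleter hands the pointer back to Rust for destruction.
        return std::shared_ptr<OpaqueRustType>(rust_new_boxed(), rust_drop_boxed);
    }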
Types/values that are drop-in ready over FFI are renamed to match the old C++
names, but for types that now differ due to FFI difficulties I've kept the
`_ffi` suffix in the function names to indicate that this isn't the "correct"
way of using the types/methods.
A `block_t` instance is allocated for each live block in memory when executing
a script or snippet of fish code. While many of the members of `block_t` are
specific to a particular type of block, the `maybe_t<event_t>` member stands
out: it is unused except in the comparatively rare case of an event block, yet
88 of the 216 bytes of a `block_t` are set aside for it.
This patch reorders the `block_t` members in order of decreasing alignment,
bringing the size down to 208 bytes, then changes `maybe_t<event_t>` to
`shared_ptr<event_t>` instead of allocating room for the event on the stack.
This brings the runtime memory size of a `block_t` down to 136 bytes, a 37%
reduction.
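For illustration, here is the general technique (not the actual `block_t`
definition; sizes assume a typical 64-bit target):

    #include <memory>

    struct event_t;  // stand-in for fish's event type

    struct padded_t {
        char a;    // 1 byte + 7 bytes padding before 'b'
        double b;  // alignment 8
        char c;    // 1 byte + 7 bytes tail padding
    };             // sizeof == 24

    struct sorted_t {
        double b;  // highest-alignment member first
        char a;    // small members pack together at the end
        char c;
    };             // sizeof == 16

    // The second technique: keep only a pointer-sized handle in the block,
    // heap-allocating the rarely used event payload on demand.
    struct block_like_t {
        std::shared_ptr<event_t> event;  // 16 bytes vs. an 88-byte maybe_t<event_t>
        // ... remaining members, in decreasing-alignment order ...
    };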
I would like to investigate using inheritance and virtual methods to have a
`block_t` only include the values that actually make sense for the block rather
than always allocating some sort of storage for them and then only sometimes
using it. In addition to further reducing the memory, I think this could also be
a safer and saner approach overall, as it would make it very clear when and
where we can expect each block_type_t-dependent member to be present and
hold a value.
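Sketched out, that idea might look like this (hypothetical, not part of this
patch):

    struct event_t { /* ... */ };

    struct block_t {
        virtual ~block_t() = default;
        // ... members common to every block type ...
    };

    struct event_block_t final : block_t {
        event_t event;  // only event blocks pay for event storage
    };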
Let's hope this doesn't cause build failures for e.g. musl: I just
know it's good on macOS and our Linux CI.
It's been a long time.
One fix this brings is that I discovered we #include assert.h or cassert
in a lot of places. If those ever happen to be in a file that doesn't
include common.h, or appear before common.h gets included, we're
unwittingly working with the system 'assert' macro again, which may be
compiled out (e.g. when NDEBUG is defined) or at least behaves
differently on failure. We undef 'assert' and redefine it in common.h.
Those includes were all eliminated, except in one catch-22 spot for
maybe.h: it can't include common.h. A fix might be to
make a fish_assert.h that common.h *usually* exports.
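The common.h pattern is roughly the following; the failure handler's name here
is made up:

    #include <cassert>  // may define the system assert(); shadowed below

    // Hypothetical always-on handler; fish's real one logs and aborts.
    [[noreturn]] void fish_assert_fail(const char *expr, const char *file, int line);

    #undef assert
    #define assert(e) \
        (static_cast<bool>(e) ? void(0) : fish_assert_fail(#e, __FILE__, __LINE__))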
Intern'd strings were intended to be "shared" to reduce memory usage but
this optimization doesn't carry its weight. Remove it. No functional
change expected.
We store filenames in function definitions to indicate where the
function comes from. Previously these were intern'd strings. Switch them
to a shared_ptr<wcstring>, intending to remove intern'd strings.
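The shape of the change is roughly this (names illustrative):

    #include <memory>
    #include <string>

    using wcstring = std::wstring;  // fish's wide string alias

    // One shared allocation per source file, instead of a global intern table.
    using filename_ref_t = std::shared_ptr<const wcstring>;

    struct function_properties_t {
        filename_ref_t definition_file;  // nullptr when defined interactively
    };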
ESCAPE_ALL is not really a helpful name. Also it's the most common flag.
Let's make it the default so we can remove this unhelpful name.
While at it, let's add a default value for the flags argument, which helps
most callers.
The absence of ESCAPE_ALL makes it only escape nonprintable characters
(with some exceptions). We use this for displaying strings in the completion
pager as well as for the human-readable output of "set", "set -S", "bind"
and "functions".
No functional change.
Prior to this commit, setting a universal variable could trigger a sync
against the file, which in turn may modify other universal variables. But if
we want to support multiple environments, we need the parser to decide when to
sync uvars. Shift that decision to the parser itself: when a universal
variable is modified, we now just set a flag, and it's up to the (main) parser
when to pick it up. This is hopefully just a refactoring with no user-visible
changes.
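Conceptually, it's something like this simplified sketch (not the actual
interface):

    #include <atomic>

    static std::atomic<bool> uvars_need_sync{false};

    // Called when any universal variable is modified: just record the fact.
    void note_uvar_changed() { uvars_need_sync = true; }

    // Called by the (main) parser at a point where syncing is safe.
    void parser_sync_uvars_if_needed() {
        if (uvars_need_sync.exchange(false)) {
            // read the uvar file, merge changes, fire change events ...
        }
    }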
These macros were historically used only in internal error messages which
should never happen! Now we are able to enforce at compile time that they
never happen, so we can remove them.
No functional change here.
If we ever need any of these... they're in this commit:
fish_wcswidth_visible()
status_cmd_opts_t::feature_name
completion_t::is_naturally_less_than()
parser_t::set_empty_var_and_fire()
parser_t::get_block_desc()
parser_keywords_skip_arguments()
parser_keywords_is_block()
job_t::has_internal_proc()
We need special handling when reporting backtraces for commands run
during startup, i.e. config.fish. Previously we had a global variable;
make it local to the parser to eliminate a global.
No functional change here.
Cancellation groups were meant to reflect the following idea: if you ran a
simple block:
begin
    cmd1
    cmd2
end
then under job control, cmd1 and cmd2 would get separate groups; however if
either exits due to SIGINT or SIGQUIT we also want to propagate that to the
outer block. So the outermost block and its interior jobs would share a
cancellation group. However this is more complex than necessary; it's
sufficient for the execution context to just store an int internally.
This ought not to affect anything user-visible.
This disables job control inside command substitutions. Prior to this
change, a cmdsub might get its own process group. This caused it to fail
to cancel loops properly. For example:
while true ; echo (sleep 5) ; end
could not be control-C cancelled, because the signal would go to sleep,
and so the loop would continue on. The simplest way to fix this is to
match other shells and not use job control in cmdsubs.
Related: #1362
is_block is a field which supports 'status is-block', and also controls
whether notifications get posted. However there is no reason to store
this as a distinct field since it is trivially computed from the block
list. Stop storing it. No functional changes in this commit.
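Computed on demand, it amounts to something like this sketch (block type names
approximate):

    #include <vector>

    enum class block_type_t { top, subst, while_block /* , ... */ };

    // A parser is "in a block" if anything on the block stack is neither the
    // outermost scope nor a command substitution.
    bool parser_is_block(const std::vector<block_type_t> &stack) {
        for (block_type_t t : stack) {
            if (t != block_type_t::top && t != block_type_t::subst) return true;
        }
        return false;
    }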
In preparation for using wait handles in --on-process-exit events, factor
wait handles into their own wait handle store. Also switch them to
per-process instead of per-job, which is a simplification.
This is preparing to address the problem where fish cannot wait on a
reaped job, because it only looks at the active job list. Introduce the
idea of a "wait handle," which is a thing that `wait` can use to check if
a job is finished. A job may produce its wait handle on demand, and
parser_t will save the wait handle from wait-able jobs at the point they
are reaped.
This change merely introduces the idea; the next change makes builtin_wait
start using it.
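A wait handle can be as small as this hypothetical sketch:

    #include <memory>
    #include <sys/types.h>

    // A token that outlives its job, so `wait` can query a process even
    // after the job has been reaped off the active job list.
    struct wait_handle_t {
        pid_t pid{};
        bool completed{false};  // set when the process is reaped
        int status{};           // exit status, meaningful once completed
    };
    using wait_handle_ref_t = std::shared_ptr<wait_handle_t>;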
The startup profile goes to a separate file because that makes option parsing
easier and allows profiling both startup and the actual run at the same time.
The "normal" profile now contains only the profile data of the actual
run, which is much more useful - you can now profile a function by
running
fish -C 'source /path/to/thing' --profile /tmp/thefunction.prof -c 'thefunction'
and won't need to filter out extraneous information.
This concerns how "internal job groups" know to stop executing when an
external command receives a "cancel signal" (SIGINT or SIGQUIT). For
example:
while true
    sleep 1
end
The intent is that if any 'sleep' exits due to a cancel signal, the while
loop should exit too. This is why you can hit control-C to end the loop even
if the SIGINT is delivered to sleep and not to fish.
Here the 'while' loop is considered an "internal job group" (no separate
pgid, bash would not fork) while each 'sleep' is a separate external
command with its own job group, pgroup, etc. Prior to this change, after
running each 'sleep', parse_execution_context_t would check to see if its
exit status was a cancel signal, and if so, stash it into an int that the
cancel checker would check. But this became unwieldy: now there were three
sources of cancellation signals (that int, the job group, and fish itself).
Introduce the notion of a "cancellation group" which is a set of job
groups that should cancel together. Even though the while loop and sleep
are in different job groups, they are in the same cancellation group. When
any job gets a SIGINT or SIGQUIT, it marks that signal in its cancellation
group, which prevents running new jobs in that group.
This reduces the number of signals to check from 3 to 2; eventually we can
teach cancellation groups how to check fish's own signals and then it will
just be 1.
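In essence (a simplified sketch, not the actual class):

    #include <atomic>
    #include <memory>

    // Shared by every job group spawned from the same block of fish code.
    struct cancellation_group_t {
        std::atomic<int> cancel_signal{0};  // 0, or SIGINT/SIGQUIT

        void cancel_with_signal(int sig) { cancel_signal = sig; }
        bool should_cancel() const { return cancel_signal.load() != 0; }
    };
    using cancellation_group_ref_t = std::shared_ptr<cancellation_group_t>;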
This concerns code like the following:
while true ; sleep 100; end
Here 'while' is a "simple block execution" and does not create a new job,
or get a pgid. Each 'sleep' however is an external command execution, and
is treated as a distinct job. (bash is the same way). So `while` and
`sleep` are always in different job groups.
The problem comes about if 'sleep' is cancelled through SIGINT or SIGQUIT.
Prior to 2a4c545b21, if *any* process got a SIGINT or SIGQUIT, then fish
would mark a global "stop executing" variable. This obviously prevents
background execution of fish functions.
In 2a4c545b21, this was changed so only the job's group gets marked as
cancelled. However in the case of one job group spawning another, we
weren't propagating the signal.
This adds a signal to parse_execution_context which the parser checks after
execution. It's not ideal since now we have three different places where
signals can be recorded. However it fixes this regression which is too
important to leave unfixed for long.
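Schematically, something like (simplified):

    #include <csignal>

    class parse_execution_context_t {
        int cancel_signal_{0};

       public:
        // Invoked after each job completes; remembers a cancel signal if any.
        void job_exited_with_signal(int sig) {
            if (sig == SIGINT || sig == SIGQUIT) cancel_signal_ = sig;
        }
        // The parser checks this after execution to stop launching new jobs.
        int get_cancel_signal() const { return cancel_signal_; }
    };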
Fixes #7259
This can be used to determine whether the previous command produced a real
status, or just carried over the status from the command before it.
Backgrounded commands and variable assignments will not increment
status_generation; all other commands will.
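The rule is simple to express; a hypothetical sketch:

    #include <cstdint>

    struct status_state_t {
        int last_status{0};
        uint64_t status_generation{0};

        // 'real' is false for backgrounded commands and variable assignments,
        // which merely carry the previous status forward.
        void set_status(int status, bool real) {
            last_status = status;
            if (real) status_generation++;
        }
    };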