Prior to this, the tokenizer ran "one ahead": tokenizer_t::next()
would in fact return the token parsed on the previous call. Switch to parsing on demand
instead of running one ahead; this is simpler and prepares for tokenizer
changes.
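A minimal sketch of the on-demand shape (the token fields and the
whitespace-only splitting below are illustrative, not fish's actual
tokenizer):

    #include <cstddef>

    // Illustrative token: a view into the source string.
    struct tok_t {
        const wchar_t *begin = nullptr;
        size_t length = 0;
    };

    // On demand: next() does all of its work when called, rather than
    // handing back a token that was parsed one call earlier.
    class tokenizer_t {
        const wchar_t *cursor_;
      public:
        explicit tokenizer_t(const wchar_t *src) : cursor_(src) {}

        bool next(tok_t *out) {
            while (*cursor_ == L' ') cursor_++;   // skip separators
            if (*cursor_ == L'\0') return false;  // end of input
            out->begin = cursor_;
            while (*cursor_ != L' ' && *cursor_ != L'\0') cursor_++;
            out->length = static_cast<size_t>(cursor_ - out->begin);
            return true;
        }
    };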
Turns out the process-exit event is only ever used in conjunction with
`%self`. Make that explicit by just adding a new "fish_exit" event,
and deprecate the general process-exit machinery.
Fixes #4700.
The previous attempt to support newlines after pipes changed the lexer to
swallow newlines after encountering a pipe. This has two problems that are
difficult to fix:
1. comments cannot be placed after the pipe
2. fish_indent won't know about the newlines, so it will erase them
Address these problems by removing the lexer behavior, and replacing it
with a new parser symbol "optional_newlines" allowing the newlines to be
reflected directly in the fish grammar.
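A rough illustration of where the newlines are now handled (fish's real
parser is driven by its grammar, so the helper below is only a stand-in
for the optional_newlines symbol): newline tokens survive tokenization,
and the parser skips them only at points where the grammar allows it,
such as right after a pipe.

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Illustrative token kinds.
    enum class token_type { string, pipe, newline, end };

    class parser_t {
        std::vector<token_type> toks_;
        size_t idx_ = 0;
      public:
        explicit parser_t(std::vector<token_type> toks) : toks_(std::move(toks)) {}
        token_type peek() const {
            return idx_ < toks_.size() ? toks_[idx_] : token_type::end;
        }
        void pop() {
            if (idx_ < toks_.size()) idx_++;
        }

        // Stand-in for the "optional_newlines" production: zero or more
        // newline tokens, consumed by the parser rather than the lexer, so
        // comments and the original line breaks stay visible to fish_indent.
        void accept_optional_newlines() {
            while (peek() == token_type::newline) pop();
        }
    };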
Prior to this fix, if you attempt to complete from inside a quote and the
completion contained an entity that cannot be represented inside quotes
(e.g. \n, \r, \t, or \b), the result would be a broken mess of quotes. Rewrite
the implementation so that it exits the quotes, emits the correct unquoted
escape, and then re-enters the quotes.
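A simplified sketch of the escaping (fish's real routine also handles
variables, wildcards, and more; the function name and structure below are
illustrative only):

    #include <string>

    // Escape `s` for insertion at a point inside a quote character `quote`
    // (' or "). Characters that cannot appear inside quotes are emitted by
    // closing the quote, writing the unquoted escape, and reopening it.
    std::wstring escape_within_quote(const std::wstring &s, wchar_t quote) {
        std::wstring out;
        for (wchar_t c : s) {
            const wchar_t *esc = nullptr;
            switch (c) {
                case L'\n': esc = L"\\n"; break;
                case L'\r': esc = L"\\r"; break;
                case L'\t': esc = L"\\t"; break;
                case L'\b': esc = L"\\b"; break;
            }
            if (esc) {
                out += quote;  // exit the quote
                out += esc;    // unquoted escape
                out += quote;  // re-enter the quote
            } else {
                if (c == quote || c == L'\\') out += L'\\';
                out += c;
            }
        }
        return out;
    }

For example, completing a candidate containing a tab while the cursor sits
inside "..." now yields something like "foo"\t"bar" instead of corrupted
quoting.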
Properly escape literal tildes in tab completion results. Currently we
always escape tildes in unquoted arguments; in the future we may escape
only leading tildes.
Fixes #2274.
Prior to this fix, autoloads like function and completion autoloads
would check their path variable (like fish_function_path) on every
autoload request. Switch to invalidating it in response to the variable
changing.
This improves time on a microbenchmark:
    for i in (seq 50000)
        setenv test_env val$i
    end
from ~11 seconds to ~6.5 seconds.
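A sketch of the caching idea (the class and helper below are hypothetical,
not fish's actual autoload_t API): keep the expanded directory list around
and only rebuild it when a variable-change notification marks it dirty.

    #include <string>
    #include <vector>

    class autoload_cache_t {
        std::vector<std::wstring> dirs_;
        bool dirty_ = true;
      public:
        // Called from the variable-change handler, e.g. when
        // fish_function_path changes.
        void invalidate() { dirty_ = true; }

        // Autoload requests read the cached list instead of re-expanding
        // the path variable on every request.
        const std::vector<std::wstring> &dirs() {
            if (dirty_) {
                dirs_ = read_path_variable();
                dirty_ = false;
            }
            return dirs_;
        }
      private:
        // Hypothetical helper that expands the path variable into dirs.
        std::vector<std::wstring> read_path_variable();
    };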
The job control functions were a bit messy; in particular,
`set_child_group`'s name implies that all it does is set the child
group, but in reality it set the child group (via `setpgid`), set the
job's pgrp if it hadn't already been set, and possibly assigned control
of the terminal to the newly-created job.
These have been split into separate functions. Now `set_child_group`
does just (and only) that, `maybe_assign_terminal` might assign the
terminal to the new pgrp, and `on_process_created` is used to set the
job properties the first time an external process is created. This might
also speed things up (but probably not noticeably) as there are no more
repeated calls to `getpgrp()` if JOB_CONTROL is not set.
Additionally, this closes #4715 by no longer unconditionally calling
`setpgid` on all new processes, including those created by `posix_spawn`,
which does not need it since the child's pgrp is set in the arguments
to that API call.
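A rough sketch of the split (the structs and bodies are heavily
simplified; only the three function names come from the actual change):

    #include <unistd.h>

    struct job_t {
        pid_t pgid = -1;
        bool job_control = false;
        bool wants_terminal = false;
    };

    // Record job properties the first time an external process is created;
    // getpgrp() is consulted at most once per job when job control is off.
    void on_process_created(job_t *j, pid_t child_pid) {
        if (j->pgid == -1) {
            j->pgid = j->job_control ? child_pid : getpgrp();
        }
    }

    // Does just (and only) what the name says: set the child's process group.
    int set_child_group(const job_t *j, pid_t child_pid) {
        return j->job_control ? setpgid(child_pid, j->pgid) : 0;
    }

    // Possibly hand the terminal to the job's pgrp (foreground jobs only).
    int maybe_assign_terminal(const job_t *j) {
        return j->wants_terminal ? tcsetpgrp(STDIN_FILENO, j->pgid) : 0;
    }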
This merges a set of changes that switch functions from executing source
to executing an already parsed tree (the same tree used when the function
is defined). This speeds up function execution, reduces memory usage, and
avoids annoying double parsing.
A simple microbenchmark of function execution:
    for i in (seq 10000)
        setenv test_env val$i
    end
time improves from 1.63 to 1.32 seconds.
This switches function execution from the function's source code to
its stored node and pstree. This means we no longer have to re-parse
the function every time we execute it.
The idea is that we can return the shared pointer directly, avoiding
lots of annoying little getter functions that each need to take locks.
It also helps to pull together the data structures used to initialize
functions versus those used to store them.
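A sketch of the shape this enables (the names are illustrative): the
function's properties live in one immutable, shared struct, so a single
locked lookup hands back everything and callers read fields without
further locking.

    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    struct function_properties_t {
        std::wstring description;
        bool shadow_scope = true;
        // ...definition node, source file, etc...
    };
    using function_properties_ref_t =
        std::shared_ptr<const function_properties_t>;

    class function_table_t {
        std::mutex lock_;
        std::map<std::wstring, function_properties_ref_t> funcs_;
      public:
        // One locked lookup returns the whole immutable bundle, replacing a
        // pile of per-field getters that each had to take the lock.
        function_properties_ref_t get_props(const std::wstring &name) {
            std::lock_guard<std::mutex> g(lock_);
            auto it = funcs_.find(name);
            return it == funcs_.end() ? nullptr : it->second;
        }
    };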
This concerns block nodes with redirections, like
    begin ... end | grep ...
Prior to this fix, we passed in a pointer to the node. Switch to passing
in the tnode and parsed source ref. This improves type safety and better
aligns with the function-node plans.
Prior to this fix, functions stored a string representation of their
contents. Switch them to storing a parsed source reference and the
tnode of the contents. This is part of an effort to avoid reparsing
a function's contents every time it executes.
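Roughly, the stored shape becomes something like the following (simplified
stand-ins for fish's parsed_source_ref_t and tnode; the real types carry a
full parse tree and a typed node handle):

    #include <memory>
    #include <string>

    // Shared, already-parsed source: the text plus its parse tree.
    struct parsed_source_t {
        std::wstring src;
        // ...parse tree omitted...
    };
    using parsed_source_ref_t = std::shared_ptr<const parsed_source_t>;

    // A function definition keeps the parsed source alive and remembers
    // which node within the tree is its body, so execution reuses the tree
    // instead of reparsing the source string each time the function runs.
    struct function_definition_t {
        std::wstring name;
        parsed_source_ref_t parsed_source;
        size_t body_node_idx = 0;  // illustrative handle to the contents node
    };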
Add a fish-specific wrapper around std::mutex that records whether it is
locked in a bool. This is to make ASSERT_IS_LOCKED() simpler (it can just
check the boolean instead of relying on try_lock) which will make Coverity
Scan happier.
Some details: Coverity Scan was complaining about an apparent double-unlock
because it's unaware of the semantics of try_lock(). Specifically fish
asserts that a lock is locked by asserting that try_lock fails; if it
succeeds fish prints an error and then unlocks the lock (so as not to leave
it locked). This unlock is of course correct, but it confused Coverity Scan.
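The wrapper is roughly of this shape (simplified; the real class has more
to it):

    #include <cassert>
    #include <mutex>

    class fish_mutex_t {
        std::mutex lock_;
        bool is_locked_ = false;
      public:
        void lock() {
            lock_.lock();
            is_locked_ = true;
        }
        void unlock() {
            is_locked_ = false;
            lock_.unlock();
        }
        // The assertion just reads the flag; no try_lock/unlock dance.
        bool is_locked() const { return is_locked_; }
    };

    #define ASSERT_IS_LOCKED(m) assert((m).is_locked())

Because it exposes lock()/unlock(), it still works with std::lock_guard.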
Use wcstring/string instead of a character array. The variable
`term_env` was not being freed before the function exited.
Fixes defect 7520324 in Coverity Scan.
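A minimal before/after sketch of the pattern (the variable value and
surrounding code are illustrative):

    #include <string>

    typedef std::wstring wcstring;  // fish's wide-string typedef

    // Before (leak-prone): a manually allocated buffer that every return
    // path must remember to free.
    //   wchar_t *term_env = wcsdup(...);
    //   ...early return -> term_env leaks...
    //
    // After: the string owns its storage and releases it automatically.
    void example() {
        wcstring term_env = L"xterm-256color";
        // ...use term_env; nothing to free on any return path...
    }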