Seems like size_t is unnecessarily large here as well, since elsewhere
in the code we clamp down to uint32_t / source_offset_t.
This makes tok_t more like 16 bytes. More cleanup seems desirable,
as this is not very well harmonized across our code base.
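A rough sketch of the direction, with illustrative field names rather than the actual tok_t layout: shrinking the offset and length fields to a 32-bit source_offset_t keeps the struct small on 64-bit platforms.
```cpp
// Illustrative only: a compact token record using 32-bit source offsets.
#include <cstdint>

using source_offset_t = uint32_t;

enum class token_type : uint8_t { string, pipe, redirect, background, end };

struct tok_t {
    source_offset_t offset = 0;  // where the token starts in the source string
    source_offset_t length = 0;  // how many characters it spans
    token_type type = token_type::string;
    uint8_t error_flags = 0;     // room for tokenizer error bits
};
// 4 + 4 + 1 + 1 bytes, padded to 12 here; a pair of size_t alone would be 16.
```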
In preparation for applying variable assignments (VAR=VAL cmd), separate
them out from the command when performing completions. This includes both
those that the user typed, and any that come about through
completion --wraps.
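A minimal sketch of the separation step, with hypothetical helper names (not the actual fish code): leading NAME=value tokens are peeled off so completion can look at the command that follows them.
```cpp
// Hypothetical sketch: split leading variable assignments from the command words.
#include <cwctype>
#include <string>
#include <vector>

using wcstring = std::wstring;

// A token counts as a variable assignment if it looks like NAME=value, where
// NAME is a plausible variable name (letters, digits, underscores, and not
// starting with a digit).
static bool is_variable_assignment(const wcstring &tok) {
    size_t eq = tok.find(L'=');
    if (eq == wcstring::npos || eq == 0) return false;
    if (std::iswdigit(tok[0])) return false;
    for (size_t i = 0; i < eq; i++) {
        wchar_t c = tok[i];
        if (!std::iswalnum(c) && c != L'_') return false;
    }
    return true;
}

// Collect the leading assignments, leaving the remaining words as the command
// that completion should actually operate on.
static void separate_assignments(const std::vector<wcstring> &tokens,
                                 std::vector<wcstring> *assignments,
                                 std::vector<wcstring> *command) {
    size_t i = 0;
    while (i < tokens.size() && is_variable_assignment(tokens[i])) {
        assignments->push_back(tokens[i++]);
    }
    command->assign(tokens.begin() + i, tokens.end());
}
```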
Prior to this change, a process, after being constructed by
parse_execution but before being executed, was given a list of
io_data_t redirections. The problem is that redirections have a
sensitive ownership policy because they hold onto fds. This made it
rather hard to reason about fd lifetime.
Change these to redirection_spec_t. This is a textual description
of a redirection after expansion. It does not represent an open file and
so its lifetime is no longer important.
This enables files to be held only on the stack, so they are no longer
owned by a process of indeterminate lifetime.
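A minimal sketch of the shape of redirection_spec_t, with simplified names and modes assumed for illustration:
```cpp
// Purely textual description of a redirection; owns no file descriptors, so it
// can be copied and stored freely.
#include <string>

using wcstring = std::wstring;

enum class redirection_mode_t {
    input,      // cmd < file
    overwrite,  // cmd > file
    append,     // cmd >> file
    fd          // cmd 2>&1 style fd duplication
};

struct redirection_spec_t {
    int fd;                   // the fd being redirected (0, 1, 2, ...)
    redirection_mode_t mode;  // which kind of redirection this is
    wcstring target;          // the expanded target: a path, or an fd as text
};
// Files are opened from these specs only at execution time, so the resulting
// fds can live on the stack instead of being owned by the process object.
```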
Presently the completion engine ignores builtins that are part of the
fish syntax. This can be a problem when completing a string that was
based on the output of `commandline -p`. This changes completions to
treat these builtins like any other command.
This also disables generic (filename) completion inside comments and
after strings that do not tokenize.
Additionally, comments are stripped off the output of `commandline -p`.
Fixes #5415. Fixes #2705.
Mostly related to uses of _(L"foo"), keeping in mind that the _
macro already does a wcstring().c_str().
Plus a smattering of other trivial micro-optimizations certain
not to help tangibly.
Rather than having tokenizer_error as pointers to objects, switch it back
to just an error code value. This makes it easier to reason about, since
we deal in immutable values instead of mutable objects, and it avoids
allocation during startup.
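The shape of the change, sketched with illustrative names: errors become plain enum values, and message text is looked up from the code on demand rather than stored in heap-allocated error objects.
```cpp
// Illustrative sketch; not the exact enumerators or messages used by fish.
enum class tokenizer_error_t {
    none,
    unterminated_quote,
    unterminated_subshell,
    unterminated_escape,
    invalid_redirect,
    invalid_pipe,
};

// Immutable value in, message out: no mutable error object, no allocation.
const wchar_t *tokenizer_error_message(tokenizer_error_t err) {
    switch (err) {
        case tokenizer_error_t::none:
            return L"";
        case tokenizer_error_t::unterminated_quote:
            return L"Unexpected end of string, quotes are not balanced";
        default:
            return L"Tokenizer error";
    }
}
```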
Work around #4810 by retrieving localizations at runtime to avoid issues
possibly caused by inserting into the static unordered_map during static
initialization.
Closes #810.
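The general pattern looks roughly like this (hypothetical names; fish's actual lookup differs): make the map a function-local static so it is built lazily at runtime, after static initialization has finished.
```cpp
#include <string>
#include <unordered_map>

using wcstring = std::wstring;

// Hypothetical: return a localized string, caching results in a map that is
// constructed on the first call rather than during static initialization.
static const wcstring &localized(const wcstring &key) {
    static std::unordered_map<wcstring, wcstring> cache;  // built at runtime
    auto iter = cache.find(key);
    if (iter == cache.end()) {
        // A real implementation would consult the catalog (e.g. gettext) here;
        // this stand-in just maps the key to itself.
        iter = cache.emplace(key, key).first;
    }
    return iter->second;
}
```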
Line continuations (i.e. escaped newlines) now make sense again. With
the recently added smart pipe support (pipes continue on to the next line),
the hack of having continuations ignore comments no longer makes sense.
This is valid code:
```fish
echo hello |
# comment here
tr -d 'l'
```
This isn't:
```fish
echo hello | \
# comment here
tr -d 'l'
```
Reverts @snnw's 318daaffb2. Closes #2928. Closes #2929.
Prior to this fix, each redirection type was a separate token_type.
Unify these under a single type TOK_REDIRECT and break the redirection
type out into a new sub-type redirection_type_t.
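Sketched before and after, with the exact enumerators being assumptions rather than fish's real definitions:
```cpp
// Before: each redirection form was its own token type.
enum token_type_old {
    TOK_STRING_OLD,
    TOK_PIPE_OLD,
    TOK_REDIRECT_OUT,     // >
    TOK_REDIRECT_APPEND,  // >>
    TOK_REDIRECT_IN,      // <
    TOK_REDIRECT_FD,      // >&
    TOK_REDIRECT_NOCLOB,  // >?
};

// After: a single TOK_REDIRECT, with the specific form carried in a sub-type.
enum token_type_new { TOK_STRING, TOK_PIPE, TOK_REDIRECT };

enum class redirection_type_t { overwrite, append, input, fd, noclob };
```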
Prior to this, the tokenizer ran "one ahead", so that tokenizer_t::next()
would in fact return the last-parsed token. Switch to parsing on demand
instead of running one ahead; this is simpler and prepares for tokenizer
changes.
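A toy illustration of the control-flow difference, using a pre-split token list so the sketch stays self-contained (this is not fish's tokenizer):
```cpp
#include <cstddef>
#include <optional>
#include <string>
#include <utility>
#include <vector>

using tok_t = std::wstring;

// Before: the constructor parsed eagerly, and next() returned the token parsed
// on the previous call while immediately parsing one more ahead.
class lookahead_tokenizer_t {
    std::vector<tok_t> src_;
    size_t idx_ = 0;
    std::optional<tok_t> lookahead_;

   public:
    explicit lookahead_tokenizer_t(std::vector<tok_t> src) : src_(std::move(src)) {
        lookahead_ = parse_one();  // work happens before anyone asks for a token
    }
    std::optional<tok_t> next() {
        auto result = lookahead_;
        lookahead_ = parse_one();  // run one ahead
        return result;
    }

   private:
    std::optional<tok_t> parse_one() {
        if (idx_ >= src_.size()) return std::nullopt;
        return src_[idx_++];
    }
};

// After: next() parses on demand; nothing is produced until it is requested.
class on_demand_tokenizer_t {
    std::vector<tok_t> src_;
    size_t idx_ = 0;

   public:
    explicit on_demand_tokenizer_t(std::vector<tok_t> src) : src_(std::move(src)) {}
    std::optional<tok_t> next() {
        if (idx_ >= src_.size()) return std::nullopt;
        return src_[idx_++];
    }
};
```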
The previous attempt to support newlines after pipes changed the lexer to
swallow newlines after encountering a pipe. This has two problems that are
difficult to fix:
1. comments cannot be placed after the pipe
2. fish_indent won't know about the newlines, so it will erase them
Address these problems by removing the lexer behavior, and replacing it
with a new parser symbol "optional_newlines" allowing the newlines to be
reflected directly in the fish grammar.
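A minimal sketch of the grammar-level approach; the production names are illustrative, not fish's actual grammar:
```cpp
// In grammar terms the newlines stay visible to the parser, roughly:
//
//   job_continuation  ->  <PIPE> optional_newlines statement job_continuation
//                      |  <empty>
//   optional_newlines ->  <NEWLINE> optional_newlines | <empty>
//
// A recursive-descent equivalent accepts any run of newline tokens at that point:
#include <cstddef>
#include <vector>

enum class token_t { string, pipe, newline, end };

size_t accept_optional_newlines(const std::vector<token_t> &toks, size_t idx) {
    while (idx < toks.size() && toks[idx] == token_t::newline) ++idx;
    return idx;
}
```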
Remove the "make iwyu" build target. Move the functionality into the
recently introduced lint.fish script. Fix a lot, but not all, of the
include-what-you-use errors. Specifically, this change fixes all of the IWYU errors
on my OS X server but only some of them on my Ubuntu 14.04 server.
Fixes #2957