Add helper to cache `| source` completions
We have a lot of completions that look like
```fish
pip completion --fish 2>/dev/null | source
```
That's *fine*, upstream gives us some support.
However, the scripts they provide change very rarely, usually not even
every release, and so running them again for every shell is extremely
wasteful.
In particular the python tools are very slow, `pip completion --fish`
takes about 180ms on my system with a hot cache, which is quite
noticeable.
So we run them once, store the output in a file in our cache
directory, and then serve from that.
We store the mtime of the command we ran, and compare against it on
future runs. If the mtime differs, i.e. if the command was up- or
downgraded, we run it again.
2024-01-17 00:22:18 +08:00
```fish
function __fish_cache_sourced_completions
    # Allow a `--name=foo` option which ends up in the filename.
    argparse -s name= -- $argv
    or return

    set -q argv[1]
    or return 1

    set -l cmd (command -s $argv[1])
    or begin
        # If we have no command, we can't get an mtime,
        # and so we can't cache.
        # The caller can more easily retry.
        return 127
    end

    set -l cachedir (__fish_make_cache_dir completions)
    or return

    set -l stampfile $cachedir/$argv[1].stamp
    set -l compfile $cachedir/$argv[1].fish

    set -l mtime (path mtime -- $cmd)

    set -l cmtime 0
    path is -rf -- $stampfile
    and read cmtime <$stampfile

    # If either the timestamp or the completion file doesn't exist,
    # or the mtime differs, we rerun.
    #
    # That means we'll rerun if the command was up- or downgraded.
    if path is -vrf -- $stampfile $compfile || test "$cmtime" -ne "$mtime" 2>/dev/null
        $argv >$compfile
        # If the command exited unsuccessfully, we assume it didn't work.
        or return 2
        echo -- $mtime >$stampfile
    end

    if path is -rf -- $compfile
        source $compfile
        return 0
    end
    return 3
end
```
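A completion script can then swap the `| source` pattern for a call to this helper. A minimal sketch of such a call site (hypothetical file, mirroring the pip example above):

```fish
# completions/pip.fish: run `pip completion --fish` once, cache the
# output under the cache directory, and source the cached copy on
# subsequent shells until pip's mtime changes.
__fish_cache_sourced_completions pip completion --fish
```

On a cache hit this avoids spawning the slow Python process entirely; the helper's non-zero return values (127 for a missing command, 2 for a failed generator, 3 for a missing cache file) let the caller fall back or retry as it sees fit.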