gitea/vendor/github.com/klauspost/compress/huff0
Latest commit 792b4dba2c by 6543: [Vendor] Update directly used dependencys (#15593)
* update github.com/blevesearch/bleve v2.0.2 -> v2.0.3

* github.com/denisenkom/go-mssqldb v0.9.0 -> v0.10.0

* github.com/editorconfig/editorconfig-core-go v2.4.1 -> v2.4.2

* github.com/go-chi/cors v1.1.1 -> v1.2.0

* github.com/go-git/go-billy v5.0.0 -> v5.1.0

* github.com/go-git/go-git v5.2.0 -> v5.3.0

* github.com/go-ldap/ldap v3.2.4 -> v3.3.0

* github.com/go-redis/redis v8.6.0 -> v8.8.2

* github.com/go-sql-driver/mysql v1.5.0 -> v1.6.0

* github.com/go-swagger/go-swagger v0.26.1 -> v0.27.0

* github.com/lib/pq v1.9.0 -> v1.10.1

* github.com/mattn/go-sqlite3 v1.14.6 -> v1.14.7

* github.com/go-testfixtures/testfixtures v3.5.0 -> v3.6.0

* github.com/issue9/identicon v1.0.1 -> v1.2.0

* github.com/klauspost/compress v1.11.8 -> v1.12.1

* github.com/mgechev/revive v1.0.3 -> v1.0.6

* github.com/microcosm-cc/bluemonday v1.0.7 -> v1.0.8

* github.com/niklasfasching/go-org v1.4.0 -> v1.5.0

* github.com/olivere/elastic v7.0.22 -> v7.0.24

* github.com/pelletier/go-toml v1.8.1 -> v1.9.0

* github.com/prometheus/client_golang v1.9.0 -> v1.10.0

* github.com/xanzy/go-gitlab v0.44.0 -> v0.48.0

* github.com/yuin/goldmark v1.3.3 -> v1.3.5

* github.com/6543/go-version v1.2.4 -> v1.3.1

* do github.com/lib/pq v1.10.0 -> v1.10.1 again ...
2021-04-22 20:08:53 -04:00
.gitignore Dump: add output format tar and output to stdout (#10376) 2020-06-05 16:47:39 -04:00
bitreader.go Macaron 1.5 (#12596) 2020-08-27 22:47:17 -04:00
bitwriter.go Macaron 1.5 (#12596) 2020-08-27 22:47:17 -04:00
bytereader.go Dump: add output format tar and output to stdout (#10376) 2020-06-05 16:47:39 -04:00
compress.go [Vendor] Update directly used dependencys (#15593) 2021-04-22 20:08:53 -04:00
decompress.go Macaron 1.5 (#12596) 2020-08-27 22:47:17 -04:00
huff0.go Vendor Update Go Libs (#13166) 2020-10-16 01:06:27 -04:00
README.md Vendor Update (#14496) 2021-01-28 17:56:38 +01:00

Huff0 entropy compression

This package provides Huff0 encoding and decoding as used in zstd.

Huff0 is a Huffman codec designed for modern CPUs, featuring OoO (Out of Order) operations on multiple ALUs (Arithmetic Logic Units), achieving extremely fast compression and decompression speeds.

This can be used for compressing input with many similar values down to the smallest number of bytes.
It does not perform multi-byte dictionary coding as LZ coders do,
but it can be used as a secondary step after compressors (like Snappy) that do not do entropy encoding.

News

This is used as part of the zstandard compression and decompression package.

This ensures that most functionality is well tested.

Usage

This package provides a low-level interface that allows you to compress single, independent blocks.

Each block is separate, and there are no built-in integrity checks.
This means the caller should keep track of block sizes and add checksums if needed.

Compressing a block is done via the Compress1X and
Compress4X functions.
You must provide the input and will receive the compressed output and possibly an error.

These error values can be returned:

Error              Description
<nil>              Everything OK, output is returned.
ErrIncompressible  Returned when input is judged to be too hard to compress.
ErrUseRLE          Returned from the compressor when the input is a single byte value repeated.
ErrTooBig          Returned if the input block exceeds the maximum allowed size (128 KiB).
(error)            An internal error occurred.

As can be seen above, some of these errors will be returned even under normal operation, so it is important to handle them.
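
A minimal sketch of a single-block compression call with the error handling described above; the sample input and the raw/RLE fallback choices are illustrative, not part of the package:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/huff0"
)

func main() {
	// Highly repetitive input compresses well with pure entropy coding.
	in := bytes.Repeat([]byte("abcabcabd"), 1000)

	// A Scratch can be reused between calls to avoid allocations.
	var s huff0.Scratch

	out, _, err := huff0.Compress1X(in, &s)
	switch err {
	case nil:
		// The caller is responsible for remembering the original block size.
		fmt.Printf("compressed %d -> %d bytes\n", len(in), len(out))
	case huff0.ErrIncompressible:
		// Store the block uncompressed instead (fallback is up to the caller).
		fmt.Println("input judged incompressible, storing raw")
	case huff0.ErrUseRLE:
		// Input is a single repeated byte; store value + length instead.
		fmt.Println("input is a single repeated byte, use RLE")
	case huff0.ErrTooBig:
		log.Fatal("block exceeds the 128 KiB limit, split it first")
	default:
		log.Fatal(err)
	}
}
```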

To reduce allocations you can provide a Scratch object
that can be re-used for successive calls. Both compression and decompression accept a Scratch object, and the same
object can be used for both.

Be aware that when re-using a Scratch object, the output buffer is also re-used, so if you are still using the previous
output you must set the Out field of the scratch to nil. The same buffer is used for compression and decompression output.

The Scratch object will retain state that allows previous tables to be re-used for encoding and decoding.
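
Since the returned output aliases the Out buffer, a small sketch of keeping an earlier result alive while reusing one Scratch; the sample blocks are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/huff0"
)

func main() {
	var s huff0.Scratch
	first := bytes.Repeat([]byte("first block "), 300)
	second := bytes.Repeat([]byte("second block "), 300)

	out1, _, err := huff0.Compress1X(first, &s)
	if err != nil {
		log.Fatal(err)
	}
	// out1 aliases s.Out; detach it before the next call re-uses the buffer.
	keep := append([]byte(nil), out1...) // alternatively: s.Out = nil

	out2, _, err := huff0.Compress1X(second, &s)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(keep), len(out2))
}
```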

Tables and re-use

Huff0 allows for reusing tables from the previous block to save space if that is expected to give better/faster results.

The Scratch object allows you to set a ReusePolicy
that controls this behaviour. See the documentation for details. This can be altered between each block.

Do, however, note that this information is not stored in the output block, and it is up to the users of the package to
record whether ReadTable should be called,
based on the boolean reported back from the CompressXX call.

If you want to store the table separate from the data, you can access them as OutData and OutTable on the
Scratch object.
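
A sketch of setting a ReusePolicy and reading OutTable/OutData after each block; the sample blocks and the printed summary are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/huff0"
)

func main() {
	var s huff0.Scratch
	s.Reuse = huff0.ReusePolicyAllow // allow table reuse when it gives smaller output

	blocks := [][]byte{
		bytes.Repeat([]byte("hello huff0 "), 500),
		bytes.Repeat([]byte("hello again "), 500),
	}
	for i, blk := range blocks {
		_, reused, err := huff0.Compress1X(blk, &s)
		if err != nil {
			log.Fatal(err) // ErrIncompressible/ErrUseRLE would be handled as raw/RLE in real code
		}
		// OutTable and OutData are sub-slices of the full output; a new table is
		// only emitted when the previous one was not reused.
		fmt.Printf("block %d: table=%d bytes, data=%d bytes, reused=%v\n",
			i, len(s.OutTable), len(s.OutData), reused)
	}
}
```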

Decompressing

The first part of decoding is to initialize the decoding tables through ReadTable.
You can supply the complete block to ReadTable, and it will return the data part of the block,
which can be given to the decompressor.

Decompressing is done by calling the Decompress1X
or Decompress4X function.
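
A minimal round-trip sketch using ReadTable followed by Decompress1X; passing nil instead of an existing Scratch lets the package allocate one, and the sample input is illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/huff0"
)

func main() {
	in := bytes.Repeat([]byte("entropy coding "), 400)

	// Compress one block; the returned slice contains table + data.
	comp, _, err := huff0.Compress1X(in, nil)
	if err != nil {
		log.Fatal(err)
	}

	// ReadTable initializes the decoding tables and returns the data part.
	s, data, err := huff0.ReadTable(comp, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Decompress the data part using the freshly read table.
	out, err := s.Decompress1X(data)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("roundtrip ok:", bytes.Equal(in, out))
}
```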

For concurrently decompressing content with a fixed table, a stateless Decoder can be requested, which will remain correct as long as the scratch is unchanged. The capacity of the provided slice indicates the expected output size.
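
A sketch of that pattern, with the goroutine fan-out and the expected-size bookkeeping as assumptions; the Decoder is created once after ReadTable and then shared:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"sync"

	"github.com/klauspost/compress/huff0"
)

func main() {
	in := bytes.Repeat([]byte("shared table "), 400)
	comp, _, err := huff0.Compress1X(in, nil)
	if err != nil {
		log.Fatal(err)
	}
	s, data, err := huff0.ReadTable(comp, nil)
	if err != nil {
		log.Fatal(err)
	}

	// One stateless Decoder can serve many goroutines, as long as the
	// Scratch it came from is left untouched.
	dec := s.Decoder()

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// The capacity of dst is the expected (maximum) output size.
			out, err := dec.Decompress1X(make([]byte, 0, len(in)), data)
			if err != nil || !bytes.Equal(out, in) {
				log.Println("decode mismatch:", err)
			}
		}()
	}
	wg.Wait()
	fmt.Println("done")
}
```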

You must provide the output from the compression stage, at exactly the size you got back. If you receive an error back,
your input was likely corrupted.

It is important to note that a successful decoding does not mean your output matches your original input.
There are no integrity checks, so relying on errors from the decompressor does not assure your data is valid.

Contributing

Contributions are always welcome. Be aware that adding public functions will require good justification, and breaking
changes will likely not be accepted. If in doubt, open an issue before writing the PR.