If I use the default model, compression is somewhat effective (my text is mostly English).
However, if I first train a model on the very data I am about to compress and then compress with that model, the result is 3x bigger than the original data.
I used the default options and just fed the trainer an IEnumerable with all the strings I want to encode, about 72 MB of data in total.
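For concreteness, here is roughly the shape of what I'm doing. This is a minimal sketch assuming a Zstandard dictionary workflow via ZstdNet (the library choice, the `DictBuilder.TrainFromBuffer` / `CompressionOptions` / `Compressor.Wrap` calls, and the sample strings are illustrative assumptions, not my exact code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using ZstdNet;

class TrainedDictionaryExample
{
    static void Main()
    {
        // Stand-ins for the ~72 MB of mostly-English strings.
        IEnumerable<string> strings = new[]
        {
            "the quick brown fox jumps over the lazy dog",
            "another fairly ordinary English sentence",
            // ... many more ...
        };

        // The trainer expects one byte[] per sample.
        var samples = strings.Select(s => Encoding.UTF8.GetBytes(s)).ToList();

        // Train a dictionary ("model") on the same data that will be compressed,
        // using default options.
        byte[] dict = DictBuilder.TrainFromBuffer(samples);

        using var options = new CompressionOptions(dict);
        using var compressor = new Compressor(options);

        long inputBytes = 0, outputBytes = 0;
        foreach (var sample in samples)
        {
            // Compress each string as its own frame, with the trained dictionary.
            byte[] compressed = compressor.Wrap(sample);
            inputBytes += sample.Length;
            outputBytes += compressed.Length;
        }

        Console.WriteLine($"in: {inputBytes} bytes, out: {outputBytes} bytes");
    }
}
```

Note the sketch compresses each string separately rather than the whole 72 MB as one blob, since trained dictionaries are meant for many small independent payloads; that matches how I'm feeding the strings in.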