
doofoo24 29-12-2017 14:03

new tool

bink compressor - bpk 0.2.6
unreal engine lzo - UELR 0.1.0
also lolz & other compressors. Enjoy ;)

KaktoR 29-12-2017 15:03

Anyone can send me the tools in PM?

doofoo24 29-12-2017 15:09

i attached the tools in the first post

uelr_0.1.0b for Unreal Engine game files like .tfc
OGGRE 0.1.1 for audio files
bpk_v0.2.6 for bink video
new srep 0.3
and two compressors in the folders lolz21a5c and dlz_v0.2.2b

KaktoR 29-12-2017 15:14

Thank you

google translate:

bpk - compressor for bink video files, first and second versions.
The exe files in the package handle a single file at a time; the cls DLLs do batch compression through freearc. cls-bpk.dll is used only for compression (it has no decompression part); cls-bpk_u.dll is only for unpacking (rename it to cls-bpk.dll first).
The compression ratio on the first version of bink is slightly better than on the second, because in the second version the format's authors tightened the built-in compression.
bpk compresses and decompresses in several threads. For the first version of bink the average number of cores used is 2-3; for the second version it does not even reach 2.
On some bk2 files with an alpha channel there are definitely bugs in the processing. I have the samples, so there is no need to report them; I have no desire to fix it.


lolz is a compressor built on adaptive rANS. It is suitable for any data, but shows its best results on structured data. It has special models for dxt textures and raw graphics (in the distant future, perhaps, one for raw audio will appear too). It can compress and decompress multithreaded.
lolz has quite a few options, but some of them no longer work because there was no further need for them; they just have not been removed from the codec yet. The default options are optimal in most cases.
Short description of the options:
Data Detection Options:
-dt [0..1] - enables/disables detection of pos_ctx/dxt/raw. There are no header signatures; everything is detected from statistical analysis of the data. Default: -dt1;
-dtp [0..1] - enables/disables passing detector statistics from previous blocks on to subsequent blocks. Default: -dtp1;
-dtb [0..1] - enables/disables brute-force testing of all variants regardless of the heuristics. Default: -dtb0;
-dto [0..1] - enables/disables detection of the best positional o1 context. Default: -dto1;
-dtm [0..1] - enables/disables detection of multimedia raw graphics. Default: -dtm1;
-dtw [0..1] - enables/disables width detection for raw graphics and dxt textures;
-dtd [0..1] - enables/disables detection of dxt textures;
Multithread Options:
-mtt [0..1] - selects the multithreading mode. With 0, the dictionary size must be at least 2x the block size. In this mode the data for each thread is loaded in alternating block-sized chunks; in most cases it achieves better compression than the second mode, but unpacking requires as many threads as compression used. With 1, each block is compressed independently, with no dependencies on adjacent data, so compression here is usually worse than in the first mode, but any number of threads can be used for unpacking. The MaxThreadsUsage and MaxMemoryUsage options from cls.ini apply to this mode. Default: -mtt0;
-mt [1..16] - specifies the number of threads. With -mt1 and -mtt0 you get the usual sequential compression, with no compression loss from splitting the stream into blocks. Default: -mt1;
-mtb [2..512] - specifies the block size in MB. It also has an effect with -mt1 -mtt0, but a minimal one, and bigger does not mean better. Usually the optimal value for -mtt0 is around 32-64mb, with the dictionary at least 2x larger. For the -mtt1 mode, the dictionary size must be no more than the block size;
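The -mtt/-d/-mtb interplay above is easy to get wrong, so here is a minimal sketch of the stated constraints as a validity check (the function name and shape are my own, only the rules come from the post):

```python
# Sketch of the dictionary/block-size rules described above, as stated:
# for -mtt0 the dictionary (-d) must be at least 2x the block size (-mtb);
# for -mtt1 the dictionary must be no more than the block size.
def check_lolz_mt(mtt: int, dict_mb: int, block_mb: int) -> bool:
    """Return True if the -d/-mtb combination is valid for the given -mtt mode."""
    if mtt == 0:
        return dict_mb >= 2 * block_mb
    return dict_mb <= block_mb

# e.g. the suggested sweet spot: 32-64 MB blocks with a dictionary twice as large
print(check_lolz_mt(0, 128, 64))  # valid
print(check_lolz_mt(0, 64, 64))   # invalid: dictionary too small for -mtt0
```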
Basic options:
-d [16..2032] - the dictionary size in MB. Default: -d64;
-tt [1..256] - the number of paths considered by the optimal parser. Very strongly affects speed and compression ratio, but not unpacking. Do not set it above 16; I assure you, it's not worth it. Default: -tt4;
-oh [8..14] - specifies the maximum number of bytes the optimal parser will process at a time (2^X). Default: -oh12;
-os [0..-oh] - specifies the minimum number of bytes the parser will process at a time (2^X). Default: -os8;
-fba [0..4096] - specifies the minimum match length at which the parser stops being so thorough in its calculations. Compression is noticeably accelerated (about 2x) with a slight loss of ratio. At 0 these simplifications are turned off. Default: -fba256;
-fbb [0..4096] - THIS OPTION DOES NOT WORK AT THE MOMENT. It enabled even greater simplifications;
-al [0..1] - enables/disables computing the literal price even when there is a rep0 match. Default: -al1;
-x [0..2] - enables slow parser modes that evaluate (almost) all lengths of every match found, as well as the match+lit+rep0match variants. Very slow and merciless, and the benefit is very small; it is simpler to raise -tt instead. Default: -x0;
Match search options (matchfinder):
-rt [0..2] - THIS OPTION DOES NOT WORK AT THE MOMENT. It set the matchfinder type - lz, rolz or a hybrid mode - but rolz did not live up to expectations and I made all further changes without taking it into account, so it no longer works; Default: -rt0;
-mc [2..1023] - specifies the maximum number of traversals of the binary match tree, after which no more matches are searched for this position; Default: -mc128;
Model options:
-cm [0..1] - enables/disables a simple context mixer that mixes a couple of models in a few critical places. When enabled it improves compression but slows decompression. Default: -cm1;
-bc [0..8] - sets the level of influence of the previous byte on the mixer; Default: -bc4;
-lm [0..4] - THIS OPTION DOES NOT WORK AT THE MOMENT. It defined the type of the "elementary" literal. Complex high-order models with cm did not show much promise, so they were abandoned, as was rolz. Default: -lm0;
-blo [0..8] - specifies the degree of influence of the previous byte on the encoding of the upper part of the literal. Default: -blo8;
-bll [0..8] - specifies the degree of influence of the previous byte on the encoding of the lower part of the literal. Default: -bll8;
-blr [0..8] - specifies the degree of influence of the rep0lit byte on the encoding of the upper part of the literal. Default: -blr4;
-bm [0..8] - specifies the degree of influence of the rep0lit byte on the encoding of the match-type flag. Default: -bm4;
-pc [0..4] - sets the position context for all encoding operations. Automatically ignored when the detector is on. Default: -pc2;
-dmXY (X [0..3], Y [0..4]) - specifies the model for encoding color pairs (X) and alpha-channel pairs (Y). At the maximum value of each parameter, adaptive switching between models is used; decompression speed drops, but compression is better in most cases. Default: -dm34;
-gmXY (X [0..2], Y [0..1]) - X specifies the model for encoding raw graphics. At the maximum value, adaptive switching between models is enabled, although here you rarely see a gain from the adaptive mode; mode 0 mostly leads, but its unpacking is 2x slower than mode 1. Y enables updating the model statistics even when they were not used (for example, during a long match). For X0 and X1 it usually gives a small gain in compression, but speed drops by a factor of 2 (it all depends on the data). Overall the optimum is -gm00, which is also the default;
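With that many switches, a small helper for assembling a command line can help keep the defaults straight. A sketch (lolz_x64.exe and the flags-then-input-then-output layout come from the arc.ini snippets later in this thread; the helper itself is hypothetical):

```python
# Defaults as listed in the option descriptions above.
DEFAULTS = {"d": 64, "tt": 4, "oh": 12, "os": 8, "fba": 256, "mt": 1, "mtt": 0}

def lolz_cmd(infile: str, outfile: str, **overrides) -> str:
    """Build a lolz_x64.exe command line, overriding any of the defaults."""
    opts = {**DEFAULTS, **overrides}
    flags = " ".join(f"-{k}{v}" for k, v in opts.items())
    return f"lolz_x64.exe {flags} {infile} {outfile}"

# e.g. a bigger dictionary and a deeper optimal parser:
print(lolz_cmd("data.bin", "data.lolz", d=256, tt=8))
```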


DLZ is a compression algorithm based on lzma, with added models for compressing dxt and raw textures that have a dds header. Detection is done by the header.
Overall it compresses slightly better than lzma, and decently better on dds, but srep spoils everything.
The options are almost the same as lzma's, except:
-cc (context complexity), -cm (model complexity), -cu (context update), -dc, -em, -ep, -pc, -pl
I do not remember what they mean (the project is almost 5 years old), and what I do remember I am too lazy to write down. Those who need it will figure it out; the rest can pass it by.


UELR is a recompressor for unreal engine LZO streams.
I made it a very long time ago and do not remember anything about it, :sorry:


OGGRE - a compressor for ogg vorbis audio files. In the vorbis format the final entropy coding of the frames is done very well, so there is not much left to squeeze there. But the ogg headers and codebooks at the beginning of each ogg file are a completely different situation, so deduplication had to be applied to them. As a result, the gain over lzma is only 5-7%.
To compress via freearc, you need to add something like this to arc.ini:
[External compressor: oggre]
header = 0
packcmd = oggre_enc.exe {options} $$arcdatafile$$.tmp $$arcpackedfile$$.tmp
and use CLS-OGGRE.dll for unpacking.

I also made a separate version for compressing wwise vorbis - there they simply removed all the ogg headers and codebooks. The OGGRE compressor for wwise was very much slapped together, so to encode a file you must first convert it from .wav to .ogg with the ww2ogg utility and then compress it with oggre_enc_wwise.exe. A file unpacked through oggre_dec_wwise will correspond to the .wav file, but without the WAVE header. In short, a clean stream.

PS if you do not understand how to use these, or something does not work - that is your problem; I have abandoned this project.


cls-srep - a cls filter for unpacking srep via freearc. Unlike the previous version, my version uses cls-srep.dll (its internal name is cls-universal, and it is the same for all my projects that use it), which launches the exe matching the system bitness and exchanges data with it through shared memory, which is faster than the commonly used pipe. Also, the Memory field in cls.ini can be set as a percentage or as an absolute value, as in the console srep, except that unlike srep the percentage is taken not from the total memory on the computer but from the free memory. There are also a couple of minor technical tricks, but they are of no interest to anyone.
Overall, using the filter has not changed: put all the filter files next to unarc.dll/exe and change the TempPath field in cls.ini, specifying the path to the temp folder.
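For reference, the cls.ini fields mentioned above might look something like this (TempPath and Memory are the field names from the post; the section header and values here are assumptions):

```ini
; hypothetical cls.ini fragment -- only the field names come from the post
[CLS-SREP]
TempPath = C:\Temp
Memory   = 75%   ; percentage of *free* memory, per the description above
```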

KaktoR 29-12-2017 15:20

1 Attachment(s)
Really nice

Will test all night long

Big thanks to profragger!

c/d is really fast, and what is most important, file check is md5 perfect!

doofoo24 29-12-2017 16:43

so i tested uelr on mass effect 1 for the lzo files and bpk for the video
i got it from 10.4gb down to 3.25gb...
the setting: uelr+srep:m3f:a2+lzma:a1:mfbt4:d384m:fb273:mc10000:lc8...
for the bk files i used mbpk with cls, it writes at 30mb/s

[External compressor:srep]
header = 0
packcmd = srep {options} $$arcdatafile$$.tmp $$arcpackedfile$$.tmp

[External compressor:uelr]
header = 0
packcmd = uelr.exe uv $$arcdatafile$$.tmp $$arcpackedfile$$.tmp
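With those arc.ini entries in place, the chain quoted above would be passed to FreeArc as a single method string; a sketch (the `arc a ... -m` invocation is standard FreeArc usage, the archive name and file list are placeholders):

```
arc a mass_effect.arc -muelr+srep:m3f:a2+lzma:a1:mfbt4:d384m:fb273:mc10000:lc8 <game files>
```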

ZakirAhmad 29-12-2017 22:23

arc.ini settings of all compressors please
please provide arc.ini packcmd and unpackcmd settings. thanks

78372 30-12-2017 00:12

2 Attachment(s)
Seems rz is better than lolz

78372 30-12-2017 00:22


Originally Posted by ZakirAhmad (Post 465375)
please provide arc.ini packcmd and unpackcmd settings. thanks


[External compressor:dlz]
;options  = :d :lc :lp :fb :mc :ep :dc :em :cu :pc :pl
header = 0
packcmd  = dlz_x64.exe {options} $$arcdatafile$$.tmp $$arcpackedfile$$.tmp

[External compressor:lolz]
header = 0
packcmd = lolz_x64.exe {options} $$arcdatafile$$.tmp $$arcpackedfile$$.tmp

[External compressor:uelr]
header = 0
packcmd  = uelr.exe uv $$arcdatafile$$.tmp $$arcpackedfile$$.tmp

[External compressor:srep]
header = 0
packcmd = srep64 -m3f $$arcdatafile$$.tmp $$arcpackedfile$$.tmp

[External compressor:oggre]
header = 0
packcmd = OGGRE_enc $$arcdatafile$$.tmp $$arcpackedfile$$.tmp

ZakirAhmad 30-12-2017 00:31

thanks but also unpackcmd and also for bink
bink also

78372 30-12-2017 00:38


Originally Posted by ZakirAhmad (Post 465378)
bink also

bink doesn't need arc.ini, just use cls-bpk.dll

ZakirAhmad 30-12-2017 00:42

please how can i compress with bink

What settings can i use with lolz to get best results. thanks in advance

Next time EDIT your post instead of making another reply, only 5 mins apart, no one has posted between them

JustFun 30-12-2017 08:10

Anyone else getting the "Stopped Working Error" when decompressing with lolz.
Works fine when compressing, but when decompressing it stops working.
I am obviously missing something?

78372 30-12-2017 09:14


Originally Posted by JustFun (Post 465386)
Anyone else getting the "Stopped Working Error" when decompressing with lolz.
Works fine when compressing, but when decompressing it stops working.
I am obviously missing something?

Make sure your decompressor is running as administrator

JustFun 30-12-2017 10:23

It is running as admin.


Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2018, vBulletin Solutions Inc.
Copyright ©2000 - 2018, FileForums