Hacker News

Did a test on a 1.3G text file (output of `find -f -printf ...`); MacBook Pro M3 Max, 64 GB. All timings are "real" seconds from bash's builtin `time`.

    files.txt         1439563776
    bzip2 -9 -k       1026805779 71.3% c=67 d=53
    zstd --long -19   1002759868 69.7% c=357 d=9
    xz -T16 -9 -k      993376236 69.0% c=93 d=9
    zstd -T12 -16      989246823 68.7% c=14 d=9
    bzip3 -b 256       975153650 67.7% c=174 d=187
    bzip3 -b 256 -j12  975153650 67.7% c=46 d=189
    bzip3 -b 511       974113769 67.6% c=172 d=187
    bzip3 -b 511 -j12  974113769 67.6% c=77  d=186
I'll stick with zstd for now (unless I need to compress the Perl source, I guess.)
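For anyone double-checking the ratio column: it's just compressed size over original size. A quick sketch that recomputes it from the sizes in the table (awk assumed available; the compressors themselves aren't re-run here):

```shell
#!/bin/sh
# Recompute the ratio column: compressed size / original size * 100.
# Sizes are copied from the table above.
orig=1439563776
for entry in "bzip2 -9:1026805779" "zstd --long -19:1002759868" \
             "xz -9:993376236" "bzip3 -b 511:974113769"; do
  cmd=${entry%:*}
  size=${entry#*:}
  awk -v s="$size" -v o="$orig" -v c="$cmd" \
      'BEGIN { printf "%-16s %.1f%%\n", c, 100 * s / o }'
done
```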

(edited to add 12-thread runs of bzip3 and remove superfluous filenames)



Since I only have 12 perf cores on this Mac, I tried the xz test again with 12 threads.

    xz -T12 -9 -k      993376236 69.0% c=83 d=9
~10% faster compression with the same output size.
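The ~10% figure checks out against the two xz runs above (93 s at -T16 vs 83 s at -T12):

```shell
# Sanity-check the claimed speedup from the two xz compression times.
awk 'BEGIN { printf "xz speedup: %.1f%%\n", 100 * (93 - 83) / 93 }'
```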


That d=9 sure wins the day there, for me.



