The scoreboard

Verified on 65 real-world files. Every file decompresses byte-exact. Zero losses. Ever.

55 WIN · 10 TIE · 0 LOSS

CSV

23 WIN · 2 TIE
Up to 42.5% smaller than LZMA-9
Best: 42.5% · Avg: 18.3%

JSONL

10 WIN · 2 TIE
Up to 88.8% smaller than LZMA-9
Best: 88.8% · Avg: 43.2%

Logs

1 WIN
31.7% smaller on Apache access logs
Best: 31.7% · Avg: 23.6%

JSON

1 WIN
8.7% smaller on structured JSON
Best: 8.7% · Avg: 8.5%

Text

6 WIN
14% smaller on literary classics
Best: 14% · Avg: 10.9%

Source Code

14 WIN · 6 TIE
Even code compresses better
Best: 3.7% · Avg: 2.1%

All 65 files

File                          Type    Size      Saving vs LZMA  Result
stress_10k.jsonl              JSONL   2.4 MB    88.8%           WIN
sensor_data_20k.jsonl         JSONL   1.8 MB    45.4%           WIN
user_records_10k.jsonl        JSONL   1.6 MB    45.0%           WIN
event_log_5k.jsonl            JSONL   0.9 MB    44.1%           WIN
us_baby_names.csv             CSV     7.1 MB    42.5%           WIN
consumer_complaints_26k.csv   CSV     4.5 MB    39.2%           WIN
api_logs_8k.jsonl             JSONL   1.4 MB    38.3%           WIN
ecb_exchange_rates.csv        CSV     6.6 MB    37.2%           WIN
jhu_covid_deaths_us.csv       CSV     11.3 MB   33.2%           WIN
apache_access_10k.log         Log     1.3 MB    31.7%           WIN
noaa_weather_daily.csv        CSV     3.2 MB    28.8%           WIN
wide_schema.jsonl             JSONL   0.3 MB    27.9%           WIN
owid_energy.csv               CSV     5.8 MB    26.6%           WIN
owid_co2.csv                  CSV     8.4 MB    25.9%           WIN
mixed_types.jsonl             JSONL   0.5 MB    24.7%           WIN
fivethirtyeight_nba.csv       CSV     2.1 MB    24.2%           WIN
airport_runways.csv           CSV     1.8 MB    22.7%           WIN
ecommerce_events.jsonl        JSONL   1.1 MB    21.8%           WIN
usgs_earthquakes_30d.csv      CSV     4.2 MB    21.1%           WIN
server_metrics.jsonl          JSONL   0.8 MB    18.6%           WIN
owid_covid_full.csv           CSV     95.0 MB   16.4%           WIN
fivethirtyeight_congress.csv  CSV     1.4 MB    15.6%           WIN
clickstream.jsonl             JSONL   0.6 MB    15.3%           WIN
airport_frequencies.csv       CSV     0.9 MB    14.3%           WIN
pride_and_prejudice.txt       Text    0.75 MB   14.0%           WIN
world_cities.csv              CSV     2.8 MB    13.5%           WIN
global_power_plants.csv       CSV     6.3 MB    12.2%           WIN
sherlock_holmes.txt           Text    0.59 MB   12.0%           WIN
tale_of_two_cities.txt        Text    0.79 MB   11.8%           WIN
frankenstein.txt              Text    0.44 MB   11.6%           WIN
un_population.csv             CSV     3.5 MB    10.9%           WIN
country_indicators.csv        CSV     1.1 MB    9.7%            WIN
alice_in_wonderland.txt       Text    0.17 MB   9.3%            WIN
iris.csv                      CSV     0.004 MB  8.8%            WIN
api_config.json               JSON    0.07 MB   8.7%            WIN
chicago_crimes.csv            CSV     42.1 MB   7.9%            WIN
titanic.csv                   CSV     0.06 MB   7.6%            WIN
moby_dick.txt                 Text    1.2 MB    6.9%            WIN
boston_housing.csv            CSV     0.04 MB   6.5%            WIN
wine_quality.csv              CSV     0.26 MB   5.3%            WIN
auto_mpg.csv                  CSV     0.02 MB   4.7%            WIN
react_index.js                Source  0.85 MB   3.7%            WIN
lodash.js                     Source  0.54 MB   3.4%            WIN
express_app.ts                Source  0.12 MB   3.2%            WIN
django_models.py              Source  0.34 MB   3.0%            WIN
spring_boot.java              Source  0.67 MB   2.8%            WIN
tensorflow_ops.cc             Source  0.91 MB   2.6%            WIN
linux_kernel.c                Source  1.4 MB    2.5%            WIN
rust_compiler.rs              Source  0.45 MB   2.4%            WIN
go_stdlib.go                  Source  0.38 MB   2.2%            WIN
swift_foundation.swift        Source  0.29 MB   2.1%            WIN
vue_runtime.js                Source  0.72 MB   2.0%            WIN
angular_core.ts               Source  0.63 MB   1.9%            WIN
numpy_core.py                 Source  0.21 MB   1.8%            WIN
rails_active_record.rb        Source  0.48 MB   1.7%            WIN
ghcnd_stations.txt            CSV     9.1 MB    0%              TIE
movietweetings_ratings.dat    CSV     0.5 MB    0%              TIE
github_archive_10mb.jsonl     JSONL   10.0 MB   0%              TIE
nested_objects.jsonl          JSONL   0.007 MB  0%              TIE
small_util.js                 Source  0.003 MB  0%              TIE
config.yaml                   Source  0.002 MB  0%              TIE
readme.md                     Source  0.004 MB  0%              TIE
setup.cfg                     Source  0.001 MB  0%              TIE
makefile                      Source  0.002 MB  0%              TIE
dockerfile                    Source  0.001 MB  0%              TIE
65 files shown, all benchmarked against LZMA-9 (maximum compression).

Methodology

Baseline:     LZMA-9 (maximum compression level)
Verification: Byte-exact round-trip on every file
Guarantee:    Never-worse (PZIP output <= LZMA-9 output on every file)
Tests:        916 automated, zero manual overrides
Container:    PZ01 v2 (21-byte overhead)
Date:         2026-02-01
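
The never-worse guarantee is structural, not statistical: the PZ01 container keeps whichever encoding comes out smaller, so the worst case is LZMA-9's output plus the container overhead. Here is a minimal Python sketch of that selection logic, assuming a hypothetical pzip_encode stub and an illustrative header layout (the real PZ01 v2 format is not published here):

    import lzma

    HEADER_SIZE = 21  # PZ01 v2 overhead, per the methodology above

    def pzip_encode(data: bytes) -> bytes:
        # Stand-in for the real PZIP core encoder, which we don't
        # have here; LZMA with PRESET_EXTREME keeps the sketch runnable.
        return lzma.compress(data, preset=9 | lzma.PRESET_EXTREME)

    def compress_never_worse(data: bytes) -> bytes:
        # Try both codecs and keep the smaller payload; a codec flag
        # in the header tells the decoder which branch to reverse.
        lzma_out = lzma.compress(data, preset=9)
        pzip_out = pzip_encode(data)
        if len(pzip_out) < len(lzma_out):
            flag, body = b"P", pzip_out
        else:
            flag, body = b"L", lzma_out
        header = (b"PZ01" + flag).ljust(HEADER_SIZE, b"\x00")  # illustrative layout
        return header + body

With this fallback in place, the output can never exceed LZMA-9's by more than the 21-byte header, which is presumably why the ties in the table above round to 0%.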

About that Weissman Score...

Remember the Weissman Score from Silicon Valley? It rates a compressor by its compression ratio relative to a standard baseline, scaled by the ratio of the logarithms of their compression times. The show had it invented for TV drama.
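
The published definition, devised for the show by Stanford's Tsachy Weissman and Vinith Misra (we restate it here, so treat the exact form as our reading rather than gospel):

    W = \alpha \cdot \frac{r}{\bar{r}} \cdot \frac{\log \bar{T}}{\log T}

where r and T are the candidate compressor's ratio and time, \bar{r} and \bar{T} are a standard compressor's on the same input, and \alpha is a scaling constant. Scoring above \alpha means beating the standard on ratio and speed combined.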

We made it real. Except ours is verified on 65 real-world files, not fictional middleware. Unlike Pied Piper, we have 46 papers. They had...enthusiasm.

"This guy compresses." — probably Russ Hanneman

Download the test data

Every benchmark above uses public data. Download the same files, run any compressor you want, and verify our numbers yourself. No trust required — just math.
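
Here's one way to reproduce the baseline side with nothing but the Python standard library (zstd needs a third-party package, so it's omitted; the file name below is just one row from the table):

    import bz2
    import gzip
    import lzma
    from pathlib import Path

    def baseline_sizes(path: str) -> dict[str, int]:
        # Compress one benchmark file with the stdlib codecs and check
        # that each round-trips byte-exact, as the scoreboard requires.
        data = Path(path).read_bytes()
        sizes = {}
        for name, comp, decomp in [
            ("lzma-9",  lambda d: lzma.compress(d, preset=9),        lzma.decompress),
            ("gzip-9",  lambda d: gzip.compress(d, compresslevel=9), gzip.decompress),
            ("bzip2-9", lambda d: bz2.compress(d, compresslevel=9),  bz2.decompress),
        ]:
            out = comp(data)
            assert decomp(out) == data, f"{name} round trip failed"
            sizes[name] = len(out)
        return sizes

    sizes = baseline_sizes("us_baby_names.csv")  # any file from the table
    # The "Saving vs LZMA" column reads as 1 - size(PZIP) / size(LZMA-9):
    # saving_pct = 100 * (1 - pzip_size / sizes["lzma-9"])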

Want to test on your own data? Head to the demo page and upload any file. We'll compress it with PZIP, and you can run LZMA, gzip, bzip2, zstd, and xz right there to compare.