A CLI tool for viewing, querying, and converting tabular data files. Reads CSV, TSV, JSON Lines, Parquet, Avro, SQLite tables, and DuckDB tables -- locally or from S3, GCS, and Azure Blob Storage.
```
pip install tab-cli
```

Documentation: tongfei.me/tab
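To try the commands below on a throwaway file, you can generate a small CSV first; a minimal sketch using only Python's standard library (the file name and the `Metric_A_Value` column are chosen to match the query example further down — the rows themselves are invented):

```python
import csv

# Write a tiny CSV to experiment with; the column name matches
# the Metric_A_Value used in the --sql example below.
rows = [
    {"id": "a", "Metric_A_Value": 91},
    {"id": "b", "Metric_A_Value": 42},
    {"id": "c", "Metric_A_Value": 85},
]
with open("data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "Metric_A_Value"])
    writer.writeheader()
    writer.writerows(rows)
```

`tab view data.csv` then renders the three rows as a table.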
```
tab view data.csv
tab view 's3://aws-public-blockchain/v1.0/btc/blocks/date=2026-01-*/*'
```

The table is always available as `t`:
```
tab view --sql 'SELECT * FROM t WHERE Metric_A_Value > 80' data.csv
tab view --jp '{id: participant.id, city: profile.address.city}' data.parquet
```

```
tab convert data.csv data.parquet
tab convert data.parquet data.jsonl -o jsonl
tab convert data.csv output_dir/ -o parquet -n 4  # partitioned
```

```
tab cat part1.csv part2.csv part3.csv -o jsonl > combined.jsonl
```

```
tab schema data.parquet
```
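The `--sql` behavior can be pictured as loading the input file into an in-memory table registered under the name `t` and running the query against it. A rough stand-in using Python's `sqlite3` (tab-cli's actual query engine may differ, and the sample rows are invented):

```python
import sqlite3

# Stand in for the loaded file: a table registered as "t".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id TEXT, Metric_A_Value INTEGER)")
con.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [("a", 91), ("b", 42), ("c", 85)],  # invented sample rows
)
# Same query as the --sql example above.
result = con.execute(
    "SELECT * FROM t WHERE Metric_A_Value > 80"
).fetchall()
print(result)  # [('a', 91), ('c', 85)]
```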
```
tab summary data.parquet
```

```
curl -s https://example.com/data.csv | tab view -i csv -
```

```
tab view s3://bucket/path/data.parquet
tab view gs://bucket/path/data.csv
tab view az://container/path/data.jsonl
```

Globbing is supported for local and cloud paths:
```
tab view 'data/date=*/*.parquet'
tab view 's3://bucket/path/date=2026-01-*/*'
```

Use `{url}#{table_name}` for SQLite inputs:
```
tab view data.db#users
tab view s3://bucket/path/data.db#users
```

Use `{url}#{table_name}` for DuckDB inputs:
```
tab view data.duckdb#users
tab view s3://bucket/path/data.duckdb#users
```

Install cloud extras as needed:
```
pip install 'tab-cli[s3]'      # AWS S3
pip install 'tab-cli[gs]'      # Google Cloud Storage
pip install 'tab-cli[az]'      # Azure Blob Storage
pip install 'tab-cli[duckdb]'  # DuckDB input support
pip install 'tab-cli[sqlite]'  # SQLite via Polars ADBC
pip install 'tab-cli[all]'     # Install all optional integrations
```

Supported formats:

- csv
- tsv
- jsonl
- parquet
- avro
- duckdb (input only; use `{url}#{table_name}`)
- sqlite (input only; use `{url}#{table_name}`)