Last Updated: 06/21/2024

Background

This repo contains files for a data frames benchmark. Currently, the data frame packages tested include R data.table, Python Polars, R DuckDB, Python Pandas, and R Collapse.

All of the packages are installed as recommended. I'm using Windows 10. If anyone wants to run these on macOS or Linux, please share your results and I will display them. Lastly, I'm running everything locally, not in the cloud.

The datasets replicate a real-world example of a beverage company's data at 1M, 10M, 100M, and 1B records. Each dataset includes a date variable, four group variables, and four numeric variables. For each dataset, the benchmark first runs using the Date variable, then adds additional group variables, and then repeats those runs with additional numeric variables. A rough sketch of how a dataset of this shape can be generated appears below.
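
The actual datasets come from FakeBevDataBuilds.R (see the steps further down); as a minimal Python/Polars sketch only, here is how a dataset of this shape could be generated. The level counts follow this README (13 brands, 6 categories, 21 flavors, 99 customers at 1M rows), but the column names, value ranges, and output file name are assumptions for illustration.

```python
import numpy as np
import polars as pl

rng = np.random.default_rng(42)
n = 1_000_000  # bump to 10M / 100M / 1B for the larger datasets

# Hypothetical stand-in for FakeBevDataBuilds.R; the R script is the
# source of truth. Level counts match the README, everything else is
# made up for illustration.
df = pl.DataFrame({
    "Day":      rng.integers(0, 365, n),
    "Brand":    rng.integers(0, 13, n),
    "Category": rng.integers(0, 6, n),
    "Flavor":   rng.integers(0, 21, n),
    "Customer": rng.integers(0, 99, n),
    "N1": rng.normal(100.0, 20.0, n),
    "N2": rng.normal(50.0, 10.0, n),
    "N3": rng.normal(10.0, 2.0, n),
    "N4": rng.normal(5.0, 1.0, n),
}).with_columns(
    # Convert the integer day offset into a date column.
    (pl.date(2023, 1, 1) + pl.duration(days=pl.col("Day"))).alias("Date")
).drop("Day")

df.write_csv("FakeBevData_1M.csv")  # output name/format is an assumption
```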

Last but not least, I run the R scripts in RStudio and the Python scripts in VS Code.


Current Frameworks Tested

  • R data.table: v1.15.99
  • R Collapse: v2.0.15
  • R DuckDB: v1.0.0
  • Python Polars: v1.0.0
  • Python Pandas: v2.2.2

Current Operations

  • Aggregation
  • Melt
  • Cast
  • Windowing (lags)
  • Union
  • Left Join
  • Inner Join
  • Filter

Dataset Attributes

Common attributes across datasets:

  • Brand: 13 levels
  • Category: 6 levels
  • Beverage Flavor: 21 levels
  • Four numeric variables
  • One Date Variable

1M Rows Data

  • Customer: 99 levels

10M Rows Data

  • Customer: 1071 levels

100M Rows Data

  • Customer: 10793 levels

1B Rows Data

  • Customer: 108017 levels

Machine Specs

  • Windows 10 OS
  • Memory: 256GB
  • CPU: 32 cores / 64 threads
  • AMD Ryzen CPU

Replicate Benchmarks

Aggregation Sum

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run AggSum_datatable.R
  • Run AggSum_DuckDB.R
  • Run AggSum_Polars.py
  • Run AggSum_Pandas.py
  • Run AggSum_collapse.R
  • Run CombineResults_AggSum
  • Done!
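
The repo's scripts above are what is actually timed; purely as a rough illustration, a sum aggregation in Polars looks like the sketch below, with column names assumed from the dataset description.

```python
import polars as pl

# Toy stand-in for the beverage data; names are assumptions.
df = pl.DataFrame({
    "Date":  ["2024-01-01", "2024-01-01", "2024-01-02"],
    "Brand": ["A", "B", "A"],
    "N1":    [10.0, 5.0, 7.0],
    "N2":    [1.0, 2.0, 3.0],
}).with_columns(pl.col("Date").str.to_date())

# Sum the numeric variables by Date plus one group variable.
out = df.group_by("Date", "Brand").agg(pl.col("N1").sum(), pl.col("N2").sum())
print(out)
```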

Melt

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run Melt_datatable.R
  • Run Melt_DuckDB.R
  • Run Melt_Polars.py
  • Run Melt_Pandas.py
  • Run Melt_collapse.R
  • Run CombineResults_Melt
  • Done!
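
As a rough illustration of the melt step (which Polars 1.x calls unpivot), a minimal sketch with assumed column names:

```python
import polars as pl

df = pl.DataFrame({
    "Date":  ["2024-01-01", "2024-01-02"],
    "Brand": ["A", "B"],
    "N1":    [10.0, 7.0],
    "N2":    [1.0, 3.0],
})

# Wide -> long: one row per (id columns, variable, value).
long = df.unpivot(on=["N1", "N2"], index=["Date", "Brand"])
print(long)
```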

Cast

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run Cast_datatable.R
  • Run Cast_DuckDB.R
  • Run Cast_Polars.py
  • Run Cast_Pandas.py
  • Run Cast_collapse.R
  • Run CombineResults_Cast
  • Done!
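
A minimal sketch of the cast (long-to-wide pivot) in Polars, again with assumed names:

```python
import polars as pl

long = pl.DataFrame({
    "Date":     ["2024-01-01", "2024-01-01", "2024-01-02"],
    "variable": ["N1", "N2", "N1"],
    "value":    [10.0, 1.0, 7.0],
})

# Long -> wide: one column per level of "variable".
wide = long.pivot(on="variable", index="Date", values="value",
                  aggregate_function="sum")
print(wide)
```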

Windowing (lags)

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run Lags_datatable.R
  • Run Lags_DuckDB.R
  • Run Lags_Polars.py
  • Run Lags_Pandas.py
  • Run Lags_collapse.R
  • Run CombineResults_Lags
  • Done!
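
The windowing step computes lags within groups. A minimal Polars sketch, assuming rows are ordered by Date and lagged within Brand:

```python
import polars as pl

df = pl.DataFrame({
    "Date":  ["2024-01-01", "2024-01-02", "2024-01-01", "2024-01-02"],
    "Brand": ["A", "A", "B", "B"],
    "N1":    [10.0, 7.0, 5.0, 6.0],
}).with_columns(pl.col("Date").str.to_date())

# Lag N1 by one row within each Brand, ordered by Date.
out = df.sort("Date").with_columns(
    pl.col("N1").shift(1).over("Brand").alias("N1_lag1")
)
print(out)
```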

Union

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run Union_datatable.R
  • Run Union_DuckDB.R
  • Run Union_Polars.py
  • Run Union_Pandas.py
  • Run CombineResults_Union
  • Done!
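
The union is a vertical stack of two frames with the same schema; a minimal Polars sketch:

```python
import polars as pl

a = pl.DataFrame({"Brand": ["A"], "N1": [1.0]})
b = pl.DataFrame({"Brand": ["B"], "N1": [2.0]})

# Stack the two frames vertically (a SQL UNION ALL).
out = pl.concat([a, b])
print(out)
```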

Left Join

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run LeftJoin_datatable.R
  • Run LeftJoin_collapse.R
  • Run LeftJoin_DuckDB.R
  • Run LeftJoin_Polars.py
  • Run LeftJoin_Pandas.py
  • Run CombineResults_LeftJoin
  • Done!
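
A minimal Polars sketch of a left join against a hypothetical dimension table (the real key columns live in the scripts):

```python
import polars as pl

facts = pl.DataFrame({"Brand": ["A", "B", "C"], "N1": [1.0, 2.0, 3.0]})
dims  = pl.DataFrame({"Brand": ["A", "B"], "Category": ["Soda", "Juice"]})

# Keep every row of facts; unmatched Brands get a null Category.
out = facts.join(dims, on="Brand", how="left")
print(out)
```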

Inner Join

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run InnerJoin_datatable.R
  • Run InnerJoin_collapse.R
  • Run InnerJoin_DuckDB.R
  • Run InnerJoin_Polars.py
  • Run InnerJoin_Pandas.py
  • Run CombineResults_InnerJoin
  • Done!

Filter

Click here to see steps
  • Fork the repo and clone it to your local machine
  • Modify the Path variable at the top of each script to reflect your file location
  • Run FakeBevDataBuilds.R to generate the benchmarking datasets
  • Run Filter_datatable.R
  • Run Filter_collapse.R
  • Run Filter_DuckDB.R
  • Run Filter_Polars.py
  • Run Filter_Pandas.py
  • Run CombineResults_Filter
  • Done!
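
A minimal Polars sketch of filtering on the date and a group variable (cutoff values are made up):

```python
from datetime import date
import polars as pl

df = pl.DataFrame({
    "Date":  [date(2024, 1, 1), date(2024, 1, 2)],
    "Brand": ["A", "B"],
    "N1":    [10.0, 7.0],
})

# Keep rows matching a date cutoff and a group level.
out = df.filter((pl.col("Date") >= date(2024, 1, 2)) & (pl.col("Brand") == "B"))
print(out)
```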

Benchmark Results

In the plots below, each label on the x-axis ("Experiments") is a sequence of numbers, each followed by one of four letters. The letters mean:

  • M: millions of rows
  • N: number of numeric variables
  • D: number of date variables
  • G: number of additional group variables

For example, an experiment labeled with 10M, 1D, 2G, and 2N is a 10-million-row run using one date variable, two additional group variables, and two numeric variables.

Sum Aggregation

Total Run Time

Click here to see detailed results





Melt

Total Run Time

Click here to see detailed results



With DuckDB


Without DuckDB


Cast

Total Run Time

Click here to see detailed results




Windowing (lags)

Total Run Time

Click here to see detailed results


With DuckDB


Without DuckDB




Union

Total Run Time

Click here to see detailed results




Left Join

Total Run Time

Click here to see detailed results




Inner Join

Total Run Time

Click here to see detailed results




Filter

Total Run Time

Click here to see detailed results