Most analysis tools, such as Apache Spark and Apache Drill, support Parquet out of the box. Data crunching is much faster and more efficient than with SQL Server or PostgreSQL, thanks to the format's columnar design and the sheer number of channels a SCADA system records.
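To give a concrete (hypothetical) example of that out-of-the-box support: reading a directory of Parquet files in PySpark takes a couple of lines. The archive path and the channel/value column names below are only assumptions for illustration, not the actual archive layout:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Local Spark session for a quick look at the archive.
spark = SparkSession.builder.appName("scada-parquet-demo").getOrCreate()

# Spark reads Parquet natively and treats year=/month=/day= directories
# as partition columns automatically.
df = spark.read.parquet("archive/")

# Average value per channel; with a columnar format only the referenced
# columns are actually read from disk.
df.groupBy("channel").agg(F.avg("value").alias("avg_value")).show()
```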
I use a Python script to convert the .dat archives from Scada 5 into Parquet; the result is roughly 1/2 to 1/10 of the original size, so it is very compact. Analysis tools that work with Parquet also usually support partitioning by directory names, which makes the archives easy to organize.
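For the conversion itself, here is a minimal sketch of the Parquet-writing side. The `read_dat_rows()` helper is hypothetical (the Scada 5 .dat parsing is omitted) and the column names are assumptions; the point is the partitioned write with pyarrow:

```python
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

def read_dat_rows(path):
    """Hypothetical parser: return an iterable of dicts with timestamp,
    channel, value and status decoded from a Scada 5 .dat archive
    (the format-specific decoding is omitted here)."""
    raise NotImplementedError

def dat_to_parquet(dat_path, out_dir):
    # Collect the decoded rows into a DataFrame.
    df = pd.DataFrame(list(read_dat_rows(dat_path)))

    # Derive date columns so files land in year=/month=/day= directories,
    # which Spark, Drill and friends pick up as partitions automatically.
    ts = pd.to_datetime(df["timestamp"])
    df["year"], df["month"], df["day"] = ts.dt.year, ts.dt.month, ts.dt.day

    # Columnar layout plus the default Snappy compression is what gives
    # the rough 1/2 to 1/10 size reduction versus the raw .dat archive.
    pq.write_to_dataset(
        pa.Table.from_pandas(df, preserve_index=False),
        root_path=out_dir,
        partition_cols=["year", "month", "day"],
    )
```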
I think a good path would be to first support exporting archives to Parquet, and then, once you feel comfortable with it, move to Parquet as the native archive format. Even putting the fancy tools aside, you free up roughly 1/2 to 3/5 of the storage space, and data loading is more efficient as well.
Hello,
Creating a module that supports the Apache Parquet format for storing historical archives looks promising. Are you planning to implement such a module for Rapid SCADA? If so, we can add it to https://rapidscada.net/store/