Read an Excel document using the Spark2 datasource API
This is a Spark2 datasource application demonstrating some of the capabilities of the hadoopoffice library. It takes as input a set of Excel files. As output it prints the number of rows, the schema and the content of the Excel cells (including comments, formulas and addresses). It has been successfully tested with the HDP Sandbox VM 2.5, but other Hadoop distributions should work equally well, as long as they support Spark 2.
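To give an impression of what the application does internally, here is a minimal sketch of reading Excel files through the Spark2 datasource. The format name org.zuinnote.spark.office.excel and the option read.locale.bcp47 are assumptions based on the hadoopoffice documentation; check the documentation of the version you use for the exact names and available options.

```scala
import org.apache.spark.sql.SparkSession

object SparkScalaExcelInSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("excel-in-sketch").getOrCreate()

    // Read all Excel files from the input directory passed as args(0).
    // Format name and locale option are assumptions; verify them against
    // the hadoopoffice documentation for your version.
    val df = spark.read
      .format("org.zuinnote.spark.office.excel")
      .option("read.locale.bcp47", "en-US")
      .load(args(0))

    println(s"Number of rows: ${df.count()}")
    df.printSchema()   // schema of the parsed cells
    df.show(false)     // cell content, including comments, formulas and addresses

    spark.stop()
  }
}
```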
You can create an Excel file yourself in LibreOffice or Microsoft Excel. Alternatively, you can download an Excel file that is used for unit testing of the hadoopoffice library by executing the following command:
wget --no-check-certificate https://github.com/ZuInnoTe/hadoopoffice/raw/master/fileformat/src/test/resources/excel2013test.xlsx
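If you prefer to generate a small test file programmatically instead, a rough sketch using Apache POI (the library hadoopoffice builds on) could look like the following; the file name and cell contents are arbitrary examples.

```scala
import java.io.FileOutputStream
import org.apache.poi.xssf.usermodel.XSSFWorkbook

object CreateTestExcel {
  def main(args: Array[String]): Unit = {
    val workbook = new XSSFWorkbook()
    val sheet = workbook.createSheet("Sheet1")

    // A couple of example cells, including a simple formula
    val row0 = sheet.createRow(0)
    row0.createCell(0).setCellValue("test1")
    row0.createCell(1).setCellValue(1.0)
    sheet.createRow(1).createCell(0).setCellFormula("B1*2")

    // Write the workbook to a local .xlsx file
    val out = new FileOutputStream("mytest.xlsx")
    try workbook.write(out) finally {
      out.close()
      workbook.close()
    }
  }
}
```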
You can put it on your HDFS cluster by executing the following commands:
hadoop fs -mkdir -p /user/spark/office/excel/input
hadoop fs -put ./excel2013test.xlsx /user/spark/office/excel/input
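Alternatively, the upload can be scripted via the Hadoop FileSystem API; the following is only an illustrative sketch equivalent to the shell commands above.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object UploadTestExcel {
  def main(args: Array[String]): Unit = {
    // Picks up the cluster configuration (core-site.xml etc.) from the classpath
    val fs = FileSystem.get(new Configuration())

    val inputDir = new Path("/user/spark/office/excel/input")
    fs.mkdirs(inputDir)

    // Copy the downloaded test file from the local file system into HDFS
    fs.copyFromLocalFile(new Path("./excel2013test.xlsx"), inputDir)
    fs.close()
  }
}
```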
After it has been copied, you are ready to use the example.
Note that the datasource is available on Maven Central and Spark-packages.
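If you want to use the datasource in your own build instead of this example project, you would add it as a dependency. The coordinates below are an assumption (shown here for an sbt build); verify the group id, the artifact name including the Scala version suffix, and the latest version on Maven Central before using them.

```scala
// build.sbt sketch: assumed coordinates of the Spark2 datasource.
// Check Maven Central for the exact artifact name and the latest version.
libraryDependencies += "com.github.zuinnote" %% "spark-hadoopoffice-ds" % "1.1.1"
```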
To obtain the example sources, execute:
git clone https://github.com/ZuInnoTe/hadoopoffice.git hadoopoffice
You can build the application by changing to the directory hadoopoffice/examples/scala2-spark-excel-in-ds and using the following command:
gradle clean build
Execute the following command (make sure that you use the spark-submit of Spark2):
spark-submit --class org.zuinnote.spark.office.example.excel.SparkScalaExcelInDataSource ./example-ho-spark-scala-ds-excelin.jar /user/spark/office/excel/input
After the Spark2 job has completed, the output shows the number of rows in your Excel files, the schema and the content of the Excel file, including formulas, comments, the cell addresses in A1 format and the sheet name of each cell.
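To make that output more concrete, each row produced by the datasource contains an array of cell structs. The sketch below flattens these arrays and selects the individual cell attributes; the column name rows and the field names formattedValue, comment, formula, address and sheetName are assumptions, so compare them with the schema the job actually prints.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.explode

object InspectExcelCells {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("excel-cell-inspect").getOrCreate()

    val df = spark.read
      .format("org.zuinnote.spark.office.excel")
      .option("read.locale.bcp47", "en-US")
      .load(args(0))

    // Explode the per-row cell arrays and select the individual attributes.
    // Column and field names are assumptions; compare them with df.printSchema().
    val cells = df.select(explode(df("rows")).alias("cell"))
    cells.select("cell.formattedValue", "cell.comment", "cell.formula",
      "cell.address", "cell.sheetName").show(false)

    spark.stop()
  }
}
```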