Commit

rerun example
Signed-off-by: Nitish Bharambe <[email protected]>
nitbharambe committed Oct 23, 2024
1 parent 198177b commit 8058be6
Showing 1 changed file with 9 additions and 12 deletions.
docs/examples/arrow_example.ipynb
@@ -117,7 +117,7 @@
"metadata": {},
"source": [
"The primitive types of each attribute in the arrow tables need to match to make the operation efficient.\n",
"Zero-copy conversion is not guaranteed if the data types provided by the PGM via `power_grid_meta_data` are not used.\n",
"Zero-copy conversion is not guaranteed if the data types provided via the PGM via `power_grid_meta_data` are not used.\n",
"Note that asymmetric attributes in power-grid-model have a shape of `(3,)` along with a specific type; the three entries represent the three phases of the electrical system.\n",
"Hence, special care is required when handling asymmetric attributes.\n",
"\n",
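As a minimal sketch of the `(3,)` remark above, an asymmetric component can be modeled with a NumPy structured dtype whose per-phase attributes are subarrays (the field list here is hypothetical; in practice the authoritative dtype comes from `power_grid_meta_data`):

```python
import numpy as np

# Hypothetical dtype sketch: asymmetric attributes such as p_specified
# carry one value per phase, hence the subarray shape (3,).
# In practice the authoritative dtype comes from power_grid_meta_data.
asym_load_dtype = np.dtype(
    [
        ("id", "i4"),
        ("node", "i4"),
        ("status", "i1"),
        ("p_specified", "f8", (3,)),
        ("q_specified", "f8", (3,)),
    ]
)

loads = np.zeros(2, dtype=asym_load_dtype)
loads["p_specified"] = [[1.0e6, 1.1e6, 0.9e6], [2.0e6, 2.1e6, 1.9e6]]
print(loads["p_specified"].shape)  # (2, 3): 2 loads x 3 phases
```

This is why a flat one-column-per-attribute Arrow layout does not map one-to-one onto asymmetric attributes: each such column is logically `n x 3`.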
@@ -143,10 +143,10 @@
"name": "stdout",
"output_type": "stream",
"text": [
"-------node schema-------\n",
"-------node scehma-------\n",
"id: int32\n",
"u_rated: double\n",
"-------asym load schema-------\n",
"-------asym load scehma-------\n",
"id: int32\n",
"node: int32\n",
"status: int8\n",
@@ -173,9 +173,9 @@
" return pa.schema(schemas)\n",
"\n",
"\n",
"print(\"-------node schema-------\")\n",
"print(\"-------node scehma-------\")\n",
"print(pgm_schema(DatasetType.input, ComponentType.node))\n",
"print(\"-------asym load schema-------\")\n",
"print(\"-------asym load scehma-------\")\n",
"print(pgm_schema(DatasetType.input, ComponentType.asym_load))"
]
},
@@ -188,12 +188,12 @@
"The [power-grid-model documentation on Components](https://power-grid-model.readthedocs.io/en/stable/user_manual/components.html) explains which components are required and which are optional.\n",
"\n",
"Construct the Arrow data as a table with the correct headers and data types. \n",
"The creation and initialization of arrays and combining the data in a RecordBatch is up to the user."
"The creation of arrays and combining it in a RecordBatch as well as the method of initializing that RecordBatch is up to the user."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -213,7 +213,6 @@
}
],
"source": [
"# create the individual columns with the correct data type\n",
"nodes_schema = pgm_schema(DatasetType.input, ComponentType.node)\n",
"nodes = pa.record_batch(\n",
" [\n",
@@ -223,7 +222,6 @@
" names=(\"id\", \"u_rated\"),\n",
")\n",
"\n",
"# or convert directly using the schema\n"
"lines = pa.record_batch(\n",
" {\n",
" \"id\": [4, 5],\n",
@@ -369,7 +367,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -792,7 +790,6 @@
" data: SingleColumnarData, dataset_type: DatasetType, component_type: ComponentType\n",
") -> pa.RecordBatch:\n",
" \"\"\"Convert NumPy data to Arrow data.\"\"\"\n",
" # pa.record_batch.from_arrays(data, schema=pgm_schema(DatasetType.result, ComponentType.node))\n",
" component_pgm_schema = pgm_schema(dataset_type, component_type, data.keys())\n",
" pa_columns = {}\n",
" for attribute, data in data.items():\n",
@@ -820,7 +817,7 @@
{
"data": {
"text/plain": [
"<pyarrow.lib.DoubleArray object at 0x000001A81FF94A00>\n",
"<pyarrow.lib.DoubleArray object at 0x00000184F527A680>\n",
"[\n",
" 1,\n",
" 0.01,\n",
