
Powerwall Dashboard


Grafana Dashboard


Instructions to set up a Grafana stack for this dashboard are located here: https://github.com/jasonacox/powerwall_monitor (this is a fork of mihailescu2m/powerwall_monitor that has been configured to use pypowerwall's caching proxy).

Installation

This stack uses the following tools, which run as Docker Compose containers:

  • Telegraf to poll the data and feed it into InfluxDB
  • pyPowerwall Cache to pull data from the Tesla Powerwall
  • InfluxDB to store the polled data
  • Grafana to render the dashboard
  1. Run this command to pull down the dashboard setup files:
# Pull the Powerwall Dashboard Setup files
git clone https://github.com/jasonacox/powerwall_monitor.git
  2. Edit the Docker Compose configuration:
nano powerwall.yml  # or vim

Look for the section under pypowerwall and update the following details for your Powerwall (an optional credential check follows this step):

            PW_PASSWORD: "password"
            PW_EMAIL: "[email protected]"
            PW_HOST: "192.168.91.1"
            PW_TIMEZONE: "Australia/Adelaide"
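
If you want to sanity-check those credentials before launching the stack, you can query the Powerwall directly with the pypowerwall library. This is an optional sketch, not part of the original steps; it assumes pypowerwall is installed where you run it (pip install pypowerwall), and the values below are placeholders to replace with your own:

# Optional sanity check of the Powerwall credentials (run outside the stack).
# Requires pypowerwall on this machine: pip install pypowerwall
# Replace the placeholder values with the ones you put in powerwall.yml.
import pypowerwall

pw = pypowerwall.Powerwall("192.168.91.1", "password", "[email protected]",
                           "Australia/Adelaide")

print("Connected to:", pw.site_name())    # Powerwall site name
print("Solar power (W):", pw.solar())     # current solar production
print("Battery level (%):", pw.level())   # current charge percentage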
  3. Launch the stack of Docker containers (an optional proxy check follows the command):
docker-compose -f powerwall.yml up -d
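
Once the containers are up, you can confirm that the pyPowerwall proxy is answering before moving on. This sketch makes two assumptions: that the proxy is published on port 8675 (the pypowerwall proxy default) and that it serves an /aggregates endpoint; adjust to match your compose file if needed.

# Optional check that the pyPowerwall proxy container is serving data.
# Assumes port 8675 (pypowerwall proxy default) and the /aggregates endpoint.
# Requires the requests package: pip install requests
import requests

resp = requests.get("http://localhost:8675/aggregates", timeout=5)
resp.raise_for_status()
data = resp.json()

print("Home load (W):", data["load"]["instant_power"])
print("Solar (W):", data["solar"]["instant_power"])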
  4. Update the InfluxDB schema for the dashboard:
docker exec -it influxdb influx
  • At the database prompt, you will need to enter (copy/paste) the following commands after you adjust the timezone (tz) as appropriate:
     USE powerwall
     CREATE RETENTION POLICY raw ON powerwall duration 3d replication 1
     ALTER RETENTION POLICY autogen ON powerwall duration 365d
     CREATE RETENTION POLICY kwh ON powerwall duration INF replication 1
     CREATE RETENTION POLICY daily ON powerwall duration INF replication 1
     CREATE RETENTION POLICY monthly ON powerwall duration INF replication 1
     CREATE CONTINUOUS QUERY cq_autogen ON powerwall BEGIN SELECT mean(home) AS home, mean(solar) AS solar, mean(from_pw) AS from_pw, mean(to_pw) AS to_pw, mean(from_grid) AS from_grid, mean(to_grid) AS to_grid, last(percentage) AS percentage INTO powerwall.autogen.:MEASUREMENT FROM (SELECT load_instant_power AS home, solar_instant_power AS solar, abs((1+battery_instant_power/abs(battery_instant_power))*battery_instant_power/2) AS from_pw, abs((1-battery_instant_power/abs(battery_instant_power))*battery_instant_power/2) AS to_pw, abs((1+site_instant_power/abs(site_instant_power))*site_instant_power/2) AS from_grid, abs((1-site_instant_power/abs(site_instant_power))*site_instant_power/2) AS to_grid, percentage FROM raw.http) GROUP BY time(1m), month, year fill(linear) END
     CREATE CONTINUOUS QUERY cq_kwh ON powerwall RESAMPLE EVERY 1m BEGIN SELECT integral(home)/1000/3600 AS home, integral(solar)/1000/3600 AS solar, integral(from_pw)/1000/3600 AS from_pw, integral(to_pw)/1000/3600 AS to_pw, integral(from_grid)/1000/3600 AS from_grid, integral(to_grid)/1000/3600 AS to_grid INTO powerwall.kwh.:MEASUREMENT FROM autogen.http GROUP BY time(1h), month, year tz('Australia/Adelaide') END
     CREATE CONTINUOUS QUERY cq_daily ON powerwall RESAMPLE EVERY 1h BEGIN SELECT sum(home) AS home, sum(solar) AS solar, sum(from_pw) AS from_pw, sum(to_pw) AS to_pw, sum(from_grid) AS from_grid, sum(to_grid) AS to_grid INTO powerwall.daily.:MEASUREMENT FROM powerwall.kwh.http GROUP BY time(1d), month, year tz('Australia/Adelaide') END 
     CREATE CONTINUOUS QUERY cq_monthly ON powerwall RESAMPLE EVERY 1h BEGIN SELECT sum(home) AS home, sum(solar) AS solar, sum(from_pw) AS from_pw, sum(to_pw) AS to_pw, sum(from_grid) AS from_grid, sum(to_grid) AS to_grid INTO powerwall.monthly.:MEASUREMENT FROM powerwall.daily.http GROUP BY time(365d), month, year END

Note: the database queries are set to use Australia/Adelaide as the timezone. Edit the database commands above to replace Australia/Adelaide with your own timezone.
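
The from_pw/to_pw and from_grid/to_grid expressions in cq_autogen look involved, but each one simply splits a signed meter reading into separate import and export series. A minimal illustration of the same arithmetic in Python (not part of the setup):

# Illustration of the sign-split used in cq_autogen: for a signed reading x,
# abs((1 + x/abs(x)) * x / 2) equals max(x, 0) and
# abs((1 - x/abs(x)) * x / 2) equals max(-x, 0).
def split_power(x):
    """Split a signed power reading (W) into (positive_flow, negative_flow)."""
    return max(x, 0), max(-x, 0)

# battery_instant_power: positive = discharging, negative = charging
from_pw, to_pw = split_power(-1500)     # charging at 1.5 kW -> (0, 1500)

# site_instant_power: positive = importing from grid, negative = exporting
from_grid, to_grid = split_power(800)   # importing 800 W    -> (800, 0)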

  5. Grafana Setup
  • Open Grafana in a browser at http://<server ip>:9000 and log in with admin/admin
  • From Configuration\Data Sources, add an InfluxDB data source with:
    • name: InfluxDB
    • url: http://influxdb:8086
    • database: powerwall
    • min time interval: 5s
  • From Configuration\Data Sources, add a Sun and Moon data source with:
    • name: Sun and Moon
    • your latitude and longitude
  • Edit dashboard.json to replace Australia/Adelaide with your own timezone.
  • From Dashboard\Manage, select Import, and upload dashboard.json
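
At this point you can also confirm that Powerwall data is actually landing in InfluxDB, which is useful if the dashboard panels come up empty. A minimal sketch using the influxdb Python client (any InfluxQL client works equally well); it assumes the client is installed (pip install influxdb) and that port 8086 is reachable from the machine you run it on:

# Optional check that raw Powerwall samples are arriving in InfluxDB.
# Assumes the influxdb Python client (pip install influxdb) and that
# port 8086 is reachable from this machine.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="powerwall")

# The raw retention policy holds the most recent samples (3 day duration).
result = client.query(
    'SELECT load_instant_power, solar_instant_power '
    'FROM "powerwall"."raw"."http" ORDER BY time DESC LIMIT 5'
)
for point in result.get_points():
    print(point)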

Notes

  • The database queries and dashboard are set to use Australia/Adelaide as the timezone. Remember to edit the database commands and the dashboard.json file to replace Australia/Adelaide with your own timezone.

  • InfluxDB does not run reliably on older models of Raspberry Pi, resulting in the Docker container terminating with error 139.

Splunk Dashboard

You can push Powerwall data into a tiny installation of Splunk (see TinySplunk) and get graphs like this:


Here is an example script to poll the Powerwall data and push to Splunk:

import pypowerwall
from splunk_http_event_collector import http_event_collector
import logging
import sys

# Credentials for your Powerwall - Customer Login Data
password='password'
email='[email protected]'
host = "localhost"                # Change to the IP of your Powerwall
timezone = "America/Los_Angeles"  # Change to your local timezone/tz

# Connect to Powerwall
pw = pypowerwall.Powerwall(host,password,email,timezone)

# Solar Metrics
solar = {}

# Solar Power
solar['solar'] = pw.solar()
solar['grid'] = pw.site()
solar['battery'] = pw.battery()
solar['house'] = pw.load()
solar['battery_level'] = pw.level()

# Solar Strings
strings = pw.strings(jsonformat=False)
for k, v in strings.items():
    solar.update({k: v})

logging.basicConfig(format='%(asctime)s %(name)s %(levelname)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S %z')

# Update for your Splunk Instance
http_event_collector_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
http_event_collector_host = "10.1.1.101"

# Setup Splunk HEC Connector
splunk = http_event_collector(http_event_collector_key, http_event_collector_host)
splunk.log.setLevel(logging.ERROR)
    
# perform a HEC reachable check
hec_reachable = splunk.check_connectivity()
if not hec_reachable:
    print("ERROR: Splunk HEC unreachable.")
    sys.exit(1)

try:
    # Build payload with metadata information
    payload = {}
    payload.update({"index":"main"})
    payload.update({"sourcetype":"powerwall"})
    payload.update({"source":"http-stream"})
    payload.update({"host":"pypowerwall"})
    payload.update({"event":solar})
    splunk.sendEvent(payload)
    
    splunk.flushBatch()

except Exception as e:
    print(f"ERROR: Unable to send payload to Splunk: {e}")

I created dashboard graphs with Splunk queries like this:

* sourcetype="powerwall" | timechart span=5m avg(A.Power) as "A" avg(B.Power) as "B" avg(C.Power) as "C" avg(D.Power) as "D"
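
The A.Power/B.Power fields in that query come from the pw.strings() data merged into the event above; each string key (A, B, C, ...) carries fields such as Power. A quick way to see what your own Powerwall reports is the sketch below, which reuses the same connection settings as the script (placeholder values here):

# Inspect the per-string data that gets merged into the Splunk event.
# Reuses the same credentials/host placeholders as the script above.
import json
import pypowerwall

pw = pypowerwall.Powerwall("localhost", "password", "[email protected]",
                           "America/Los_Angeles")

# jsonformat=False (as in the script) returns a dict; True returns a JSON string
print(json.dumps(pw.strings(jsonformat=False), indent=2))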