
anthropic-hass


Important note: This integration has been accepted into Home Assistant Core. This repository will be archived when Home Assistant 2024.9.0 is released.

Home Assistant custom component for Anthropic Claude conversation agent

Legal notice: This integration uses the official Anthropic API. Please note that the Anthropic API is intended for B2B rather than individual use (more info here); this integration is therefore intended for commercial use.

The Anthropic integration adds a conversation agent to Home Assistant, powered by Anthropic models such as Claude 3.5 Sonnet.

Controlling Home Assistant is done by giving the AI access to the Assist API of Home Assistant. You can choose which devices and entities it can access from the Exposed Entities page. The AI can provide information about your devices and control them.

This integration does not integrate with sentence triggers.
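
Besides the Assist dialog and voice assistants, you can also script the agent through Home Assistant's REST API. Below is a minimal sketch assuming a local installation; the host, token, and `agent_id` values are placeholders for your own setup, and the exact response shape may vary between Home Assistant versions.

```python
"""Minimal sketch: send a command to the conversation agent over the
Home Assistant REST API. Host, token, and agent_id are placeholders."""
import requests

HASS_URL = "http://homeassistant.local:8123"   # your Home Assistant URL (assumption)
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"         # created under your Home Assistant user profile
AGENT_ID = "conversation.claude"               # entity id of the Anthropic agent (assumption)

resp = requests.post(
    f"{HASS_URL}/api/conversation/process",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"text": "Turn off the kitchen lights", "agent_id": AGENT_ID},
    timeout=30,
)
resp.raise_for_status()
# The agent's reply is nested under response -> speech -> plain -> speech
print(resp.json()["response"]["speech"]["plain"]["speech"])
```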

This integration requires an API key, which you can generate here. This is a paid service; we advise you to monitor your costs in the Anthropic portal closely.

Generate an API Key

The Anthropic API key is used to authenticate requests to the Anthropic API. To generate an API key, take the following steps:

  • Log in to the Anthropic portal or sign up for an account.
  • Enable billing with a valid credit card on the plans page.
  • Visit the API Keys page to retrieve the API key you'll use to configure the integration.
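
Before adding the key to Home Assistant, you may want to verify it works outside the integration. The following is a minimal sketch using the official `anthropic` Python SDK (`pip install anthropic`); the model id and token limit are only examples, so check Anthropic's model list for current values.

```python
"""Quick sanity check that a newly generated API key works."""
import os

import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",   # example model id
    max_tokens=64,
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
# A successful response confirms the key, billing, and model access are set up.
print(message.content[0].text)
```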

Configuration

| Parameter | Description |
| --- | --- |
| Instructions | Instructions for the AI on how it should respond to your requests, written using Home Assistant templating. |
| Control Home Assistant | Whether the model is allowed to interact with Home Assistant. It can only control or provide information about entities that are exposed to it. |
| Recommended settings | If enabled, the recommended model and settings are chosen. |

If you choose not to use the recommended settings, you can configure the following options:

| Parameter | Description |
| --- | --- |
| Model | The model that will complete your prompt. See models for additional details and options. |
| Maximum tokens to return in response | The maximum number of tokens to generate before stopping. Note that the model may stop before reaching this maximum; the value only sets an upper bound on the number of tokens generated. Different models have different maximum values for this parameter; see models for details. |
| Temperature | The amount of randomness injected into the response. Use a temperature closer to 0.0 for analytical or multiple-choice tasks, and closer to 1.0 for creative and generative tasks. Note that even with a temperature of 0.0, results will not be fully deterministic. |
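
If you disable the recommended settings, these options correspond roughly to parameters of an Anthropic messages request. The sketch below illustrates that mapping with example values; it is not the integration's actual code, and the model id and prompt shown are only examples.

```python
"""Rough illustration of how the advanced options map onto an Anthropic
messages request. Not the integration's actual code; values are examples."""
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",   # "Model" option (example id)
    max_tokens=1024,                      # "Maximum tokens to return in response"
    temperature=0.2,                      # "Temperature": low for analytical answers
    system="You are a voice assistant for Home Assistant.",  # rendered "Instructions" prompt
    messages=[{"role": "user", "content": "Is the front door locked?"}],
)
print(response.content[0].text)
```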
