There are several different kinds of contribution, each of which has its own standards for a speedy review. The following sections describe guidelines for each type of contribution.
- Documentation Update
- Enhancement/Bugfix to a Resource
- Adding Resource Import Support
- Adding Resource Name Generation Support
- Adding Resource Policy Support
- Adding Resource Tagging Support
- New Resource
- New Service
- New Region
## Documentation Update

The [Terraform AWS Provider's website source][website] is in this repository along with the code and tests. Below are some common items that will get flagged during documentation reviews:
- Reasoning for Change: Documentation updates should include an explanation for why the update is needed.
- Prefer AWS Documentation: Documentation about AWS service features and valid argument values that are likely to change over time should link to AWS service user guides and API references where possible.
- Large Example Configurations: Example Terraform configurations that include multiple resource definitions should be added to the repository `examples` directory instead of an individual resource documentation page. Each directory under `examples` should be self-contained so that `terraform apply` can be run without special configuration.
- Terraform Configuration Language Features: Individual resource documentation pages and examples should refrain from highlighting particular Terraform configuration language syntax workarounds or features such as `variable`, `local`, `count`, and built-in functions.
## Enhancement/Bugfix to a Resource

Working on existing resources is a great way to get started as a Terraform contributor because you can work within existing code and tests to get a feel for what to do.
In addition to the below checklist, please see the Common Review Items sections for more specific coding and testing guidelines.
- Acceptance test coverage of new behavior: Existing resources each have a set of [acceptance tests][acctests] covering their functionality. These tests should exercise all the behavior of the resource. Whether you are adding something or fixing a bug, the idea is to have an acceptance test that fails if your code were to be removed. Sometimes it is sufficient to "enhance" an existing test by adding an assertion or tweaking the config that is used, but it's often better to add a new test. You can copy/paste an existing test and follow the conventions you see there, modifying the test to exercise the behavior of your code.
- Documentation updates: If your code makes any changes that need to be documented, you should include those doc updates in the same PR. This includes things like new resource attributes or changes in default values. The [Terraform website][website] source is in this repo and includes instructions for getting a local copy of the site up and running if you'd like to preview your changes.
- Well-formed Code: Do your best to follow existing conventions you see in the codebase, and ensure your code is formatted with `go fmt`. The PR reviewers can help out on this front, and may provide comments with suggestions on how to improve the code.
- Dependency updates: Create a separate PR if you are updating dependencies. This is to avoid conflicts as version updates tend to be fast-moving targets. We will plan to merge the PR with this change first.
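For the acceptance test item above, "enhancing" an existing test often just means adding one more check to the test step that exercises your change. A minimal sketch of such a step, assuming a hypothetical `aws_example_thing` resource with an existing test (the config function, check helper, and `example_argument` attribute are all placeholder names, not real code in this provider):

```go
{
  Config: testAccExampleThingConfigUpdated(rName),
  Check: resource.ComposeTestCheckFunc(
    testAccCheckExampleThingExists(resourceName, &thing),
    // New assertion covering the changed behavior; if your code change were
    // reverted, this check should fail.
    resource.TestCheckResourceAttr(resourceName, "example_argument", "expected-value"),
  ),
},
```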
## Adding Resource Import Support

Adding import support for Terraform resources will allow existing infrastructure to be managed within Terraform. This type of enhancement generally requires a small to moderate amount of code changes.
Comprehensive code examples and information about resource import support can be found in the Extending Terraform documentation.
In addition to the below checklist and the items noted in the Extending Terraform documentation, please see the Common Review Items sections for more specific coding and testing guidelines.
- Resource Code Implementation: In the resource code (e.g. `aws/resource_aws_service_thing.go`), implementation of the `Importer` `State` function (see the sketch following this checklist)
- Resource Acceptance Testing Implementation: In the resource acceptance testing (e.g. `aws/resource_aws_service_thing_test.go`), implementation of `TestStep`s with `ImportState: true`
- Resource Documentation Implementation: In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), addition of an `Import` documentation section at the bottom of the page
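As a rough sketch of the first checklist item (the `aws_service_thing` names are the same placeholders used above, and the `helper/schema` import path varies by Terraform Plugin SDK version), most resources can wire up import support with the passthrough importer:

```go
import (
  "github.com/hashicorp/terraform-plugin-sdk/helper/schema" // import path varies by SDK version
)

func resourceAwsServiceThing() *schema.Resource {
  return &schema.Resource{
    Create: resourceAwsServiceThingCreate,
    Read:   resourceAwsServiceThingRead,
    Delete: resourceAwsServiceThingDelete,

    // The passthrough importer uses the import identifier as the resource ID
    // and relies on the Read function to populate the remaining attributes.
    Importer: &schema.ResourceImporter{
      State: schema.ImportStatePassthrough,
    },

    Schema: map[string]*schema.Schema{
      // ... arguments and attributes ...
    },
  }
}
```

The corresponding acceptance test then gains a `TestStep` with `ResourceName`, `ImportState: true`, and `ImportStateVerify: true`, as shown in the name generation test examples later in this guide.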
## Adding Resource Name Generation Support

Terraform AWS Provider resources can use shared logic to support and test name generation, where the operator can choose between an expected naming value, a generated naming value with a prefix, or a fully generated name.
Implementing name generation support for Terraform AWS Provider resources requires the following, each with its own section below:
- Resource Name Generation Code Implementation: In the resource code (e.g. `aws/resource_aws_service_thing.go`), implementation of a `name_prefix` attribute, along with handling in the `Create` function.
- Resource Name Generation Testing Implementation: In the resource acceptance testing (e.g. `aws/resource_aws_service_thing_test.go`), implementation of new acceptance test functions and configurations to exercise new naming logic.
- Resource Name Generation Documentation Implementation: In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), addition of the `name_prefix` argument and update of the `name` argument description.
### Resource Name Generation Code Implementation

- In the resource Go file (e.g. `aws/resource_aws_service_thing.go`), add the following Go import: `"github.com/terraform-providers/terraform-provider-aws/aws/internal/naming"`
- In the resource schema, add the new `name_prefix` attribute and adjust the `name` attribute to be `Optional`, `Computed`, and `ConflictsWith` the `name_prefix` attribute. Ensure that any existing schema fields on `name`, such as `ValidateFunc`, are kept. e.g.
"name": {
Type: schema.TypeString,
Optional: true,
Computed: true,
ForceNew: true,
ConflictsWith: []string{"name_prefix"},
},
"name_prefix": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
ConflictsWith: []string{"name"},
},
- In the resource `Create` function, switch any calls from `d.Get("name").(string)` to instead use the `naming.Generate()` function, e.g.

```go
name := naming.Generate(d.Get("name").(string), d.Get("name_prefix").(string))

// ... in AWS Go SDK Input types, etc. use aws.String(name)
```
- If the resource supports import, in the resource `Read` function add a call to `d.Set("name_prefix", ...)`, e.g.

```go
d.Set("name", resp.Name)
d.Set("name_prefix", aws.StringValue(naming.NamePrefixFromName(aws.StringValue(resp.Name))))
```
### Resource Name Generation Testing Implementation

- In the resource testing (e.g. `aws/resource_aws_service_thing_test.go`), add the following Go import: `"github.com/terraform-providers/terraform-provider-aws/aws/internal/naming"`
- In the resource testing, implement two new tests named `_Name_Generated` and `_NamePrefix` with associated configurations that verify creating the resource without `name` and `name_prefix` arguments (for the former) and with only the `name_prefix` argument (for the latter), e.g.
```go
func TestAccAWSServiceThing_Name_Generated(t *testing.T) {
  var thing service.ServiceThing
  resourceName := "aws_service_thing.test"

  resource.ParallelTest(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckAWSServiceThingDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccAWSServiceThingConfigNameGenerated(),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckAWSServiceThingExists(resourceName, &thing),
          naming.TestCheckResourceAttrNameGenerated(resourceName, "name"),
        ),
      },
      // If the resource supports import:
      {
        ResourceName:      resourceName,
        ImportState:       true,
        ImportStateVerify: true,
      },
    },
  })
}

func TestAccAWSServiceThing_NamePrefix(t *testing.T) {
  var thing service.ServiceThing
  resourceName := "aws_service_thing.test"

  resource.ParallelTest(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckAWSServiceThingDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccAWSServiceThingConfigNamePrefix("tf-acc-test-prefix-"),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckAWSServiceThingExists(resourceName, &thing),
          naming.TestCheckResourceAttrNameFromPrefix(resourceName, "name", "tf-acc-test-prefix-"),
        ),
      },
      // If the resource supports import:
      {
        ResourceName:      resourceName,
        ImportState:       true,
        ImportStateVerify: true,
      },
    },
  })
}

func testAccAWSServiceThingConfigNameGenerated() string {
  return fmt.Sprintf(`
resource "aws_service_thing" "test" {
  # ... other configuration ...
}
`)
}

func testAccAWSServiceThingConfigNamePrefix(namePrefix string) string {
  return fmt.Sprintf(`
resource "aws_service_thing" "test" {
  # ... other configuration ...

  name_prefix = %[1]q
}
`, namePrefix)
}
```
### Resource Name Generation Documentation Implementation

- In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), add the following to the arguments reference:

```markdown
* `name_prefix` - (Optional) Creates a unique name beginning with the specified prefix. Conflicts with `name`.
```

- Adjust the existing `name` argument reference so that it is denoted as `Optional`, mentions that it can be generated, and notes that it conflicts with `name_prefix`:

```markdown
* `name` - (Optional) Name of the thing. If omitted, Terraform will assign a random, unique name. Conflicts with `name_prefix`.
```
## Adding Resource Policy Support

Some AWS components support resource-based IAM policies to control permissions. When implementing this support in the Terraform AWS Provider, we typically prefer creating a separate resource, `aws_{SERVICE}_{THING}_policy` (e.g. `aws_s3_bucket_policy`), for a few reasons:
- Many of these policies require the Amazon Resource Name (ARN) of the resource in the policy itself. It is difficult to work around this requirement with custom difference handling within a self-contained resource.
- Sometimes policies for two resources need to cross-reference each other's ARNs within each policy. Without a separate resource, this introduces a configuration cycle.
- Splitting the resources allows operators to logically split their infrastructure on purely operational and security boundaries with separate configurations/modules.
- Splitting the resources prevents any separate policy API calls from needing to be permitted in the main resource in environments with restrictive IAM permissions, which can be undesirable.
See the New Resource section for more information about implementing the separate resource.
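For illustration only, here is a minimal sketch of what the schema of such a standalone policy resource commonly looks like; `aws_example_thing_policy`, its CRUD functions, and the argument names are hypothetical placeholders, not an existing provider resource:

```go
func resourceAwsExampleThingPolicy() *schema.Resource {
  return &schema.Resource{
    Create: resourceAwsExampleThingPolicyPut,
    Read:   resourceAwsExampleThingPolicyRead,
    Update: resourceAwsExampleThingPolicyPut,
    Delete: resourceAwsExampleThingPolicyDelete,

    Importer: &schema.ResourceImporter{
      State: schema.ImportStatePassthrough,
    },

    Schema: map[string]*schema.Schema{
      // Reference to the resource the policy attaches to (by name or ARN), so
      // the policy can be managed independently of that resource.
      "thing_name": {
        Type:     schema.TypeString,
        Required: true,
        ForceNew: true,
      },
      // The policy document as a JSON string; resources like this usually also
      // attach JSON validation and policy-equivalence diff suppression helpers.
      "policy": {
        Type:     schema.TypeString,
        Required: true,
      },
    },
  }
}
```

Because the ARN cross-references live only in the `policy` arguments of these standalone resources, two resources can reference each other's ARNs without creating a cycle between the resources themselves.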
## Adding Resource Tagging Support

AWS provides key-value metadata across many services and resources, which can be used for a variety of use cases including billing, ownership, and more. See the AWS Tagging Strategy page for more information about tagging at a high level.
Implementing tagging support for Terraform AWS Provider resources requires the following, each with its own section below:
- Generated Service Tagging Code: In the internal code generators (e.g. `aws/internal/keyvaluetags`), implementation and customization of how a service handles tagging, which is standardized for the resources.
- Resource Tagging Code Implementation: In the resource code (e.g. `aws/resource_aws_service_thing.go`), implementation of a `tags` schema attribute, along with handling in the `Create`, `Read`, and `Update` functions.
- Resource Tagging Acceptance Testing Implementation: In the resource acceptance testing (e.g. `aws/resource_aws_service_thing_test.go`), implementation of new acceptance test functions and configurations to exercise new tagging logic.
- Resource Tagging Documentation Implementation: In the resource documentation (e.g. `website/docs/r/service_thing.html.markdown`), addition of the `tags` argument.
See also a full example pull request for implementing EKS tagging.
### Generated Service Tagging Code

This step is only necessary for the first implementation and may have been previously completed. If so, move on to the next section.
More details about this code generation, including fixes for potential error messages in this process, can be found in the keyvaluetags documentation.
- Open the AWS Go SDK documentation for the service, e.g. for `service/eks`. Note: there can be a delay between the AWS announcement and the updated AWS Go SDK documentation.
- Determine the "type" of tagging implementation. Some services will use a simple map style (`map[string]*string` in Go) while others will have a separate structure shape (`[]service.Tag` struct with `Key` and `Value` fields).
  - If the type is a map, add the AWS Go SDK service name (e.g. `eks`) to `mapServiceNames` in `aws/internal/keyvaluetags/generators/servicetags/main.go`
  - Otherwise, if the type is a struct, add the AWS Go SDK service name (e.g. `eks`) to `sliceServiceNames` in `aws/internal/keyvaluetags/generators/servicetags/main.go`. If the struct name is not exactly `Tag`, it can be customized via the `ServiceTagType` function. If the struct key field is not exactly `Key`, it can be customized via the `ServiceTagTypeKeyField` function. If the struct value field is not exactly `Value`, it can be customized via the `ServiceTagTypeValueField` function.
- Determine if the service API includes functionality for listing tags (usually a `ListTags` or `ListTagsForResource` API call) or updating tags (usually `TagResource` and `UntagResource` API calls). If so, add the AWS Go SDK service client information to `ServiceClientType` (along with the new required import) in `aws/internal/keyvaluetags/service_generation_customizations.go`, e.g. for EKS: `case "eks": funcType = reflect.TypeOf(eks.New)`
  - If the service API includes functionality for listing tags, add the AWS Go SDK service name (e.g. `eks`) to `serviceNames` in `aws/internal/keyvaluetags/generators/listtags/main.go`.
  - If the service API includes functionality for updating tags, add the AWS Go SDK service name (e.g. `eks`) to `serviceNames` in `aws/internal/keyvaluetags/generators/updatetags/main.go`.
- Run `make gen` (`go generate ./...`) and ensure there are no errors via `make test` (`go test ./...`)
### Resource Tagging Code Implementation

- In the resource Go file (e.g. `aws/resource_aws_eks_cluster.go`), add the following Go import: `"github.com/terraform-providers/terraform-provider-aws/aws/internal/keyvaluetags"`
- In the resource schema, add `"tags": tagsSchema(),`
- If the API supports tagging on creation (the `Input` struct accepts a `Tags` field), in the resource `Create` function, implement the logic to convert the configuration tags into the service tags, e.g. with EKS Clusters:

```go
input := &eks.CreateClusterInput{
  /* ... other configuration ... */
  Tags: keyvaluetags.New(d.Get("tags").(map[string]interface{})).IgnoreAws().EksTags(),
}
```

If the service API does not allow passing an empty list, the logic can be adjusted similar to:

```go
input := &eks.CreateClusterInput{
  /* ... other configuration ... */
}

if v := d.Get("tags").(map[string]interface{}); len(v) > 0 {
  input.Tags = keyvaluetags.New(v).IgnoreAws().EksTags()
}
```
- Otherwise, if the API does not support tagging on creation (the `Input` struct does not accept a `Tags` field), in the resource `Create` function, implement the logic to convert the configuration tags into the service API call to tag a resource, e.g. with Elasticsearch Domain:

```go
if v := d.Get("tags").(map[string]interface{}); len(v) > 0 {
  if err := keyvaluetags.ElasticsearchserviceUpdateTags(conn, d.Id(), nil, v); err != nil {
    return fmt.Errorf("error adding Elasticsearch Cluster (%s) tags: %s", d.Id(), err)
  }
}
```
- Some EC2 resources (for example `aws_ec2_fleet`) have a `TagSpecifications` field in the `Input` struct instead of a `Tags` field. In these cases the `ec2TagSpecificationsFromMap()` helper function should be used, e.g.:

```go
input := &ec2.CreateFleetInput{
  /* ... other configuration ... */
  TagSpecifications: ec2TagSpecificationsFromMap(d.Get("tags").(map[string]interface{}), ec2.ResourceTypeFleet),
}
```
- In the resource `Read` function, implement the logic to convert the service tags to save them into the Terraform state for drift detection, e.g. with EKS Clusters (which have the tags available in the DescribeCluster API call):

```go
// Typically declared near conn := /* ... */
ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig

if err := d.Set("tags", keyvaluetags.EksKeyValueTags(cluster.Tags).IgnoreAws().IgnoreConfig(ignoreTagsConfig).Map()); err != nil {
  return fmt.Errorf("error setting tags: %s", err)
}
```

If the service API does not return the tags directly from reading the resource and requires a separate API call, it's possible to use the `keyvaluetags` functionality like the following, e.g. with Athena Workgroups:

```go
// Typically declared near conn := /* ... */
ignoreTagsConfig := meta.(*AWSClient).IgnoreTagsConfig

tags, err := keyvaluetags.AthenaListTags(conn, arn.String())

if err != nil {
  return fmt.Errorf("error listing tags for resource (%s): %s", arn, err)
}

if err := d.Set("tags", tags.IgnoreAws().IgnoreConfig(ignoreTagsConfig).Map()); err != nil {
  return fmt.Errorf("error setting tags: %s", err)
}
```
- In the resource `Update` function (this may be the first functionality requiring the creation of the `Update` function), implement the logic to handle tagging updates, e.g. with EKS Clusters:

```go
if d.HasChange("tags") {
  o, n := d.GetChange("tags")

  if err := keyvaluetags.EksUpdateTags(conn, d.Get("arn").(string), o, n); err != nil {
    return fmt.Errorf("error updating tags: %s", err)
  }
}
```
### Resource Tagging Acceptance Testing Implementation

- In the resource testing (e.g. `aws/resource_aws_eks_cluster_test.go`), verify that existing resources without tagging are unaffected and do not have tags saved into their Terraform state. This should be done in the `_basic` acceptance test by adding a check similar to `resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),`
- In the resource testing, implement a new test named `_Tags` with associated configurations that verify creating the resource with tags and updating tags, e.g. with EKS Clusters:

```go
func TestAccAWSEksCluster_Tags(t *testing.T) {
  var cluster1, cluster2, cluster3 eks.Cluster
  rName := acctest.RandomWithPrefix("tf-acc-test")
  resourceName := "aws_eks_cluster.test"

  resource.ParallelTest(t, resource.TestCase{
    PreCheck:     func() { testAccPreCheck(t); testAccPreCheckAWSEks(t) },
    Providers:    testAccProviders,
    CheckDestroy: testAccCheckAWSEksClusterDestroy,
    Steps: []resource.TestStep{
      {
        Config: testAccAWSEksClusterConfigTags1(rName, "key1", "value1"),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckAWSEksClusterExists(resourceName, &cluster1),
          resource.TestCheckResourceAttr(resourceName, "tags.%", "1"),
          resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"),
        ),
      },
      {
        ResourceName:      resourceName,
        ImportState:       true,
        ImportStateVerify: true,
      },
      {
        Config: testAccAWSEksClusterConfigTags2(rName, "key1", "value1updated", "key2", "value2"),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckAWSEksClusterExists(resourceName, &cluster2),
          resource.TestCheckResourceAttr(resourceName, "tags.%", "2"),
          resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"),
          resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"),
        ),
      },
      {
        Config: testAccAWSEksClusterConfigTags1(rName, "key2", "value2"),
        Check: resource.ComposeTestCheckFunc(
          testAccCheckAWSEksClusterExists(resourceName, &cluster3),
          resource.TestCheckResourceAttr(resourceName, "tags.%", "1"),
          resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"),
        ),
      },
    },
  })
}

func testAccAWSEksClusterConfigTags1(rName, tagKey1, tagValue1 string) string {
  return testAccAWSEksClusterConfig_Base(rName) + fmt.Sprintf(`
resource "aws_eks_cluster" "test" {
  name     = %[1]q
  role_arn = aws_iam_role.test.arn

  tags = {
    %[2]q = %[3]q
  }

  vpc_config {
    subnet_ids = aws_subnet.test[*].id
  }

  depends_on = [aws_iam_role_policy_attachment.test-AmazonEKSClusterPolicy]
}
`, rName, tagKey1, tagValue1)
}

func testAccAWSEksClusterConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string {
  return testAccAWSEksClusterConfig_Base(rName) + fmt.Sprintf(`
resource "aws_eks_cluster" "test" {
  name     = %[1]q
  role_arn = aws_iam_role.test.arn

  tags = {
    %[2]q = %[3]q
    %[4]q = %[5]q
  }

  vpc_config {
    subnet_ids = aws_subnet.test[*].id
  }

  depends_on = [aws_iam_role_policy_attachment.test-AmazonEKSClusterPolicy]
}
`, rName, tagKey1, tagValue1, tagKey2, tagValue2)
}
```
- Verify all acceptance testing passes for the resource (e.g. `make testacc TESTARGS='-run=TestAccAWSEksCluster_'`)
### Resource Tagging Documentation Implementation

- In the resource documentation (e.g. `website/docs/r/eks_cluster.html.markdown`), add the following to the arguments reference:

```markdown
* `tags` - (Optional) Key-value mapping of resource tags
```
## New Resource

Implementing a new resource is a good way to learn more about how Terraform interacts with upstream APIs. There are plenty of examples to draw from in the existing resources, but you still get to implement something completely new.
In addition to the below checklist, please see the Common Review Items sections for more specific coding and testing guidelines.
- Minimal LOC: It's difficult for both the reviewer and author to go through long feedback cycles on a big PR with many resources. We ask you to only submit 1 resource at a time.
- Acceptance tests: New resources should include acceptance tests covering their behavior. See Writing Acceptance Tests below for a detailed guide on how to approach these.
- Resource Naming: Resources should be named `aws_<service>_<name>`, using underscores (`_`) as the separator. Resources are namespaced with the service name to allow easier searching of related resources, to align the resource naming with the service for Customizing Endpoints, and to prevent future conflicts with new AWS services/resources. For reference:
  - `service` is the AWS short service name that matches the entry in `endpointServiceNames` (created via the New Service section)
  - `name` represents the conceptual infrastructure represented by the create, read, update, and delete methods of the service API. It should be a singular noun. For example, in an API that has methods such as `CreateThing`, `DeleteThing`, `DescribeThing`, and `ModifyThing`, the name of the resource would end in `_thing`.
- Arguments and Attributes: The HCL for arguments and attributes should mimic the types and structs presented by the AWS API. API arguments should be converted from `CamelCase` to `camel_case` (see the sketch following this checklist).
- Documentation: Each data source and resource gets a page in the Terraform documentation, which lives at `website/docs/d/<service>_<name>.html.markdown` and `website/docs/r/<service>_<name>.html.markdown` respectively.
- Well-formed Code: Do your best to follow existing conventions you see in the codebase, and ensure your code is formatted with `go fmt`. The PR reviewers can help out on this front, and may provide comments with suggestions on how to improve the code.
- Dependency updates: Create a separate PR if you are updating dependencies. This is to avoid conflicts as version updates tend to be fast-moving targets. We will plan to merge the PR with this change first.
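As a sketch of the naming and argument conventions above (everything here is a hypothetical placeholder: the `aws_example_widget` resource, its CRUD functions, and the API field names), an API field such as `RetentionPeriodInDays` surfaces in HCL as `retention_period_in_days`:

```go
func resourceAwsExampleWidget() *schema.Resource {
  return &schema.Resource{
    Create: resourceAwsExampleWidgetCreate,
    Read:   resourceAwsExampleWidgetRead,
    Update: resourceAwsExampleWidgetUpdate,
    Delete: resourceAwsExampleWidgetDelete,

    Schema: map[string]*schema.Schema{
      // API: Name -> HCL: name
      "name": {
        Type:     schema.TypeString,
        Required: true,
        ForceNew: true,
      },
      // API: RetentionPeriodInDays -> HCL: retention_period_in_days
      "retention_period_in_days": {
        Type:     schema.TypeInt,
        Optional: true,
      },
    },
  }
}
```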
## New Service

Implementing a new AWS service gives Terraform the ability to manage resources in a whole new API. It's a larger undertaking, but brings major new functionality into Terraform.
- Service Client: Before new resources are submitted, we request a separate pull request containing just the new AWS Go SDK service client. Doing so will pull the AWS Go SDK service code into the project at the current version. Since the AWS Go SDK is updated frequently, these pull requests can easily have merge conflicts or be out of date. The maintainers prioritize reviewing and merging these quickly to prevent those situations.

  To add the AWS Go SDK service client:

  - In `aws/provider.go`: Add a new service entry to `endpointServiceNames`. This service name should match the AWS Go SDK or AWS CLI service name.
  - In `aws/config.go`: Add a new import for the AWS Go SDK code, e.g. `github.com/aws/aws-sdk-go/service/quicksight`
  - In `aws/config.go`: Add a new `{SERVICE}conn` field to the `AWSClient` struct for the service client. The service name should match the name in `endpointServiceNames`, e.g. `quicksightconn *quicksight.QuickSight`
  - In `aws/config.go`: Create the new service client in the `{SERVICE}conn` field in the `AWSClient` instantiation within `Client()`, e.g. `quicksightconn: quicksight.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints["quicksight"])})),`
  - In `website/allowed-subcategories.txt`: Add a name acceptable for the documentation navigation.
  - In `website/docs/guides/custom-service-endpoints.html.md`: Add the service name in the list of customizable endpoints.
  - In `infrastructure/repository/labels-service.tf`: Add the new service to create a repository label.
  - In `.hashibot.hcl`: Add the new service to automated issue and pull request labeling, e.g. with the `quicksight` service:

```hcl
behavior "regexp_issue_labeler_v2" "service_labels" {
  # ... other configuration ...

  label_map = {
    # ... other services ...
    "service/quicksight" = [
      "aws_quicksight_",
    ],
    # ... other services ...
  }
}

behavior "pull_request_path_labeler" "service_labels" {
  # ... other configuration ...

  label_map = {
    # ... other services ...
    "service/quicksight" = [
      "aws/internal/service/quicksight/**/*",
      "**/*_quicksight_*",
      "**/quicksight_*",
    ],
    # ... other services ...
  }
}
```

  - Run the following then submit the pull request:

```sh
go test ./aws
go mod tidy
```
- Initial Resource: Some services can be big and it can be difficult for both reviewer and author to go through long feedback cycles on a big PR with many resources. Often feedback items in one resource will also need to be applied in other resources. We prefer you to submit the necessary minimum in a single PR, ideally just the first resource of the service.

  The initial resource and changes afterwards should follow the other sections of this guide as appropriate.
## New Region

While region validation is automatically added with SDK updates, new regions are generally limited in which services they support. Below are some manually sourced values from documentation.
- Check Elastic Load Balancing endpoints and quotas and add the Route53 Hosted Zone ID if available to `aws/data_source_aws_elb_hosted_zone_id.go`
- Check Amazon Simple Storage Service endpoints and quotas and add the Route53 Hosted Zone ID if available to `aws/hosted_zones.go`
- Check CloudTrail Supported Regions docs and add the AWS Account ID if available to `aws/data_source_aws_cloudtrail_service_account.go`
- Check Elastic Load Balancing Access Logs docs and add the Elastic Load Balancing Account ID if available to `aws/data_source_aws_elb_service_account.go`
- Check Redshift Database Audit Logging docs and add the AWS Account ID if available to `aws/data_source_aws_redshift_service_account.go`
- Check AWS Elastic Beanstalk endpoints and quotas and add the Route53 Hosted Zone ID if available to `aws/data_source_aws_elastic_beanstalk_hosted_zone.go`
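For orientation, these data sources typically keep a simple per-region lookup table, so a new region usually amounts to a one-line addition. A hypothetical sketch, assuming such a map exists; the variable name, region code, and ID below are placeholders, and the real value must be copied from the AWS documentation linked above:

```go
// Hypothetical per-region lookup table; the actual variable names in the
// provider files listed above may differ.
var hostedZoneIDPerRegion = map[string]string{
  // ... existing regions ...
  "eu-example-1": "Z0000000000000000000", // placeholder ID for the new region
}
```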