* provider/aws: Support kms_key_id for `aws_rds_cluster`
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRDSCluster_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSRDSCluster_
-timeout 120m
=== RUN TestAccAWSRDSCluster_basic
--- PASS: TestAccAWSRDSCluster_basic (127.57s)
=== RUN TestAccAWSRDSCluster_kmsKey
--- PASS: TestAccAWSRDSCluster_kmsKey (323.72s)
=== RUN TestAccAWSRDSCluster_encrypted
--- PASS: TestAccAWSRDSCluster_encrypted (173.25s)
=== RUN TestAccAWSRDSCluster_backupsUpdate
--- PASS: TestAccAWSRDSCluster_backupsUpdate (264.07s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 888.638s
```
* provider/aws: Add KMS Key ID to `aws_rds_cluster_instance`
* docs/digitalocean: Adding an import section to the bottom of the DO
importable resources
* docs/azurerm: Adding the Import sections for the AzureRM Importable resources
* docs/aws: Adding the import sections to the AWS provider pages
We cannot use the "id" field to represent policy ID, because it is used
internally by Terraform. Also change the "id" field within a statement
to "sid" for consistency with the generated JSON.
When adding multiple notifications from one S3 bucket to one SQS queue, it wasn't immediately intuitive how to do this.
At first I created two `aws_s3_bucket_notification` configs and it seemed to work fine, however the config for one event
will overwrite the other. In order to have multiple events, you can define the `queue` key twice, or use an array if you're
working with the JSON syntax. I tried to make this clearer in the documentation.
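A minimal sketch of that pattern, assuming a bucket resource named `bucket` and a queue named `queue` (both hypothetical names): repeating the `queue` block registers several events against the same queue within a single `aws_s3_bucket_notification` resource.
```
resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = "${aws_s3_bucket.bucket.id}"

  queue {
    queue_arn = "${aws_sqs_queue.queue.arn}"
    events    = ["s3:ObjectCreated:*"]
  }

  queue {
    queue_arn = "${aws_sqs_queue.queue.arn}"
    events    = ["s3:ObjectRemoved:*"]
  }
}
```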
This fixes #7157. It doesn't change the way aws_ami works.
```
make testacc TEST=./builtin/providers/aws
TESTARGS='-run=TestAccAWSAMICopy'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAMICopy
-timeout 120m
=== RUN TestAccAWSAMICopy
--- PASS: TestAccAWSAMICopy (479.75s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 479.769s
```
Allows load balancer policies and their assignment to backend servers or listeners to be configured independently.
This gives the flexibility to configure additional policies on AWS Elastic Load Balancers beyond the "convenience" wrappers already provided for cookie stickiness.
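A rough sketch of how the standalone resources can be combined, assuming the resource names `aws_load_balancer_policy` and `aws_load_balancer_backend_server_policy` introduced here and an existing ELB called `example`; attribute names may differ slightly from this sketch.
```
resource "aws_load_balancer_policy" "proxy_protocol" {
  load_balancer_name = "${aws_elb.example.name}"
  policy_name        = "EnableProxyProtocol"
  policy_type_name   = "ProxyProtocolPolicyType"

  policy_attribute {
    name  = "ProxyProtocol"
    value = "true"
  }
}

resource "aws_load_balancer_backend_server_policy" "proxy_protocol_backend" {
  load_balancer_name = "${aws_elb.example.name}"
  instance_port      = 80
  policy_names       = ["${aws_load_balancer_policy.proxy_protocol.policy_name}"]
}
```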
* small doc update
* provider/atlas: Add docs for Artifact Data Source
* provider/atlas: Remove a test method that isn't used
* provider/atlas: Add Data Source for Atlas Artifact
* provider/atlas: Show deprecation error on atlas_artifact resource
* Added support for redshift destination to firehose delivery streams
* Small documentation fix
* go fmt after rebase
* small fixes after rebase
* provider/aws: Firehose test cleanups
* provider/aws: Update docs
* Convert Redshift and S3 blocks to TypeList
* provider/aws: Add migration for S3 Configuration in Kinesis firehose
* providers/aws: Safety first when building Redshift config options
* restore commented out log statements in the migration
* provider/aws: use MaxItems in schema
* Add SES resource
* Detect ReceiptRule deletion outside of Terraform
* Handle order of rule actions
* Add position field to docs
* Fix hashes, add log messages, and other small cleanup
* Fix rebase issue
* Fix formatting
This data source allows Terraform to work with externally modified state, e.g.
when you're using an ECS service which is continuously updated by your CI via the
AWS CLI.
Right now you'd have to wrap Terraform in a shell script which looks up the
current image digest, so running Terraform won't change the updated service.
Using the aws_ecs_container_definition data source you can now do this with
Terraform alone, removing the wrapper entirely.
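A short sketch of the lookup, assuming an existing task definition resource named `app` and a container called "app" (both hypothetical); the data source exposes the container's current image so it can be fed back into the service definition.
```
data "aws_ecs_container_definition" "app" {
  task_definition = "${aws_ecs_task_definition.app.id}"
  container_name  = "app"
}

output "current_image" {
  value = "${data.aws_ecs_container_definition.app.image}"
}
```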
Since this resource produces a list it feels more intuitive to give its
attribute a plural name, and since the noun "instance" already means
something specific in the AWS provider that doesn't apply here we use
"names" to indicate that these are availability zone names.
Also includes updating the docs to not show a dynamic count example for
now, since we don't support that yet.
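For example, a minimal (non-count) usage might look like this, with `element()` picking the first of the returned zone names; the VPC reference and CIDR value are placeholders.
```
data "aws_availability_zones" "available" {}

resource "aws_subnet" "primary" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.1.0/24"
  availability_zone = "${element(data.aws_availability_zones.available.names, 0)}"
}
```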
Fixes #7035
A known issue in Terraform means that d.GetOk() on a bool which is false
will mean it doesn't get evaluated. Therefore, when people set
publicly_accessible to false, it will never get evaluated on the Create.
We are going to make it default to false now.
The documentation wording implies that in all cases you have to manually accept peering requests. This change is intended to clarify where this is required. The documentation also separates "basic usage" from "basic usage with tags", but the expanded usage didn't actually provide much additional useful information. Expanded it a bit to show the use of auto_accept, since both VPCs are created by the same configuration, and to show setting the Name tag for proper display in the console.
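A sketch of the single-account case described above, where both VPCs live in the same configuration so the request can be auto-accepted; the account ID is a placeholder.
```
resource "aws_vpc" "requester" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc" "accepter" {
  cidr_block = "10.1.0.0/16"
}

resource "aws_vpc_peering_connection" "example" {
  vpc_id        = "${aws_vpc.requester.id}"
  peer_vpc_id   = "${aws_vpc.accepter.id}"
  peer_owner_id = "123456789012" # account that owns the accepter VPC (same account here)
  auto_accept   = true

  tags {
    Name = "requester-to-accepter"
  }
}
```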
The example is referencing a non-existent variable, `allocation_id`, within the `aws_eip` resource. I believe this should actually be `aws_eip.example.id` instead of `aws_eip.example.allocation_id`.
Add the iam_arn attribute to aws_cloudfront_origin_access_identity,
which computes the IAM ARN for a certain CloudFront origin access
identity.
This is necessary because S3 modifies the bucket policy if CanonicalUser
is sent, causing spurious diffs with aws_s3_bucket resources.
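A sketch of how the new attribute can be used in a bucket policy so the principal is the access identity's IAM ARN rather than a canonical user; the bucket name is a placeholder.
```
resource "aws_cloudfront_origin_access_identity" "origin" {
  comment = "access identity for example-origin-bucket"
}

resource "aws_s3_bucket" "origin" {
  bucket = "example-origin-bucket"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "${aws_cloudfront_origin_access_identity.origin.iam_arn}"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-origin-bucket/*"
    }
  ]
}
POLICY
}
```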
This brings over the work done by @apparentlymart and @radeksimko in
PR #3124, and converts it into a data source for the AWS provider:
This commit adds a helper to construct IAM policy documents using
familiar Terraform concepts. It makes Terraform-style interpolations
easier and resolves the syntax conflict between Terraform interpolations
and IAM policy variables by changing the latter to use &{...} for its
interpolations.
Its use is completely optional and users are free to go on using literal
heredocs, file interpolations or whatever else; this just adds another
option that fits more naturally into a Terraform config.
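For illustration, a small document that mixes a Terraform interpolation with an IAM policy variable written in the &{...} form; the bucket variable is a placeholder.
```
variable "shared_bucket" {}

data "aws_iam_policy_document" "user_home" {
  statement {
    sid     = "AllowUserHomePrefix"
    actions = ["s3:GetObject", "s3:PutObject"]

    resources = [
      "arn:aws:s3:::${var.shared_bucket}/home/&{aws:username}/*",
    ]
  }
}

resource "aws_iam_policy" "user_home" {
  name   = "user-home-access"
  policy = "${data.aws_iam_policy_document.user_home.json}"
}
```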
This data source allows one to look up the most recent AMI for a specific
set of parameters, much like aws ec2 describe-images in the AWS CLI.
Basically a refresh of hashicorp/terraform#4396, in data source form.
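A sketch of the shape this takes, mirroring `aws ec2 describe-images`; the name filter and owner are placeholders for whatever image you actually want to track.
```
data "aws_ami" "web" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["web-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.web.id}"
  instance_type = "t2.micro"
}
```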
* Add per user, role and group policy attachment
* Add docs for new IAM policy attachment resources.
* Make policy attachment resources manage only 1 entity<->policy attachment
* provider/aws: Tidy up IAM Group/User/Role attachments
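A sketch of the per-entity attachment resources this adds, assuming an existing managed policy plus a role, user and group defined elsewhere in the configuration (the names here are hypothetical).
```
resource "aws_iam_role_policy_attachment" "readonly_for_app_role" {
  role       = "${aws_iam_role.app.name}"
  policy_arn = "${aws_iam_policy.readonly.arn}"
}

resource "aws_iam_user_policy_attachment" "readonly_for_deploy_user" {
  user       = "${aws_iam_user.deploy.name}"
  policy_arn = "${aws_iam_policy.readonly.arn}"
}

resource "aws_iam_group_policy_attachment" "readonly_for_ops_group" {
  group      = "${aws_iam_group.ops.name}"
  policy_arn = "${aws_iam_policy.readonly.arn}"
}
```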
This commit adds a data source with a single list attribute, `instance`,
which gets populated with the availability zones to which an
account has access.
We had a line on the Update func that said:
```
Hash key can only be specified at creation, you cannot modify it.
```
The resource has now been changed to ForceNew on the hashkey
```
aws_dynamodb_table.demo-user-table: Refreshing state... (ID: Users)
aws_dynamodb_table.demo-user-table: Destroying...
aws_dynamodb_table.demo-user-table: Destruction complete
aws_dynamodb_table.demo-user-table: Creating...
aws_dynamodb_table.demo-user-table: Creation complete
```
When stage_name is not passed to the aws_api_gateway_deployment resource,
a terraform apply will fail, because stage_name is required, not
optional.
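For reference, a minimal working form with the required stage_name set; the REST API reference is a placeholder.
```
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  stage_name  = "prod"
}
```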
As requested in #4822, add support for a KMS Key ID (ARN) for Db
Instance
```
make testacc TEST=./builtin/providers/aws
TESTARGS='-run=TestAccAWSDBInstance_kmsKey' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSDBInstance_kmsKey -timeout 120m
=== RUN TestAccAWSDBInstance_basic
--- PASS: TestAccAWSDBInstance_basic (587.37s)
=== RUN TestAccAWSDBInstance_kmsKey
--- PASS: TestAccAWSDBInstance_kmsKey (625.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 1212.684s
```
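A sketch of the new argument on the instance resource: kms_key_id takes the key's ARN and only makes sense together with storage_encrypted. The identifier, credentials and key resource name are placeholders.
```
resource "aws_kms_key" "rds" {
  description = "Key for encrypting RDS storage"
}

resource "aws_db_instance" "encrypted" {
  identifier        = "example-encrypted"
  engine            = "mysql"
  instance_class    = "db.m3.medium"
  allocated_storage = 10
  username          = "admin"
  password          = "change-me"

  storage_encrypted = true
  kms_key_id        = "${aws_kms_key.rds.arn}"
}
```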
* New top level AWS resource aws_eip_association
* Add documentation for aws_eip_association
* Add tests for aws_eip_association
* provider/aws: Change `aws_elastic_ip_association` to have computed
parameters
The AWS API was sending more parameters than we had set. Therefore,
Terraform was showing constant changes when plans were being formed.
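A sketch of the new resource, associating an EIP with an instance defined elsewhere (the instance name is a placeholder); for a VPC EIP the allocation ID is the EIP resource's id.
```
resource "aws_eip" "example" {
  vpc = true
}

resource "aws_eip_association" "example" {
  instance_id   = "${aws_instance.web.id}"
  allocation_id = "${aws_eip.example.id}"
}
```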
Added the hosted_zone_id attribute, which aliases to the Route 53
zone ID that can be used to route Alias Resource Record Sets to.
This fixes hashicorp/terraform#6489.
Change the AWS DB Instance to now include the DB Option Group param. Adds a test to prove that it works
Add acceptance tests for the AWS DB Option Group work. This ensures that Options can be added and updated
Documentation for the AWS DB Option resource
automated_snapshot_retention_period
The default value for `automated_snapshot_retention_period` is 1.
Therefore, it can be included in the `CreateClusterInput` without
needing to check that it is set.
This was actually stopping people from setting the value to 0 (disabling
the snapshots), as there is an issue with `d.GetOk()` evaluating 0 for an int.
* Fix headers and header anchor tags
The markdown parser already generates unique ids for header elements by
downcasing all of the words and replacing spaces with hyphens. Knowing
this, we can take the code blocks out of the headers and use the
generated ids as the link targets.
Aside: I tried to see if there was a standard way of documenting
subresources, but couldn't really find one. Both the aws_elb and
aws_instance resources seem to just say "documented below" without a
link. Then the relevant section is just a new paragraph with a list of
arguments.
* Reformat long lines
I find 80 character lines and whitespace make the lists much easier to
read :)
* Remove extraneous <a> tags for header anchor tags
Now that middleman generates anchor tags for headers automagically, we
don't need to have blank <a> tags for anchor links to use.
Just saying `id` is ambiguous; it could be interpreted as the resource ID, which will fail with the following error: `CertificateNotFound: Server Certificate not found for the key: <id>`. The AWS documentation states that the ssl certificate id parameter must be the ARN.
Previously we linked to the whole request body definition for valid values of `runtime`.
Now we link directly to the docs for `Runtime`, which will hopefully make it easier to find the valid values.
This fixes #4570.
* provider/aws: Fix hashing on CloudFront certificate parameters
Adding necessary type assertion to values on the viewer_certificate hash
function to ensure that certain fields are indeed not zero string
values, versus simply zero interface{} values (i.e. nil, as is the case for a
map[string]interface{}).
* provider/aws: CloudFront complex structure error handling
Handle errors better on calls to d.Set() in the
aws_cloudfront_distribution, namely in flattenDistributionConfig(). Also
caught a bug in the setting of the origin attribute, which was incorrectly
attempting to set origins.
* provider/aws: Pass pointers to set CloudFront primitives
Change a few d.Set() for primitives in aws_cloudfront_distribution and
aws_cloudfront_origin_access_identity to use the pointer versus a
dereference.
* docs: Fix CloudFront examples formatting
Ran each example thru terraform fmt to fix indentation.
* provider/aws: Remove delete retention on CloudFront tests
To play better with Travis and not bloat the test account with disabled
distributions.
Disable-only functionality has been retained - one can enable it with
the TF_TEST_CLOUDFRONT_RETAIN environment variable.
* provider/aws: CloudFront delete waiter error handling
The call to resourceAwsCloudFrontDistributionWaitUntilDeployed() on
deletion of CloudFront distributions was not trapping error messages,
causing issues with waiter failure.
* provider/aws: Default Network ACL resource
Provides a resource to manage the default AWS Network ACL. VPC Only.
* Remove subnet_id update, mark as computed value. Remove extra tag update
* refactor default rule number to be a constant
* refactor revokeRulesForType to be revokeAllNetworkACLEntries
Refactor method to delete all network ACL entries, regardless of type. The
previous implementation was under the assumption that we may only eliminate some
rule types and possibly not others, so the split was necessary.
We're now removing them all, so the logic isn't necessary
Several doc and test cleanups are here as well
* smite subnet_id, improve docs
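For context, a sketch of the new resource: it adopts the VPC's automatically created default ACL via default_network_acl_id and manages its rules in place (the allow-all rules below are only illustrative).
```
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_default_network_acl" "default" {
  default_network_acl_id = "${aws_vpc.main.default_network_acl_id}"

  ingress {
    rule_no    = 100
    protocol   = -1
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }

  egress {
    rule_no    = 100
    protocol   = -1
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}
```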
* CloudFront implementation v3
* Update tests
* Refactor - new resource: aws_cloudfront_distribution
* Includes a complete re-write of the old aws_cloudfront_web_distribution
resource to bring it to feature parity with API and CloudFormation.
* Also includes the aws_cloudfront_origin_access_identity resource to generate
origin access identities for use with S3.
* provider/aws: CodeDeploy Deployment Group Triggers
- Create a Trigger to Send Notifications for AWS CodeDeploy Events
- Update aws_codedeploy_deployment_group docs
* Refactor validateTriggerEvent function and test
- also rename TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration test
* Enhance existing Deployment Group integration tests
- by using built in resource attribute helpers
- these can get quite verbose and repetitive, so passing the resource to a function might be better
- can't use these (yet) to assert trigger configuration state
* Unit tests for conversions between aws TriggerConfig and terraform resource schema
- buildTriggerConfigs
- triggerConfigsToMap
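A sketch of the trigger block described above, assuming an existing CodeDeploy app, service role and SNS topic (their names are placeholders).
```
resource "aws_codedeploy_deployment_group" "example" {
  app_name              = "${aws_codedeploy_app.example.name}"
  deployment_group_name = "example-group"
  service_role_arn      = "${aws_iam_role.codedeploy.arn}"

  trigger_configuration {
    trigger_name       = "deployment-failures"
    trigger_events     = ["DeploymentFailure"]
    trigger_target_arn = "${aws_sns_topic.deployments.arn}"
  }
}
```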
When creating CloudWatch metric alarms, I need to get the HealthCheckId dimension, so a reference to it would be useful.
```
dimensions {
  "HealthCheckId" = "${aws_route53_health_check.foo.id}"
}
```
When calling AssociateAddress, the PrivateIpAddress parameter must be
used to select which private IP the EIP should associate with, otherwise
the EIP always associates with the _first_ private IP.
Without this parameter, multiple EIPs couldn't be assigned to a single
ENI. Includes covering test and docs update.
Fixes #2997
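A sketch of assigning two EIPs to one interface by pinning each to a specific private IP, which this change makes possible; the subnet reference and addresses are placeholders.
```
resource "aws_network_interface" "multi_ip" {
  subnet_id   = "${aws_subnet.main.id}"
  private_ips = ["10.0.0.10", "10.0.0.11"]
}

resource "aws_eip" "first" {
  vpc                       = true
  network_interface         = "${aws_network_interface.multi_ip.id}"
  associate_with_private_ip = "10.0.0.10"
}

resource "aws_eip" "second" {
  vpc                       = true
  network_interface         = "${aws_network_interface.multi_ip.id}"
  associate_with_private_ip = "10.0.0.11"
}
```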
* update docs on required parameter for api_gateway_integration
This parameter is required for Lambda integration. Otherwise you get:
`Error creating API Gateway Integration: BadRequestException: Enumeration value for HttpMethod must be non-empty`
* documentation: Including the AWS type on the api_gateway_integration docs
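For reference, a sketch of a Lambda integration with both the AWS type and a non-empty integration_http_method set; the REST API, resource, method and function references are placeholders.
```
resource "aws_api_gateway_integration" "lambda" {
  rest_api_id             = "${aws_api_gateway_rest_api.example.id}"
  resource_id             = "${aws_api_gateway_resource.example.id}"
  http_method             = "${aws_api_gateway_method.example.http_method}"
  type                    = "AWS"
  integration_http_method = "POST"
  uri                     = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/${aws_lambda_function.example.arn}/invocations"
}
```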
Documentation for `aws_cloudwatch_event_target` to warn that in order to be
able to have your AWS Lambda function or SNS topic invoked by a CloudWatch
Events rule, you must set up the right permissions
using `aws_lambda_permission` or `aws_sns_topic.policy`
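A sketch of the Lambda side of that permission setup, assuming an existing function named `cleanup` (hypothetical); the permission lets the events principal invoke the function targeted by the rule.
```
resource "aws_cloudwatch_event_rule" "nightly" {
  name                = "nightly-cleanup"
  schedule_expression = "cron(0 2 * * ? *)"
}

resource "aws_cloudwatch_event_target" "run_cleanup" {
  rule = "${aws_cloudwatch_event_rule.nightly.name}"
  arn  = "${aws_lambda_function.cleanup.arn}"
}

resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.cleanup.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.nightly.arn}"
}
```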
Needed to truncate the identifier for SQL Server engines to keep it at
max 15 chars per the docs. Not a full UUID going into it, but should be
"unique enough" to not matter in practice.
Modified the basic test to use the generated value. Other tests are
still working w/ explicitly specified identifiers.
This adds support for Elastic Beanstalk Applications, Configuration Templates,
and Environments.
This is a combined work of @catsby, @dharrisio, @Bowbaq, and @jen20
The `description` field is easy to confuse for a nice field to
add an arbitrary comment to - and it's surprising that changes to this
field force a new resource, so we add a big note about it to point users
at tags.
Also marked all the other ForceNew attributes on this resource.
It was a mistake to switch fully to `==` when activating waiting for
capacity on updates in #3947. Users that didn't set `min_elb_capacity ==
desired_capacity` and instead treated it as an actual "minimum" would
see timeouts for every create, since their target numbers would never be
reached exactly.
Here, we fix that regression by restoring the minimum waiting behavior
during creates.
In order to preserve all the stated behavior, I had to split out
different criteria for create and update, criteria which are now
exhaustively unit tested.
The set of fields that affect capacity waiting behavior has become a bit
of a mess. Next major release I'd like to rework all of these into a
more consistently named block of config. For now, just getting the
behavior correct and documented.
(Also removes all the fixed names from the ASG tests as I was hitting
collision issues running them over here.)
Fixes #4792.
The character set of the database cannot be changed after creation,
so we need to be able to set the CharacterSetName on creation.
This is optional and will automagically default to AL32UTF8.
The AWS SDK will give you an error message if you try to
apply this setting to other engines. The patch will only
report the character_set_name attribute if CharacterSetName
is set on the instance.
Signed-off-by: Lars Bahner <lars.bahner@gmail.com>
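A sketch of setting the character set on an Oracle instance at creation time; the engine, instance class and credentials are placeholders, and the AL32UTF8 default applies when the argument is omitted.
```
resource "aws_db_instance" "oracle" {
  identifier         = "example-oracle"
  engine             = "oracle-ee"
  instance_class     = "db.m3.medium"
  allocated_storage  = 20
  username           = "admin"
  password           = "change-me"
  character_set_name = "WE8ISO8859P1"
}
```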
http and https SNS topic subscription endpoints require confirmation to set a valid ARN, otherwise
the ARN would be set to "pending confirmation". If the endpoint auto-confirms, the ARN is set
asynchronously, but if we try to create another subscription with the same parameters then the API returns
"pending subscription" as the ARN and does not create a duplicate subscription. In order to
solve this we should fetch the subscription list for the topic and identify the subscription
with the same parameters, i.e. protocol, topic_arn and endpoint, and extract the subscription ARN.
The following changes were made to support http/https endpoints that auto-confirm:
1. Added 3 extra parameters:
   1. endpoint_auto_confirms -> boolean indicating whether the endpoint auto-confirms
   2. max_fetch_retries -> number of times to fetch the subscription list for the topic to get the subscription ARN
   3. fetch_retry_delay -> delay between fetch subscription list calls, as the confirmation is done asynchronously
   With these parameters, support was added for http and https endpoints that auto-confirm.
2. Updated the website docs appropriately.
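A sketch of the case this enables, assuming an existing topic resource named `alerts` (hypothetical) and an HTTPS endpoint that confirms the subscription on its own; the retry-tuning parameters described above keep their defaults here.
```
resource "aws_sns_topic_subscription" "https_endpoint" {
  topic_arn              = "${aws_sns_topic.alerts.arn}"
  protocol               = "https"
  endpoint               = "https://example.com/sns-handler"
  endpoint_auto_confirms = true
}
```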
This allows specification of the profile for the shared credentials
provider for AWS to be specified in Terraform configuration. This is
useful if defining providers with aliases, or if you don't want to set
environment variables. Example:
```
$ aws configure --profile this_is_dog
... enter keys
$ cat main.tf
provider "aws" {
  profile = "this_is_dog"
  # Optionally also specify the path to the credentials file
  shared_credentials_file = "/tmp/credentials"
}
```
This is equivalent to specifying AWS_PROFILE or
AWS_SHARED_CREDENTIALS_FILE in the environment.
When spinning up from a snapshot or a read replica, these fields are
now optional:
* allocated_storage
* engine
* password
* username
Some validation logic is added to make these fields required when
starting a database from scratch.
The documentation is updated accordingly.
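As a sketch, restoring from a snapshot now only needs the restore-specific arguments; the snapshot identifier below is a placeholder.
```
resource "aws_db_instance" "restored" {
  identifier          = "example-restored"
  instance_class      = "db.m3.medium"
  snapshot_identifier = "example-final-snapshot"

  # allocated_storage, engine, username and password come from the snapshot
  # and no longer have to be declared here.
}
```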
AWS does some funky stuff to handle all the variations in certificates that CAs like to hand out to users. This commit adds a note about this and details how to avoid issues. See #3837 for more information.
Only use the create_before_destroy hook in launch configurations. The autoscaling group must not use the create_before_destroy hook, because it can be updated in place (rather than destroyed and re-created). Using the create_before_destroy hook on the autoscaling group also leads to unwanted cyclic dependencies.
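A sketch of that split, with the lifecycle hook only on the launch configuration and the group picking up the new configuration by name; the AMI ID is a placeholder.
```
resource "aws_launch_configuration" "app" {
  image_id      = "ami-12345678" # placeholder
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  name                 = "app-asg"
  launch_configuration = "${aws_launch_configuration.app.name}"
  availability_zones   = ["us-east-1a"]
  min_size             = 1
  max_size             = 3

  # No create_before_destroy here: the group is updated in place when the
  # launch configuration changes.
}
```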
also removed the notion of tags from the redshift security group and
parameter group documentation until that has been implemented
Redshift Cluster CRUD and acceptance tests
Removing the Acceptance test for the Cluster Updates. You cannot delete
a cluster immediately after performing an operation on it. We would need
to add a lot of retry logic to the system to get this test to work
Adding some schema validation for RedShift cluster
Adding the last of the pieces of a first draft of the Redshift work - this is the documentation
Changed the aws_redshift_security_group and aws_redshift_parameter_group
to remove the tags from the schema. Tags are a little bit more
complicated than originally thought - I will revisit this later
Then added the schema, CRUD functionality and basic acceptance tests for
aws_redshift_subnet_group
Adding an acceptance test for the Update of subnet_ids in AWS Redshift Subnet Group
This action is almost exactly the same as creating a SimpleAD so we
reuse this resource and allow the user to specify the type when creating
the directory (ignoring the size if the type is MicrosoftAD).
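A sketch of creating a Microsoft AD directory with the shared resource, where type selects the directory flavour and size is ignored; the domain, password and network references are placeholders.
```
resource "aws_directory_service_directory" "ad" {
  name     = "corp.example.com"
  password = "SuperSecretPassw0rd"
  type     = "MicrosoftAD"

  vpc_settings {
    vpc_id     = "${aws_vpc.main.id}"
    subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"]
  }
}
```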
Some error-checking was omitted.
Specifically, the cloudTrailSetLogging call in the Create function was
ignoring the return and cloudTrailGetLoggingStatus could crash on a
nil-dereference during the return. Fixed both.
Fixed some needless casting in cloudTrailGetLoggingStatus.
Clarified error message in acceptance tests.
Removed needless option from example in docs.
The default for `enable_logging`, which defines whether CloudTrail
actually logs events was originally written as defaulting to `false`,
since that's how AWS creates trails.
`true` is likely a better default for Terraform users.
Changed the default and updated the docs.
Changed the acceptance tests to verify new default behavior.
It's a bit confusing to have Terraform poll until instances come up on
ASG creation but not on update. This changes update to also poll if
min_size or desired_capacity are changed.
This changes the waiting behavior to wait for precisely the desired
number of instances instead of that number as a "minimum". I believe
this shouldn't have any undue side effects, and the behavior can still
be opted out of by setting `wait_for_capacity_timeout` to 0.
* master: (95 commits)
Update CHANGELOG.md
Update CHANGELOG.md
Update CHANGELOG.md
Update CHANGELOG.md
upgrade a warning to error
add some logging around create/update requests for IAM user
Update CHANGELOG.md
Update CHANGELOG.md
Build using `make test` on Travis CI
Update CHANGELOG.md
provider/aws: Fix error format in Kinesis Firehose
Update CHANGELOG.md
Changes to Aws Kinesis Firehouse Docs
Update CHANGELOG.md
modify aws_iam_user_test to correctly check username and path for initial and changed username/path
Update CHANGELOG.md
Update CHANGELOG.md
Prompt for input variables before context validate
Removing the AWS DBInstance Acceptance Test for withoutEngine as this is now part of the checkInstanceAttributes func
Making engine_version be computed in the db_instance provider
...
This tripped me up today when I was trying to connect using MFA. I had a look at the source and found the token property, tested it out and lo and behold it worked!
Hopefully this saves someone else going through the same pain
See #2911.
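For anyone else hitting this, a sketch of passing a temporary session token to the provider; the credential values are placeholders for those returned by STS.
```
provider "aws" {
  region     = "us-east-1"
  access_key = "ASIA..."
  secret_key = "..."
  token      = "..." # session token returned alongside the temporary credentials
}
```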
This adds a `name_prefix` option to `aws_launch_configuration` resources.
When specified, it is used instead of `terraform-` as the prefix for the
launch configuration. It conflicts with `name`, so existing
functionality is unchanged. `name` still sets the name explicitly.
Added an acceptance test, and updated the site documentation.
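A brief sketch of the new option; name_prefix conflicts with name, so only one of them is set.
```
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-lc-"
  image_id      = "ami-12345678" # placeholder
  instance_type = "t2.micro"
}
```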
* pr-3707:
config updates for ElastiCache test
Removing the instance_type check in the ElastiCache cluster creation. We now allow the error to bubble up to the user when the wrong instance type is used. The limitation of t2 instance types not allowing snapshotting is also now documented
Making the changes to the snapshotting for Elasticache Redis as per @catsby's findings
Added an extra test for the Elasticache Cluster to show that updates work. Also added some debugging to show that the API returns the Elasticache retention period info
When I was setting the update parameters for the Snapshotting, I didn't update the copy/pasted params
Adding the ability to specify a snapshot window and retention limit for Redis ElastiCache clusters
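A sketch of the two new arguments on a Redis cluster; node type and window values are placeholders (note that t2 cache nodes do not support snapshots, per the change above).
```
resource "aws_elasticache_cluster" "redis" {
  cluster_id               = "example-redis"
  engine                   = "redis"
  node_type                = "cache.m3.medium"
  num_cache_nodes          = 1
  parameter_group_name     = "default.redis2.8"
  port                     = 6379
  snapshot_window          = "05:00-09:00"
  snapshot_retention_limit = 5
}
```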
* master: (335 commits)
Update CHANGELOG.md
config: return to the go1.5 generated lang/y.go
Update CHANGELOG.md
Allow cluster name, not only ARN for aws_ecs_service
Update CHANGELOG.md
Add check errors on reading CORS rules
Update CHANGELOG.md
website: docs for null_resource
dag: use hashcodes to as map key to edge sets
Update CHANGELOG.md
Update CHANGELOG.md
Update CHANGELOG.md
Use hc-releases
provider/google: Added scheduling block to compute_instance
Use vendored fastly logo
Use releases for releases
Update CHANGELOG.md
Update CHANGELOG.md
Update vpn.tf
Update CHANGELOG.md
...
Fixing basic acceptance test.
Adding warning to website about mixed mode.
Adding exists to aws_route.
Adding acceptance test for changing destination_cidr_block.
aws_lb_cookie_stickiness_policy.elbland: Error creating LBCookieStickinessPolicy: ValidationError: Policy name cannot contain characters that are not letters, or digits or the dash.
The `ForceDelete` parameter was getting sent to the upstream API call,
but only after we had already finished draining instances from
Terraform, so it was a moot point by then.
This fixes that by skipping the drain when force_delete is true, and it
also simplifies the field config a bit:
* set a default of false to simplify the logic
* remove `ForceNew` since there's no need to replace the resource to
flip this value
* pull a detail comment from code into the docs
A "Layer" is a particular service that forms part of the infrastructure for
a set of applications. Some layers are application servers and others are
pure infrastructure, like MySQL servers or load balancers.
Although the AWS API only has one type called "Layer", it actually has
a number of different "soft" types that each have slightly different
validation rules and extra properties that are packed into the Attributes
map.
To make the validation rule differences explicit in Terraform, and to make
the Terraform structure more closely resemble the OpsWorks UI than its
API, we use a separate resource type per layer type, with the common code
factored out into a shared struct type.
"Stack" is the root concept in OpsWorks, and acts as a container for a number
of different "layers" that each provide some service for an application.
A stack isn't very interesting on its own, but it needs to be created before
any layers can be created.
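A sketch tying the two concepts together: a stack as the container, plus one custom layer attached to it; the IAM role and instance profile references are placeholders.
```
resource "aws_opsworks_stack" "main" {
  name                         = "example-stack"
  region                       = "us-west-2"
  service_role_arn             = "${aws_iam_role.opsworks_service.arn}"
  default_instance_profile_arn = "${aws_iam_instance_profile.opsworks_instance.arn}"
}

resource "aws_opsworks_custom_layer" "app" {
  name       = "app-servers"
  short_name = "app"
  stack_id   = "${aws_opsworks_stack.main.id}"
}
```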