The example references a non-existent attribute, `allocation_id`, on the `aws_eip` resource. I believe this should actually be `aws_eip.example.id` instead of `aws_eip.example.allocation_id`.
Add the iam_arn attribute to aws_cloudfront_origin_access_identity,
which computes the IAM ARN for a certain CloudFront origin access
identity.
This is necessary because S3 modifies the bucket policy if CanonicalUser
is sent, causing spurious diffs with aws_s3_bucket resources.
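A minimal sketch of how `iam_arn` can be consumed in an S3 bucket policy to avoid that spurious diff (bucket and resource names are illustrative):
```
resource "aws_cloudfront_origin_access_identity" "example" {
  comment = "OAI for the example bucket"
}

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"

  # Granting access via the OAI's IAM ARN instead of its S3 canonical user ID
  # keeps the stored policy identical to the configured one.
  policy = <<POLICY
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "${aws_cloudfront_origin_access_identity.example.iam_arn}"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
POLICY
}
```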
This brings over the work done by @apparentlymart and @radeksimko in
PR #3124, and converts it into a data source for the AWS provider:
This commit adds a helper to construct IAM policy documents using
familiar Terraform concepts. It makes Terraform-style interpolations
easier and resolves the syntax conflict between Terraform interpolations
and IAM policy variables by changing the latter to use &{...} for its
interpolations.
Its use is completely optional and users are free to go on using literal
heredocs, file interpolations or whatever else; this just adds another
option that fits more naturally into a Terraform config.
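A short sketch of the data source in use; the statement contents are illustrative, and `&{aws:username}` shows the new syntax for IAM policy variables:
```
data "aws_iam_policy_document" "example" {
  statement {
    sid     = "ListOwnHomePrefix"
    effect  = "Allow"
    actions = ["s3:ListBucket"]

    resources = ["arn:aws:s3:::example-bucket"]

    condition {
      test     = "StringLike"
      variable = "s3:prefix"

      # &{aws:username} is an IAM policy variable, not a Terraform interpolation
      values = ["home/&{aws:username}/*"]
    }
  }
}

resource "aws_iam_policy" "example" {
  name   = "example"
  policy = "${data.aws_iam_policy_document.example.json}"
}
```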
This data source allows one to look up the most recent AMI for a specific
set of parameters, much like aws ec2 describe-images in the AWS CLI.
Basically a refresh of hashicorp/terraform#4396, in data source form.
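A sketch of the data source; the owner ID and name filter are illustrative:
```
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"
}
```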
* Add per user, role and group policy attachment
* Add docs for new IAM policy attachment resources.
* Make policy attachment resources manage only 1 entity<->policy attachment
* provider/aws: Tidy up IAM Group/User/Role attachments
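A sketch of one of the single-attachment resources (role and policy ARN are illustrative; the user and group variants follow the same shape):
```
resource "aws_iam_role_policy_attachment" "example" {
  role       = "${aws_iam_role.example.name}"
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```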
This commit adds a data source with a single list, `instance`, in the
schema, which gets populated with the availability zones to which an
account has access.
We had a line on the Update func that said:
```
Hash key can only be specified at creation, you cannot modify it.
```
The resource has now been changed to ForceNew on the hash key.
```
aws_dynamodb_table.demo-user-table: Refreshing state... (ID: Users)
aws_dynamodb_table.demo-user-table: Destroying...
aws_dynamodb_table.demo-user-table: Destruction complete
aws_dynamodb_table.demo-user-table: Creating...
aws_dynamodb_table.demo-user-table: Creation complete
```
When `stage_name` is not passed to the `aws_api_gateway_deployment`
resource, `terraform apply` will fail, because `stage_name` is required,
not optional.
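A sketch with `stage_name` set on the deployment (the referenced REST API and integration are assumed to exist elsewhere in the config):
```
resource "aws_api_gateway_deployment" "example" {
  depends_on  = ["aws_api_gateway_integration.example"]
  rest_api_id = "${aws_api_gateway_rest_api.example.id}"
  stage_name  = "prod"
}
```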
As requested in #4822, add support for a KMS Key ID (ARN) for DB
Instance
```
make testacc TEST=./builtin/providers/aws
TESTARGS='-run=TestAccAWSDBInstance_kmsKey' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSDBInstance_kmsKey -timeout 120m
=== RUN TestAccAWSDBInstance_basic
--- PASS: TestAccAWSDBInstance_basic (587.37s)
=== RUN TestAccAWSDBInstance_kmsKey
--- PASS: TestAccAWSDBInstance_kmsKey (625.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 1212.684s
```
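A sketch of the new argument in use; `kms_key_id` takes the key ARN and pairs with `storage_encrypted = true` (values are illustrative):
```
resource "aws_kms_key" "rds" {
  description = "KMS key for RDS storage encryption"
}

resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "mysql"
  instance_class    = "db.m3.medium"
  allocated_storage = 10
  username          = "admin"
  password          = "change-me-please"
  storage_encrypted = true
  kms_key_id        = "${aws_kms_key.rds.arn}"
}
```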
* New top level AWS resource aws_eip_association
* Add documentation for aws_eip_association
* Add tests for aws_eip_association
* provider/aws: Change `aws_eip_association` to have computed
parameters
The AWS API was sending more parameters than we had set, so Terraform
was showing constant changes when plans were formed.
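A minimal sketch of the new resource (the instance is assumed to exist elsewhere in the config):
```
resource "aws_eip" "example" {
  vpc = true
}

resource "aws_eip_association" "example" {
  instance_id   = "${aws_instance.web.id}"
  allocation_id = "${aws_eip.example.id}"
}
```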
Added the hosted_zone_id attribute, which exposes the Route 53 zone ID
that Alias Resource Record Sets can be routed to.
This fixes hashicorp/terraform#6489.
Change the AWS DB Instance to now include the DB Option Group param. Adds a test to prove that it works
Add acceptance tests for the AWS DB Option Group work. This ensures that Options can be added and updated
Documentation for the AWS DB Option Group resource
The default value for `automated_snapshot_retention_period` is 1, so it
can be included in the `CreateClusterInput` without needing to check
whether it is set.
The previous check was actually stopping people from setting the value
to 0 (disabling the snapshots), because of an issue with `d.GetOk()`
evaluating 0 for int values.
* Fix headers and header anchor tags
The markdown parser already generates unique ids for header elements by
downcasing all of the words and replacing spaces with hyphens. Knowing
this, we can take the code blocks out of the headers and use the
generated ids as the link targets.
Aside: I tried to see if there was a standard way of documenting
subresources, but couldn't really find one. Both the aws_elb and
aws_instance resources seem to just say "documented below" without a
link. Then the relevant section is just a new paragraph with a list of
arguments.
* Reformat long lines
I find 80-character lines and whitespace make the lists much easier to
read :)
* Remove extraneous <a> tags for header anchor tags
Now that middleman generates anchor tags for headers automagically, we
don't need to have blank <a> tags for anchor links to use.
Just saying `id` is ambiguous: it could be interpreted as the resource ID, which will fail with the following error: `CertificateNotFound: Server Certificate not found for the key: <id>`. The AWS documentation states that the SSL certificate ID parameter must be the ARN.
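A sketch of an HTTPS listener using the certificate ARN, as the docs now call out (certificate resource name is illustrative):
```
resource "aws_elb" "example" {
  name               = "example-elb"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port      = 8000
    instance_protocol  = "http"
    lb_port            = 443
    lb_protocol        = "https"
    # Must be the full ARN, not just the certificate name or ID
    ssl_certificate_id = "${aws_iam_server_certificate.example.arn}"
  }
}
```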
Previously we linked to the whole request body definition for valid values of `runtime`.
Now we link directly to the docs for `Runtime`, which will hopefully make it easier to find the valid values.
This fixes #4570.
* provider/aws: Fix hashing on CloudFront certificate parameters
Adding necessary type assertion to values on the viewer_certificate hash
function to ensure that certain fields are indeed not zero string
values, versus simply zero interface{} values (aka nil, as is the case
for a map[string]interface{}).
* provider/aws: CloudFront complex structure error handling
Handle errors better on calls to d.Set() in the
aws_cloudfront_distribution, namely in flattenDistributionConfig(). Also
caught a bug in the setting of the origin attribute, which was
incorrectly attempting to set origins.
* provider/aws: Pass pointers to set CloudFront primitives
Change a few d.Set() for primitives in aws_cloudfront_distribution and
aws_cloudfront_origin_access_identity to use the pointer versus a
dereference.
* docs: Fix CloudFront examples formatting
Ran each example thru terraform fmt to fix indentation.
* provider/aws: Remove delete retention on CloudFront tests
To play better with Travis and not bloat the test account with disabled
distributions.
Disable-only functionality has been retained - one can enable it with
the TF_TEST_CLOUDFRONT_RETAIN environment variable.
* provider/aws: CloudFront delete waiter error handling
The call to resourceAwsCloudFrontDistributionWaitUntilDeployed() on
deletion of CloudFront distributions was not trapping error messages,
causing issues with waiter failure.
* provider/aws: Default Network ACL resource
Provides a resource to manage the default AWS Network ACL. VPC Only.
* Remove subnet_id update, mark as computed value. Remove extra tag update
* refactor default rule number to be a constant
* refactor revokeRulesForType to be revokeAllNetworkACLEntries
Refactor method to delete all network ACL entries, regardless of type. The
previous implementation was under the assumption that we may only eliminate some
rule types and possibly not others, so the split was necessary.
We're now removing them all, so the logic isn't necessary
Several doc and test cleanups are here as well
* smite subnet_id, improve docs
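A sketch of adopting and managing a VPC's Default Network ACL (the rules shown are illustrative):
```
resource "aws_vpc" "example" {
  cidr_block = "10.1.0.0/16"
}

resource "aws_default_network_acl" "default" {
  default_network_acl_id = "${aws_vpc.example.default_network_acl_id}"

  ingress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }

  egress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }
}
```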
* CloudFront implementation v3
* Update tests
* Refactor - new resource: aws_cloudfront_distribution
* Includes a complete re-write of the old aws_cloudfront_web_distribution
resource to bring it to feature parity with API and CloudFormation.
* Also includes the aws_cloudfront_origin_access_identity resource to generate
origin access identities for use with S3.
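A condensed sketch of the two resources together, fronting a private S3 bucket (the bucket reference, IDs and TTLs are illustrative):
```
resource "aws_cloudfront_origin_access_identity" "example" {
  comment = "Example OAI"
}

resource "aws_cloudfront_distribution" "example" {
  enabled = true

  origin {
    domain_name = "${aws_s3_bucket.example.bucket_domain_name}"
    origin_id   = "s3-example"

    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.example.cloudfront_access_identity_path}"
    }
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "s3-example"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```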
* provider/aws: CodeDeploy Deployment Group Triggers
- Create a Trigger to Send Notifications for AWS CodeDeploy Events
- Update aws_codedeploy_deployment_group docs
* Refactor validateTriggerEvent function and test
- also rename TestAccAWSCodeDeployDeploymentGroup_triggerConfiguration test
* Enhance existing Deployment Group integration tests
- by using built in resource attribute helpers
- these can get quite verbose and repetitive, so passing the resource to a function might be better
- can't use these (yet) to assert trigger configuration state
* Unit tests for conversions between aws TriggerConfig and terraform resource schema
- buildTriggerConfigs
- triggerConfigsToMap
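A sketch of a trigger on a deployment group (the app, role and topic are assumed to exist elsewhere; the event list is illustrative):
```
resource "aws_codedeploy_deployment_group" "example" {
  app_name              = "${aws_codedeploy_app.example.name}"
  deployment_group_name = "example-group"
  service_role_arn      = "${aws_iam_role.codedeploy.arn}"

  trigger_configuration {
    trigger_name       = "example-trigger"
    trigger_events     = ["DeploymentFailure"]
    trigger_target_arn = "${aws_sns_topic.deployments.arn}"
  }
}
```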
When creating CloudWatch metric alarms, I need the health check ID for the HealthCheckId dimension, so a documented attribute reference would be useful.
```
dimensions {
  "HealthCheckId" = "${aws_route53_health_check.foo.id}"
}
```
When calling AssociateAddress, the PrivateIpAddress parameter must be
used to select which private IP the EIP should associate with, otherwise
the EIP always associates with the _first_ private IP.
Without this parameter, multiple EIPs couldn't be assigned to a single
ENI. Includes covering test and docs update.
Fixes #2997
* update docs on required parameter for api_gateway_integration
This parameter was required for Lambda integration. Otherwise:
`Error creating API Gateway Integration: BadRequestException: Enumeration value for HttpMethod must be non-empty`
* documentation: Including the AWS type on the api_gateway_integration docs
Documentation for `aws_cloudwatch_event_target` to warn that in order to be
able to have your AWS Lambda function or SNS topic invoked by a CloudWatch
Events rule, you must set up the right permissions
using `aws_lambda_permission` or `aws_sns_topic.policy`
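A sketch of the pattern the warning describes, for the Lambda case (function name and schedule are illustrative):
```
resource "aws_cloudwatch_event_rule" "every_five_minutes" {
  name                = "every-five-minutes"
  schedule_expression = "rate(5 minutes)"
}

resource "aws_cloudwatch_event_target" "invoke_lambda" {
  rule = "${aws_cloudwatch_event_rule.every_five_minutes.name}"
  arn  = "${aws_lambda_function.example.arn}"
}

# Without this permission the rule fires but the function is never invoked.
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.example.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.every_five_minutes.arn}"
}
```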
Needed to truncate the identifier for SQL Server engines to keep it at
max 15 chars per the docs. Not a full UUID going into it, but should be
"unique enough" to not matter in practice.
Modified the basic test to use the generated value. Other tests are
still working w/ explicitly specified identifiers.
This adds support for Elastic Beanstalk Applications, Configuration Templates,
and Environments.
This is a combined work of @catsby, @dharrisio, @Bowbaq, and @jen20
The `description` field is easy to confuse for a nice field to
add an arbitrary comment to - and it's surprising that changes to this
field force a new resource, so we add a big note about it to point users
at tags.
Also marked all the other ForceNew attributes on this resource.
It was a mistake to switch fully to `==` when activating waiting for
capacity on updates in #3947. Users that didn't set `min_elb_capacity ==
desired_capacity` and instead treated it as an actual "minimum" would
see timeouts for every create, since their target numbers would never be
reached exactly.
Here, we fix that regression by restoring the minimum waiting behavior
during creates.
In order to preserve all the stated behavior, I had to split out
different criteria for create and update, criteria which are now
exhaustively unit tested.
The set of fields that affect capacity waiting behavior has become a bit
of a mess. Next major release I'd like to rework all of these into a
more consistently named block of config. For now, just getting the
behavior correct and documented.
(Also removes all the fixed names from the ASG tests as I was hitting
collision issues running them over here.)
Fixes #4792
The character set of the database cannot be changed after creation, so
we need to be able to set the CharacterSetName on creation.
This is optional and will automagically default to AL32UTF8.
The AWS SDK will give you an error message if you try to apply this
setting to other engines. The patch will only report the
character_set_name attribute if CharacterSetName is set on the instance.
Signed-off-by: Lars Bahner <lars.bahner@gmail.com>
http and https SNS topic subscription endpoints require confirmation before a valid ARN is set; otherwise
the ARN is left as "pending confirmation". If the endpoint auto-confirms, the ARN is set
asynchronously, but if we try to create another subscription with the same parameters the API returns
"pending subscription" as the ARN and does not create a duplicate subscription. To solve this we fetch
the subscription list for the topic, identify the subscription with the same parameters (protocol,
topic_arn, endpoint) and extract its ARN.
The following changes were made to support http/https endpoints that auto-confirm:
1. Added 3 extra parameters:
   1. endpoint_auto_confirms -> boolean indicating whether the endpoint auto-confirms
   2. max_fetch_retries -> number of times to fetch the subscription list for the topic to get the subscription ARN
   3. fetch_retry_delay -> delay between fetch subscription list calls, as the confirmation happens asynchronously
2. Updated the website doc appropriately
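A sketch of an auto-confirming HTTPS subscription using the new parameters (endpoint URL is illustrative; parameter names are as introduced in this change):
```
resource "aws_sns_topic" "alerts" {
  name = "alerts"
}

resource "aws_sns_topic_subscription" "https" {
  topic_arn              = "${aws_sns_topic.alerts.arn}"
  protocol               = "https"
  endpoint               = "https://example.com/sns-handler"
  endpoint_auto_confirms = true
  # max_fetch_retries and fetch_retry_delay tune how long to poll the
  # topic's subscription list for the confirmed ARN.
}
```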
This allows the profile for the AWS shared credentials provider to be
specified in Terraform configuration. This is useful when defining
providers with aliases, or if you don't want to set environment
variables. Example:
```
$ aws configure --profile this_is_dog
... enter keys
$ cat main.tf
provider "aws" {
  profile = "this_is_dog"
  # Optionally also specify the path to the credentials file
  shared_credentials_file = "/tmp/credentials"
}
```
This is equivalent to specifying AWS_PROFILE or
AWS_SHARED_CREDENTIALS_FILE in the environment.
When spinning up from a snapshot or a read replica, these fields are
now optional:
* allocated_storage
* engine
* password
* username
Some validation logic is added to make these fields required when
starting a database from scratch.
The documentation is updated accordingly.
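A sketch of the two cases; the snapshot identifier and source instance are illustrative:
```
# Restored from a snapshot: storage, engine and credentials come from the snapshot.
resource "aws_db_instance" "restored" {
  identifier          = "example-restored"
  instance_class      = "db.t2.micro"
  snapshot_identifier = "rds:example-2016-04-01-00-00"
}

# Read replica: most settings are inherited from the source instance.
resource "aws_db_instance" "replica" {
  identifier          = "example-replica"
  instance_class      = "db.t2.micro"
  replicate_source_db = "${aws_db_instance.master.identifier}"
}
```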
AWS does some funky stuff to handle all the variations in certificates that CAs like to hand out to users. This commit adds a note about this and details how to avoid issues. See #3837 for more information.
Only use the create_before_destroy hook in launch configurations. The autoscaling group must not use the create_before_destroy hook, because it can be updated in place (not destroyed and re-created). Using the create_before_destroy hook on the autoscaling group also leads to unwanted cyclic dependencies.
also removed the notion of tags from the redshift security group and
parameter group documentation until that has been implemented
Redshift Cluster CRUD and acceptance tests
Removing the Acceptance test for the Cluster Updates. You cannot delete
a cluster immediately after performing an operation on it. We would need
to add a lot of retry logic to the system to get this test to work
Adding some schema validation for RedShift cluster
Adding the last of the pieces of a first draft of the Redshift work - this is the documentation
Changed the aws_redshift_security_group and aws_redshift_parameter_group
to remove the tags from the schema. Tags are a little bit more
complicated than originally thought - I will revisit this later
Then added the schema, CRUD functionality and basic acceptance tests for
aws_redshift_subnet_group
Adding an acceptance test for the Update of subnet_ids in AWS Redshift Subnet Group
This action is almost exactly the same as creating a SimpleAD so we
reuse this resource and allow the user to specify the type when creating
the directory (ignoring the size if the type is MicrosoftAD).
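A sketch of creating a Microsoft AD directory by setting `type` (the domain, password and network references are illustrative):
```
resource "aws_directory_service_directory" "example" {
  name     = "corp.example.com"
  password = "SuperSecretPassw0rd"
  size     = "Large" # ignored when type is MicrosoftAD
  type     = "MicrosoftAD"

  vpc_settings {
    vpc_id     = "${aws_vpc.example.id}"
    subnet_ids = ["${aws_subnet.a.id}", "${aws_subnet.b.id}"]
  }
}
```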
Some error-checking was omitted.
Specifically, the cloudTrailSetLogging call in the Create function was
ignoring the return and cloudTrailGetLoggingStatus could crash on a
nil-dereference during the return. Fixed both.
Fixed some needless casting in cloudTrailGetLoggingStatus.
Clarified error message in acceptance tests.
Removed needless option from example in docs.
The default for `enable_logging`, which defines whether CloudTrail
actually logs events was originally written as defaulting to `false`,
since that's how AWS creates trails.
`true` is likely a better default for Terraform users.
Changed the default and updated the docs.
Changed the acceptance tests to verify new default behavior.
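A sketch of a trail under the new default; `enable_logging` can still be set to false explicitly (the bucket reference is illustrative):
```
resource "aws_cloudtrail" "example" {
  name           = "example-trail"
  s3_bucket_name = "${aws_s3_bucket.trail.id}"
  enable_logging = true # now the default
}
```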
It's a bit confusing to have Terraform poll until instances come up on
ASG creation but not on update. This changes update to also poll if
min_size or desired_capacity are changed.
This changes the waiting behavior to wait for precisely the desired
number of instances instead of that number as a "minimum". I believe
this shouldn't have any undue side effects, and the behavior can still
be opted out of by setting `wait_for_capacity_timeout` to 0.
* master: (95 commits)
Update CHANGELOG.md
Update CHANGELOG.md
Update CHANGELOG.md
Update CHANGELOG.md
upgrade a warning to error
add some logging around create/update requests for IAM user
Update CHANGELOG.md
Update CHANGELOG.md
Build using `make test` on Travis CI
Update CHANGELOG.md
provider/aws: Fix error format in Kinesis Firehose
Update CHANGELOG.md
Changes to Aws Kinesis Firehouse Docs
Update CHANGELOG.md
modify aws_iam_user_test to correctly check username and path for initial and changed username/path
Update CHANGELOG.md
Update CHANGELOG.md
Prompt for input variables before context validate
Removing the AWS DBInstance Acceptance Test for withoutEngine as this is now part of the checkInstanceAttributes func
Making engine_version be computed in the db_instance provider
...
This tripped me up today when I was trying to connect using MFA. I had a look at the source and found the token property, tested it out and lo and behold it worked!
Hopefully this saves someone else going through the same pain
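A sketch of the provider block with the `token` property (variable names are illustrative):
```
provider "aws" {
  region     = "us-east-1"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  # Temporary session token, e.g. from an MFA-authenticated STS session
  token      = "${var.session_token}"
}
```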
See #2911.
This adds a `name_prefix` option to `aws_launch_configuration` resources.
When specified, it is used instead of `terraform-` as the prefix for the
launch configuration. It conflicts with `name`, so existing
functionality is unchanged. `name` still sets the name explicitly.
Added an acceptance test, and updated the site documentation.
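A sketch of `name_prefix` in use (the AMI variable is illustrative):
```
resource "aws_launch_configuration" "example" {
  name_prefix   = "web-lc-"
  image_id      = "${var.ami_id}"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}
```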
* pr-3707:
config updates for ElastiCache test
Removing the instance_type check in the ElastiCache cluster creation. We now allow the error to bubble up to the user when the wrong instance type is used. The limitation that t2 instance types do not allow snapshotting is also now documented
Making the changes to the snapshotting for Elasticache Redis as per @catsby's findings
Added an extra test for the Elasticache Cluster to show that updates work. Also added some debugging to show that the API returns the Elasticache retention period info
When I was setting the update parameters for the Snapshotting, I didn't update the copy/pasted params
Adding the ability to specify a snapshot window and retention limit for Redis ElastiCache clusters
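A sketch of the new snapshot settings on a Redis cluster (the other values are illustrative):
```
resource "aws_elasticache_cluster" "redis" {
  cluster_id               = "example-redis"
  engine                   = "redis"
  node_type                = "cache.m3.medium"
  num_cache_nodes          = 1
  parameter_group_name     = "default.redis2.8"
  port                     = 6379
  snapshot_window          = "05:00-09:00"
  snapshot_retention_limit = 5
}
```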
* master: (335 commits)
Update CHANGELOG.md
config: return to the go1.5 generated lang/y.go
Update CHANGELOG.md
Allow cluster name, not only ARN for aws_ecs_service
Update CHANGELOG.md
Add check errors on reading CORS rules
Update CHANGELOG.md
website: docs for null_resource
dag: use hashcodes to as map key to edge sets
Update CHANGELOG.md
Update CHANGELOG.md
Update CHANGELOG.md
Use hc-releases
provider/google: Added scheduling block to compute_instance
Use vendored fastly logo
Use releases for releases
Update CHANGELOG.md
Update CHANGELOG.md
Update vpn.tf
Update CHANGELOG.md
...
Fixing basic acceptance test.
Adding warning to website about mixed mode.
Adding exists to aws_route.
Adding acceptance test for changing destination_cidr_block.
aws_lb_cookie_stickiness_policy.elbland: Error creating LBCookieStickinessPolicy: ValidationError: Policy name cannot contain characters that are not letters, or digits or the dash.
The `ForceDelete` parameter was getting sent to the upstream API call,
but only after we had already finished draining instances from
Terraform, so it was a moot point by then.
This fixes that by skipping the drain when force_delete is true, and it
also simplifies the field config a bit:
* set a default of false to simplify the logic
* remove `ForceNew` since there's no need to replace the resource to
flip this value
* pull a detail comment from code into the docs
A "Layer" is a particular service that forms part of the infrastructure for
a set of applications. Some layers are application servers and others are
pure infrastructure, like MySQL servers or load balancers.
Although the AWS API only has one type called "Layer", it actually has
a number of different "soft" types that each have slightly different
validation rules and extra properties that are packed into the Attributes
map.
To make the validation rule differences explicit in Terraform, and to make
the Terraform structure more closely resemble the OpsWorks UI than its
API, we use a separate resource type per layer type, with the common code
factored out into a shared struct type.
"Stack" is the root concept in OpsWorks, and acts as a container for a number
of different "layers" that each provide some service for an application.
A stack isn't very interesting on its own, but it needs to be created before
any layers can be created.
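A sketch of a stack with one custom layer (the IAM role and instance profile are assumed to exist elsewhere):
```
resource "aws_opsworks_stack" "example" {
  name                         = "example-stack"
  region                       = "us-west-2"
  service_role_arn             = "${aws_iam_role.opsworks.arn}"
  default_instance_profile_arn = "${aws_iam_instance_profile.opsworks.arn}"
}

resource "aws_opsworks_custom_layer" "example" {
  name       = "example-layer"
  short_name = "example"
  stack_id   = "${aws_opsworks_stack.example.id}"
}
```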
* 'master' of github.com:hashicorp/terraform:
Update CHANGELOG.md
Changing the ElastiCache Cluster configuration_engine to be on the cluster, not on the cache nodes
Adding configuration endpoint to the elasticache cluster nodes
When launching a new RDS instance in a VPC-default AWS account, trying to control which VPC the new RDS instance lands in is not apparent from the parameters available.
The following works:
```
resource "aws_db_subnet_group" "foo" {
name = "foo"
description = "DB Subnet for foo"
subnet_ids = ["${aws_subnet.foo_1a.id}", "${aws_subnet.foo_1b.id}"]
}
resource "aws_db_instance" "bar" {
...
db_subnet_group_name = "${aws_db_subnet_group.foo.name}"
...
}
```
Hopefully this doc update will help others
AWS provides three different ways to create AMIs that each have different
inputs, but once they are complete the same management operations apply.
Thus these three resources each have a different "Create" implementation
but then share the same "Read", "Update" and "Delete" implementations.
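A sketch of one of the three creation paths, `aws_ami_from_instance` (the instance reference is illustrative):
```
resource "aws_ami_from_instance" "example" {
  name               = "example-ami"
  source_instance_id = "${aws_instance.web.id}"
}
```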
The Elasticache API accepts a mixed-case subnet name on create, but
normalizes it to lowercase before storing it. When retrieving a subnet,
the name is treated as case-sensitive, so the lowercase version must be
used.
Given that case within subnet names is not significant, the new StateFunc
on the name attribute causes the state to reflect the lowercase version
that the API uses, and changes in case alone will not show as a diff.
Given that we must look up subnet names in lower case, we set the
instance id to be a lowercase version of the user's provided name. This
then allows a later Refresh call to succeed even if the user provided
a mixed-case name.
Previously users could work around this by just avoiding putting uppercase
letters in the name, but that is often inconvenient if e.g. the name is
being constructed from variables defined elsewhere that may already have
uppercase letters present.
* master: (84 commits)
provider/aws: Update to aws-sdk 0.9.0 rc1
use name instead of id - launch configs use the name and not ID
Fix typo on heroku_cert example
provider/aws: add value into ELB name validation message
tests: fix missed test update from last merge
update prevent_destroy error message
Update CHANGELOG.md
Update CHANGELOG.md
providers/aws: Update Launch Config. docs to detail naming and lifecycle recommendation
release: cleanup after v0.6.3
v0.6.3
Update CHANGELOG.md
core: fix deadlock when dependable node replaced with non-dependable one
tests: extract deadlock checking test helper
core: log every 5s while waiting for dependencies
Fixed indentation in a code sample
state/remote/s3: match with upstream changes
provider/aws: match with upstream changes
google: Add example of two-tier app
Updating Launch Config Docs for Name attribute
...
* upstream/master:
Update CHANGELOG.md
Update CHANGELOG.md
provider/aws: allow external ENI attachments
Update AWS provider documentation
docs/aws: Fix example of aws_iam_role_policy
provider/aws: S3 bucket test that should fail
provider/aws: Return if Bucket not found
Update CHANGELOG.md
Update CHANGELOG.md
helper/schema: record schema version when destroy fails
settings file is not required
provider/azure: Allow settings_file to accept XML string
add note to aws_iam_policy_attachment explaining its use/limitations
docs: clarify template_file path information
google: Sort resources by alphabet in docs
Support go get in go 1.5
Update CHANGELOG.md
aws_network_interface attachment block is not required
provider/aws: Fix issue in Security Group Rules where the Security Group is not found
This commit exports the `arn` as well as the `id`, since IAM
roles require the full resource name rather than just the table
name. I'd even be in favor of having `arn` as the `id` since the
<region, tablename> pair is the uniqueness constraint, but this
will keep backwards compatibility:
http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html
* master: (720 commits)
Update CHANGELOG.md
Update CHANGELOG.md
dynamodb-local Update AWS config https://github.com/hashicorp/terraform/pull/2825#issuecomment-126353610
Make target_pools optional
Update CHANGELOG.md
code formatting
Update CHANGELOG.md
providers/google: Fix reading account_file path
providers/google: Fix error appending
providers/google: Return if we could parse JSON
providers/google: Change account_file to JSON
providers/google: Default account_file* to empty
providers/google: Add account_file/account_file_contents ConflictsWith
providers/google: Document account_file_contents
providers/google: Use account_file_contents if provided
providers/google: Add account_file_contents to provider
Update CHANGELOG.md
Update CHANGELOG.md
dynamodb-local Use ` instead of : to refer region to keep the consistency with the provider docs
dynamodb-local Update aws provider docs to include the `dynamodb_endpoint` argument
...
* master: (86 commits)
providers/google: Fix reading account_file path
providers/google: Fix error appending
providers/google: Return if we could parse JSON
providers/google: Change account_file to JSON
providers/google: Default account_file* to empty
providers/google: Add account_file/account_file_contents ConflictsWith
providers/google: Document account_file_contents
providers/google: Use account_file_contents if provided
providers/google: Add account_file_contents to provider
Update CHANGELOG.md
Update CHANGELOG.md
use d.Id()
Update CHANGELOG.md
Update CHANGELOG.md
scripts: change website_push to push from HEAD
update analytics
core: fix crash on provider warning
provider/aws: Update source to comply with upstream breaking change
Update CHANGELOG.
provider/aws: Fix issue with IAM Server Certificates and Chains
...