The existing example returns an error like the following should you try to
run `terraform plan` against it:
```
Error reading config for aws_subnet[example]: data.aws_availability_zone.name_suffix: data variables must be four parts: data.TYPE.NAME.ATTR in:
${cidrsubnet(aws_vpc.example.cidr_block, 4, var.az_number[data.aws_availability_zone.name_suffix])}
```
* provider/aws: Add aws_alb_listener data source
This adds the aws_alb_listener data source to get information on an AWS
Application Load Balancer listener.
The schema is slightly modified (only in which arguments are optional; the
attributes are the same) and we reuse the aws_alb_listener resource's read
function to get the data.
Note that the HTTPS test here may fail until
hashicorp/terraform#10180 is merged.
* provider/aws: Add aws_alb_listener data source docs
Now documented.
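For illustration, a minimal sketch of using the new data source (the `listener_arn` variable is a hypothetical input defined elsewhere):
```
# Look up an existing ALB listener by its ARN.
data "aws_alb_listener" "front_end" {
  arn = "${var.listener_arn}"
}

# The listener's attributes (port, protocol, etc.) can then be referenced.
output "front_end_port" {
  value = "${data.aws_alb_listener.front_end.port}"
}
```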
Use this data source to get the ARN of a certificate in AWS Certificate
Manager (ACM). The process of requesting and verifying a certificate in ACM
requires some manual steps, which means that Terraform cannot automate the
creation of ACM certificates. With this data source, however, you can reference
them by domain without having to hard-code the ARNs as input.
The acceptance test included requires an ACM certificate to be pre-created,
with information about it passed in via environment variables. It's a
bit sad but there's really no other way to do it.
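As a rough sketch of the intended usage (domain and status filter are illustrative):
```
# Find the ARN of an issued certificate for a given domain.
data "aws_acm_certificate" "example" {
  domain   = "example.com"
  statuses = ["ISSUED"]
}

output "certificate_arn" {
  value = "${data.aws_acm_certificate.example.arn}"
}
```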
This will allow us to filter for a specific ebs_volume to attach to an
aws_instance.
```
make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEbsVolumeDataSource_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/01 12:39:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSEbsVolumeDataSource_ -timeout 120m
=== RUN TestAccAWSEbsVolumeDataSource_basic
--- PASS: TestAccAWSEbsVolumeDataSource_basic (28.74s)
=== RUN TestAccAWSEbsVolumeDataSource_multipleFilters
--- PASS: TestAccAWSEbsVolumeDataSource_multipleFilters (28.37s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 57.145s
```
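A minimal sketch of the intended usage, assuming an `aws_instance.web` and a tagged volume exist elsewhere in the configuration:
```
# Select the most recent gp2 volume carrying a particular Name tag.
data "aws_ebs_volume" "data_volume" {
  most_recent = true

  filter {
    name   = "volume-type"
    values = ["gp2"]
  }

  filter {
    name   = "tag:Name"
    values = ["data-volume"]
  }
}

# Attach the matched volume to an instance defined elsewhere.
resource "aws_volume_attachment" "data_volume" {
  device_name = "/dev/sdh"
  volume_id   = "${data.aws_ebs_volume.data_volume.id}"
  instance_id = "${aws_instance.web.id}"
}
```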
* provider/aws: data source for AWS Security Group
* provider/aws: add documentation for data source for AWS Security Group
* provider/aws: data source for AWS Security Group (improve if condition and syntax)
* fix fmt
* Add AWS Prefix List data source.
AWS Prefix List data source acceptance test.
AWS Prefix List data source documentation.
* Improve error message when PL not matched.
This adds a note in the `aws_iam_policy_document` documentation that
`resources` is required by AWS if used on an IAM policy. Also added a
note on `aws_iam_policy` pointing to `aws_iam_policy_document` as a
convenient way to build the policy.
Closes #9002
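For example, a minimal document with `resources` set, attached via `aws_iam_policy` (the bucket ARN and policy name are placeholders):
```
data "aws_iam_policy_document" "example" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"]
  }
}

resource "aws_iam_policy" "example" {
  name   = "example-policy"
  policy = "${data.aws_iam_policy_document.example.json}"
}
```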
The primary purpose of this data source is to ask the question "what is
my current region?", but it can also be used to retrieve the endpoint
hostname for a particular (possibly non-current) region, should that be
useful for some esoteric case.
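A quick sketch of both uses (the region name is illustrative):
```
# "What is my current region?"
data "aws_region" "current" {
  current = true
}

# Endpoint hostname for a specific, possibly non-current, region.
data "aws_region" "ireland" {
  name = "eu-west-1"
}

output "current_region" {
  value = "${data.aws_region.current.name}"
}

output "ireland_endpoint" {
  value = "${data.aws_region.ireland.endpoint}"
}
```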
This adds a singular data source in addition to the existing plural one.
This allows retrieving data about a specific AZ.
As a helper for writing reusable modules, the AZ letter (without its
usual region name prefix) is exposed so that it can be used in
region-agnostic mappings where a different value is used per AZ, such as
for subnet numbering schemes.
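A sketch of that mapping use case; the `az_number` variable, the AZ name, and `aws_vpc.example` are assumptions standing in for whatever the module defines:
```
variable "az_number" {
  # Hypothetical per-letter subnet numbering scheme.
  default = {
    a = 1
    b = 2
    c = 3
  }
}

data "aws_availability_zone" "target" {
  name = "us-west-2a"
}

# The AZ letter (name_suffix) selects the subnet number regardless of region.
resource "aws_subnet" "example" {
  vpc_id     = "${aws_vpc.example.id}"
  cidr_block = "${cidrsubnet(aws_vpc.example.cidr_block, 4, lookup(var.az_number, data.aws_availability_zone.target.name_suffix))}"
}
```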
Renamed the local_name_filter attribute to name_regex and made it clear in the
docs that this runs locally and could have a performance impact on a large set
of AMIs returned from AWS.
In cases where the filters provided by AWS against the name of an AMI are not
sufficient, allow adding a "local_name_filter", which is a regex used
to filter the AMIs returned by Amazon.
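A rough sketch combining a coarse AWS-side filter with the local regex (the NAT AMI name pattern is just illustrative):
```
data "aws_ami" "nat_ami" {
  most_recent = true
  owners      = ["amazon"]

  # Server-side filter narrows the candidate set first...
  filter {
    name   = "name"
    values = ["amzn-ami-vpc-nat-*"]
  }

  # ...then the regex is applied locally to the returned names.
  name_regex = "^amzn-ami-vpc-nat-hvm-2016\\.09.*"
}

output "nat_ami_id" {
  value = "${data.aws_ami.nat_ami.id}"
}
```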
When you need to enable logging for Redshift, you need to create the
correct policy in the bucket used for the logs. The policy needs to include the
Redshift account ID for the given region. This data source provides a
handy lookup for it:
http://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging
```
make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSRedshiftAccountId_basic'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/16 14:39:35 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSRedshiftAccountId_basic -timeout 120m
=== RUN TestAccAWSRedshiftAccountId_basic
--- PASS: TestAccAWSRedshiftAccountId_basic (19.47s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 19.483s
```
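A sketch of the bucket-policy use, assuming an `aws_s3_bucket.audit_logs` defined elsewhere; see the linked AWS docs for the exact principal and actions required:
```
data "aws_redshift_service_account" "main" {}

data "aws_iam_policy_document" "redshift_logging" {
  statement {
    actions   = ["s3:PutObject"]
    resources = ["${aws_s3_bucket.audit_logs.arn}/*"]

    principals {
      type = "AWS"

      # Per-region Redshift account ID looked up by the data source.
      identifiers = ["${data.aws_redshift_service_account.main.id}"]
    }
  }
}
```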
This data source provides access during configuration to the ID of the
AWS account for the connection to AWS. It is primarily useful for
interpolating into policy documents, for example when creating the
policy for an ELB or ALB access log bucket.
This will need revisiting and further testing once the work for
AssumeRole is integrated.
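A minimal sketch, interpolating the account ID into a bucket name for access logs (the bucket naming is illustrative):
```
data "aws_caller_identity" "current" {}

output "log_bucket_arn" {
  value = "arn:aws:s3:::elb-logs-${data.aws_caller_identity.current.account_id}"
}
```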
* Add state filter to aws_availability_zones data source.
This commit adds the ability to filter Availability Zones based on state, where
by default it would only list available zones.
Be advised that this does not always work reliably for older accounts which
were created in the pre-VPC era of EC2. These accounts tend to return
availability zones that are not VPC-enabled, thus creating a custom subnet
within such an Availability Zone would result in a failure.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Update documentation for aws_availability_zones data source.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
* Do not filter on state by default.
This commit makes the state filter applicable only when set.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
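With the filter made opt-in, usage looks roughly like this:
```
# Only list zones that are currently available; omit `state` to skip filtering.
data "aws_availability_zones" "available" {
  state = "available"
}

output "zone_names" {
  value = "${data.aws_availability_zones.available.names}"
}
```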
We cannot use the "id" field to represent policy ID, because it is used
internally by Terraform. Also change the "id" field within a statement
to "sid" for consistency with the generated JSON.
* small doc update
* provider/atlas: Add docs for Artifact Data Source
* provider/atlas: Remove a test method that isn't used
* provider/atlas: Add Data Source for Atlas Artifact
* provider/atlas: Show deprecation error on atlas_artifact resource
This data source allows Terraform to work with externally modified state, e.g.
when you're using an ECS service which is continuously updated by your CI via the
AWS CLI.
Right now you'd have to wrap Terraform in a shell script which looks up the
current image digest, so that running Terraform won't change the updated service.
Using the aws_ecs_container_definition data source you can now leverage
Terraform directly, removing the wrapper entirely.
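A sketch of that pattern (the task definition family and container name are placeholders):
```
# Read the container definition as it currently exists, including any image
# pushed by CI outside of Terraform.
data "aws_ecs_container_definition" "app" {
  task_definition = "my-service"
  container_name  = "web"
}

output "current_image_digest" {
  value = "${data.aws_ecs_container_definition.app.image_digest}"
}
```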
Since this resource produces a list it feels more intuitive to give its
attribute a plural name, and since the noun "instance" already means
something specific in the AWS provider that doesn't apply here we use
"names" to indicate that these are availability zone names.
Also includes updating the docs to not show a dynamic count example for
now, since we don't support that yet.
This brings over the work done by @apparentlymart and @radeksimko in
PR #3124, and converts it into a data source for the AWS provider:
This commit adds a helper to construct IAM policy documents using
familiar Terraform concepts. It makes Terraform-style interpolations
easier and resolves the syntax conflict between Terraform interpolations
and IAM policy variables by changing the latter to use &{...} for its
interpolations.
Its use is completely optional and users are free to go on using literal
heredocs, file interpolations or whatever else; this just adds another
option that fits more naturally into a Terraform config.
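For instance, an IAM policy variable written with the `&{...}` form (the bucket name is a placeholder):
```
data "aws_iam_policy_document" "user_home" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::example-bucket"]

    condition {
      test     = "StringLike"
      variable = "s3:prefix"

      # &{aws:username} is rendered as ${aws:username} in the JSON output,
      # so it does not clash with Terraform's own interpolation syntax.
      values = ["home/&{aws:username}/*"]
    }
  }
}
```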
This data source allows one to look up the most recent AMI for a specific
set of parameters, much like `aws ec2 describe-images` in the AWS CLI.
Basically a refresh of hashicorp/terraform#4396, in data source form.
This commit adds a data source whose schema has a single list, `instance`,
which gets populated with the availability zones to which an account has access.