* Implemented EventHubs
* Missing the sidebar link
* Fixing the type
* Fixing the docs for Namespace
* Removing premium tests
* Checking the correct status code on delete
* Added a test case for the import
* Documentation for importing
* Fixing a typo
The documentation mentions ownership of both VPCs as a requirement for aws_vpc_peering_connection `auto_accept` to work, but if the two VPCs are in separate accounts it does not matter whether both accounts are owned or not.
In #6843 it is stated that aws_vpc_peering_connection only works if both VPCs are in the same AWS account.
The documentation also fails to mention that peering of two VPCs in two different regions is not supported by AWS.
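For illustration only (the VPC ID and account ID below are placeholders), a cross-account peering connection looks roughly like this; `auto_accept` is omitted because it can only be used when both VPCs belong to the same account, so the connection has to be accepted from the peer account's side:
```
resource "aws_vpc_peering_connection" "cross_account" {
  vpc_id        = "${aws_vpc.main.id}"   # requester VPC in this account
  peer_vpc_id   = "vpc-0123abcd"         # placeholder: VPC in the other account
  peer_owner_id = "123456789012"         # placeholder: the other account's ID

  # auto_accept is intentionally not set: it only works when both VPCs
  # are owned by the same account, so this connection must be accepted
  # by the peer account.
}
```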
Use this data source to get the ARN of a certificate in AWS Certificate
Manager (ACM). The process of requesting and verifying a certificate in ACM
requires some manual steps, which means that Terraform cannot automate the
creation of ACM certificates. But using this data source, you can reference
them by domain without having to hard code the ARNs as input.
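A minimal sketch of how this would be used (the domain and output names are illustrative, and the certificate must already exist in ACM):
```
data "aws_acm_certificate" "example" {
  domain = "example.com"   # placeholder: look the certificate up by its domain
}

# The ARN can then be referenced instead of being hard-coded:
output "certificate_arn" {
  value = "${data.aws_acm_certificate.example.arn}"
}
```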
The acceptance test included requires an ACM certificate to be pre-created
and information about it passed in via environment variables. It's a
bit sad but there's really no other way to do it.
* GH-8755 - Adding in support to attach an ASG to an ELB as an independent action (see the sketch after these bullets)
* GH-8755 - Adding in docs
* GH-8755 - Adjusting attribute name and responding to other PR feedback
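As a rough illustration of that attachment resource (the resource names below are assumptions for the sake of the example), the association becomes its own resource rather than part of the ASG definition:
```
# Assumes aws_autoscaling_group.asg and aws_elb.bar are defined elsewhere
# in the configuration; their names here are purely illustrative.
resource "aws_autoscaling_attachment" "asg_attachment_bar" {
  autoscaling_group_name = "${aws_autoscaling_group.asg.id}"
  elb                    = "${aws_elb.bar.id}"
}
```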
* provider/aws: Provide the option to skip_destroy on
aws_volume_attachment
When you want to attach and detach pre-existing EBS volumes to an
instance, you would do that as follows:
```
resource "aws_instance" "web" {
ami = "ami-21f78e11"
availability_zone = "us-west-2a"
instance_type = "t1.micro"
tags {
Name = "HelloWorld"
}
}
data "aws_ebs_volume" "ebs_volume" {
filter {
name = "size"
values = ["${aws_ebs_volume.example.size}"]
}
filter {
name = "availability-zone"
values = ["${aws_ebs_volume.example.availability_zone}"]
}
filter {
name = "tag:Name"
values = ["TestVolume"]
}
}
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdh"
volume_id = "${data.aws_ebs_volume.ebs_volume.id}"
instance_id = "${aws_instance.web.id}"
skip_destroy = true
}
```
The issue here is that when we run `terraform destroy`, the volume tries to get detached from a running instance and goes into a non-responsive state. We would have to force_destroy the volume at that point and risk losing any data on it.
This PR introduces the idea of `skip_destroy` on a volume attachment. tl;dr:
We want the volume to be detached from the instance only when the instance itself has been destroyed. This way the normal shutdown procedures will happen and protect the disk for attachment to another instance.
Volume Attachment Tests:
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSVolumeAttachment_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/02 00:47:27 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSVolumeAttachment_ -timeout 120m
=== RUN TestAccAWSVolumeAttachment_basic
--- PASS: TestAccAWSVolumeAttachment_basic (133.49s)
=== RUN TestAccAWSVolumeAttachment_skipDestroy
--- PASS: TestAccAWSVolumeAttachment_skipDestroy (119.64s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 253.158s
```
EBS Volume Tests:
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEBSVolume_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/02 01:00:18 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEBSVolume_ -timeout 120m
=== RUN TestAccAWSEBSVolume_importBasic
--- PASS: TestAccAWSEBSVolume_importBasic (26.38s)
=== RUN TestAccAWSEBSVolume_basic
--- PASS: TestAccAWSEBSVolume_basic (26.86s)
=== RUN TestAccAWSEBSVolume_NoIops
--- PASS: TestAccAWSEBSVolume_NoIops (27.89s)
=== RUN TestAccAWSEBSVolume_withTags
--- PASS: TestAccAWSEBSVolume_withTags (26.88s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 108.032s
```
* Update volume_attachment.html.markdown
This allows us to filter a specific ebs_volume for attachment to an
aws_instance
```
make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEbsVolumeDataSource_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/01 12:39:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSEbsVolumeDataSource_ -timeout 120m
=== RUN TestAccAWSEbsVolumeDataSource_basic
--- PASS: TestAccAWSEbsVolumeDataSource_basic (28.74s)
=== RUN TestAccAWSEbsVolumeDataSource_multipleFilters
--- PASS: TestAccAWSEbsVolumeDataSource_multipleFilters (28.37s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 57.145s
```
Specifically:
- User data is available in all regions, so remove the sentence saying
to check for supported features in each region.
- Clarify domain vs. DNS record resources.
- Explain which DNS record types the `weight`, `port`, and `priority`
arguments are applicable for.
Fixes #6447
This ensures that all variables of type string are consistently
converted to a string value upon running Terraform.
The place this is done is in the `Variables()` call within the
`terraform` package. This is the function responsible for loading and
merging the variables from the various sources and seems ideal for
proper conversion to consistent values for various types. We actually
already had tests to this effect.
This also adds docs that talk about the fake-ish boolean variables
Terraform currently has and about how future versions will likely
support them properly, which can cause backwards-compatibility issues, so beware.
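A hedged sketch of the kind of usage those docs caution about (the variable name and resource below are invented for illustration): declaring the value as an explicit string avoids relying on how a bare boolean gets coerced.
```
# Declaring the "boolean" as an explicit string makes the converted value
# predictable, since all variables are coerced to strings when loaded.
variable "associate_public_ip" {
  default = "true"
}

resource "aws_instance" "web" {
  ami                         = "ami-21f78e11"
  instance_type               = "t1.micro"
  associate_public_ip_address = "${var.associate_public_ip}"
}
```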
When creating a CloudWatch metric for an Application Load Balancer Target Group, it is
necessary to use the suffix of the ARN as the reference to the load
balancer target group. This commit exposes that suffix as an attribute on the `aws_alb_target_group`
resource to prevent the need to use regular expression substitution to
make the reference.
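A rough sketch of how the new attribute could be consumed (the alarm, target group, and load balancer names are assumptions, and `aws_alb.test` and `aws_vpc.main` are presumed to exist elsewhere in the configuration):
```
resource "aws_alb_target_group" "test" {
  name     = "example-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.main.id}"   # assumes an aws_vpc.main defined elsewhere
}

# The arn_suffix attribute feeds straight into the CloudWatch dimension,
# with no regular expression substitution on the full ARN.
resource "aws_cloudwatch_metric_alarm" "unhealthy_hosts" {
  alarm_name          = "example-unhealthy-hosts"
  namespace           = "AWS/ApplicationELB"
  metric_name         = "UnHealthyHostCount"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 0
  evaluation_periods  = 2
  period              = 60

  dimensions {
    TargetGroup  = "${aws_alb_target_group.test.arn_suffix}"
    LoadBalancer = "${aws_alb.test.arn_suffix}"
  }
}
```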