Merge remote-tracking branch 'upstream/master' into add-tags-plus-networktags

Carles Figuerola 2016-01-21 22:37:23 -06:00
commit 0983ca4c2a
163 changed files with 3936 additions and 419 deletions


@@ -1,12 +1,23 @@
## 0.6.10 (Unreleased)
BACKWARDS INCOMPATIBILITIES:
* The `-module-depth` flag available on `plan`, `apply`, `show`, and `graph` now defaults to `-1`, causing
resources within modules to be expanded in command output. This is only a cosmetic change; it does not affect
any behavior.
* This release includes a bugfix for `$${}` interpolation escaping. These strings are now properly converted to `${}`
during interpolation. This may cause diffs on existing configurations in certain cases.
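As a small illustration of the escaping behavior described above (a hypothetical configuration, not taken from the release itself):

```hcl
# With this fix, "$${var.name}" in a configuration is rendered as the
# literal text "${var.name}" rather than being interpolated.
variable "name" {
    default = "world"
}

output "interpolated" {
    value = "Hello, ${var.name}"    # rendered with the variable's value
}

output "literal" {
    value = "Hello, $${var.name}"   # rendered as the literal ${var.name}
}
```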
FEATURES:
* **New resource: `azurerm_cdn_endpoint`** [GH-4759]
* **New resource: `azurerm_cdn_profile`** [GH-4740]
* **New resource: `azurerm_network_security_rule`** [GH-4586]
* **New resource: `azurerm_subnet`** [GH-4595]
* **New resource: `azurerm_network_interface`** [GH-4598]
* **New resource: `azurerm_route_table`** [GH-4602]
* **New resource: `azurerm_route`** [GH-4604]
* **New resource: `azurerm_storage_account`** [GH-4698]
* **New resource: `aws_lambda_alias`** [GH-4664]
* **New resource: `aws_redshift_cluster`** [GH-3862]
* **New resource: `aws_redshift_security_group`** [GH-3862]
@@ -19,6 +30,8 @@ FEATURES:
IMPROVEMENTS:
* core: Add `sha256()` interpolation function [GH-4704]
* core: Validate lifecycle keys to show helpful error messages when they are mistyped [GH-4745]
* core: Default `module-depth` parameter to `-1`, which expands resources within modules in command output [GH-4763]
* provider/aws: Add new parameters `az_mode` and `availability_zone(s)` in ElastiCache [GH-4631]
* provider/aws: Allow ap-northeast-2 (Seoul) as valid region [GH-4637]
* provider/aws: Limit SNS Topic Subscription protocols [GH-4639]
@@ -30,15 +43,27 @@ IMPROVEMENTS:
* provider/aws: Added support for `encrypted` on `ebs_block_devices` in Launch Configurations [GH-4481]
* provider/aws: Add support for creating Managed Microsoft Active Directory
and Directory Connectors [GH-4388]
* provider/aws: Mark some `aws_db_instance` fields as optional [GH-3138]
* provider/digitalocean: Add support for reassigning `digitalocean_floating_ip` resources [GH-4476]
* provider/dme: Add support for Global Traffic Director locations on `dme_record` resources [GH-4305]
* provider/docker: Add support for adding host entries on `docker_container` resources [GH-3463]
* provider/docker: Add support for mounting named volumes on `docker_container` resources [GH-4480]
* provider/google: Add content field to bucket object [GH-3893]
* provider/google: Add support for `named_port` blocks on `google_compute_instance_group_manager` resources [GH-4605]
* provider/openstack: Add "personality" support to instance resource [GH-4623]
* provider/packet: Handle external state changes for Packet resources gracefully [GH-4676]
* provider/tls: `tls_private_key` now exports attributes with public key in both PEM and OpenSSH format [GH-4606]
* state/remote: Allow KMS Key Encryption to be used with S3 backend [GH-2903]
BUG FIXES:
* core: Fix handling of literals with escaped interpolations `$${var}` [GH-4747]
* core: Fix diff mismatch when RequiresNew field and list both change [GH-4749]
* core: Respect module target path argument on `terraform init` [GH-4753]
* core: Write planfile even on empty plans [GH-4766]
* core: Add validation error when output is missing value field [GH-4762]
* core: Fix improper handling of orphan resources when targeting [GH-4574]
* config: Detect a specific JSON edge case and show a helpful workaround [GH-4746]
* provider/openstack: Ensure valid Security Group Rule attribute combination [GH-4466]
* provider/openstack: Don't put fixed_ip in port creation request if not defined [GH-4617]
* provider/google: Clarify SQL Database Instance recent name restriction [GH-4577]
@@ -47,7 +72,9 @@ BUG FIXES:
* provider/aws: Trap Instance error from mismatched SG IDs and Names [GH-4240]
* provider/aws: EBS optimised to force new resource in AWS Instance [GH-4627]
* provider/aws: `default_result` on `aws_autoscaling_lifecycle_hook` resources is now computed [GH-4695]
* provider/mailgun: Handle the fact that the domain destroy API is eventually consistent [GH-4777]
* provider/template: Fix race causing sporadic crashes in template_file with count > 1 [GH-4694]
* provider/template: Add support for updating `template_cloudinit_config` resources [GH-4757]

## 0.6.9 (January 8, 2016)


@@ -11,30 +11,98 @@ best way to contribute to the project, read on. This document will cover
what we're looking for. By addressing all the points we're looking for,
it raises the chances we can quickly merge or address your contributions.
Specifically, we have provided checklists below for each type of issue and pull
request that can happen on the project. These checklists represent everything
we need to be able to review and respond quickly.
## HashiCorp vs. Community Providers
We separate providers out into what we call "HashiCorp Providers" and
"Community Providers".
HashiCorp providers are providers that we'll dedicate full time resources to
improving, supporting the latest features, and fixing bugs. These are providers
we understand deeply and are confident we have the resources to manage
ourselves.
Community providers are providers where we depend on the community to
contribute fixes and enhancements to improve. HashiCorp will run automated
tests and ensure these providers continue to work, but will not dedicate full
time resources to add new features to these providers. These providers are
available in official Terraform releases, but the functionality is primarily
contributed.
The current list of HashiCorp Providers is as follows:
* `aws`
* `azurerm`
* `google`
Our testing standards are the same for both HashiCorp and Community providers,
and HashiCorp runs full acceptance test suites for every provider nightly to
ensure Terraform remains stable.
We make the distinction between these two types of providers to help
highlight the vast amounts of community effort that goes into making Terraform
great, and to help contributors better understand the role HashiCorp employees
play in the various areas of the code base.
## Issues

### Issue Reporting Checklists

We welcome issues of all kinds including feature requests, bug reports, and
general questions. Below you'll find checklists with guidelines for well-formed
issues of each type.

#### Bug Reports

- [ ] __Test against latest release__: Make sure you test against the latest
released version. It is possible we already fixed the bug you're experiencing.

- [ ] __Search for possible duplicate reports__: It's helpful to keep bug
reports consolidated to one thread, so do a quick search on existing bug
reports to check if anybody else has reported the same thing. You can scope
searches by the label "bug" to help narrow things down.
- [ ] __Include steps to reproduce__: Provide steps to reproduce the issue,
along with your `.tf` files, with secrets removed, so we can try to
reproduce it. Without this, it makes it much harder to fix the issue.
- [ ] __For panics, include `crash.log`__: If you experienced a panic, please
create a [gist](https://gist.github.com) of the *entire* generated crash log
for us to look at. Double check no sensitive items were in the log.
#### Feature Requests
- [ ] __Search for possible duplicate requests__: It's helpful to keep requests
consolidated to one thread, so do a quick search on existing requests to
check if anybody else has reported the same thing. You can scope searches by
the label "enhancement" to help narrow things down.
- [ ] __Include a use case description__: In addition to describing the
behavior of the feature you'd like to see added, it's helpful to also lay
out the reason why the feature would be important and how it would benefit
Terraform users.
#### Questions
- [ ] __Search for answers in Terraform documentation__: We're happy to answer
questions in GitHub Issues, but it helps reduce issue churn and maintainer
workload if you work to find answers to common questions in the
documentation. Oftentimes, Question issues result in documentation updates
to help future users, so if you don't find an answer, you can give us
pointers for where you'd expect to see it in the docs.
### Issue Lifecycle

1. The issue is reported.

2. The issue is verified and categorized by a Terraform collaborator.
Categorization is done via GitHub labels. We generally use a two-label
system of (1) issue/PR type, and (2) section of the codebase. Type is
usually "bug", "enhancement", "documentation", or "question", and section
can be any of the providers or provisioners or "core".

3. Unless it is critical, the issue is left for a period of time (sometimes
many weeks), giving outside contributors a chance to address the issue.
@@ -47,49 +115,401 @@ it raises the chances we can quickly merge or address your contributions.
the issue tracker clean. The issue is still indexed and available for
future viewers, or can be re-opened if necessary.
## Pull Requests

Thank you for contributing! Here you'll find information on what to include in
your Pull Request to ensure it is accepted quickly.

* For pull requests that follow the guidelines, we expect to be able to review
and merge very quickly.
* Pull requests that don't follow the guidelines will be annotated with what
they're missing. A community or core team member may be able to swing around
and help finish up the work, but these PRs will generally hang out much
longer until they can be completed and merged.

### Pull Request Lifecycle

1. You are welcome to submit your pull request for commentary or review before
it is fully completed. Please prefix the title of your pull request with
"[WIP]" to indicate this. It's also a good idea to include specific
questions or items you'd like feedback on.

2. Once you believe your pull request is ready to be merged, you can remove any
"[WIP]" prefix from the title and a core team member will review. Follow
[the checklists below](#checklists-for-contribution) to help ensure that
your contribution will be merged quickly.

3. One of Terraform's core team members will look over your contribution and
either merge it or provide comments letting you know if there is anything left
to do. We do our best to provide feedback in a timely manner, but it may take
some time for us to respond.

4. Once all outstanding comments and checklist items have been addressed, your
contribution will be merged! Merged PRs will be included in the next
Terraform release. The core team takes care of updating the CHANGELOG as
they merge.

5. In rare cases, we might decide that a PR should be closed. We'll make sure
to provide clear reasoning when this happens.

### Checklists for Contribution

There are several different kinds of contribution, each of which has its own
standards for a speedy review. The following sections describe guidelines for
each type of contribution.
#### Documentation Update
Because [Terraform's website][website] is in the same repo as the code, it's
easy for anybody to help us improve our docs.
- [ ] __Reasoning for docs update__: Including a quick explanation for why the
update is needed is helpful for reviewers.
- [ ] __Relevant Terraform version__: Is this update worth deploying to the
site immediately, or is it referencing an upcoming version of Terraform and
should get pushed out with the next release?
#### Enhancement/Bugfix to a Resource
Working on existing resources is a great way to get started as a Terraform
contributor because you can work within existing code and tests to get a feel
for what to do.
- [ ] __Acceptance test coverage of new behavior__: Existing resources each
have a set of [acceptance tests][acctests] covering their functionality.
These tests should exercise all the behavior of the resource. Whether you are
adding something or fixing a bug, the idea is to have an acceptance test that
fails if your code were to be removed. Sometimes it is sufficient to
"enhance" an existing test by adding an assertion or tweaking the config
that is used, but it is often better to add a new test. You can copy/paste an
existing test and follow the conventions you see there, modifying the test
to exercise the behavior of your code.
- [ ] __Documentation updates__: If your code makes any changes that need to
be documented, you should include those doc updates in the same PR. The
[Terraform website][website] source is in this repo and includes
instructions for getting a local copy of the site up and running if you'd
like to preview your changes.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
#### New Resource
Implementing a new resource is a good way to learn more about how Terraform
interacts with upstream APIs. There are plenty of examples to draw from in the
existing resources, but you still get to implement something completely new.
- [ ] __Acceptance tests__: New resources should include acceptance tests
covering their behavior. See [Writing Acceptance
Tests](#writing-acceptance-tests) below for a detailed guide on how to
approach these.
- [ ] __Documentation__: Each resource gets a page in the Terraform
documentation. The [Terraform website][website] source is in this
repo and includes instructions for getting a local copy of the site up and
running if you'd like to preview your changes. For a resource, you'll want
to add a new file in the appropriate place and add a link to the sidebar for
that page.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
#### New Provider

Implementing a new provider gives Terraform the ability to manage resources in
a whole new API. It's a larger undertaking, but brings major new functionality
into Terraform.

- [ ] __Acceptance tests__: Each provider should include an acceptance test
suite, with tests for each resource covering its behavior. See [Writing
Acceptance Tests](#writing-acceptance-tests) below for a detailed guide on
how to approach these.
- [ ] __Documentation__: Each provider has a section in the Terraform
documentation. The [Terraform website][website] source is in this repo and
includes instructions for getting a local copy of the site up and running if
you'd like to preview your changes. For a provider, you'll want to add a new
index file and individual pages for each resource.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
#### Core Bugfix/Enhancement
We are always happy when any developer is interested in diving into Terraform's
core to help out! Here's what we look for in smaller Core PRs.
- [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at
several different layers of abstraction. Generally the best place to start
is with a "Context Test". These are higher-level tests that interact
end-to-end with most of Terraform's core. They are divided into test files
for each major action (plan, apply, etc.). Getting a failing test is a great
way to prove out a bug report or a new enhancement. With a context test in
place, you can work on implementation and lower level unit tests. Lower
level tests are largely context dependent, but the Context Tests are almost
always part of core work.
- [ ] __Documentation updates__: If the core change involves anything that
needs to be reflected in our documentation, you can make those changes in
the same PR. The [Terraform website][website] source is in this repo and
includes instructions for getting a local copy of the site up and running if
you'd like to preview your changes.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
#### Core Feature
If you're interested in taking on a larger core feature, it's a good idea to
get feedback early and often on the effort.
- [ ] __Early validation of idea and implementation plan__: Terraform's core
is complicated enough that there are often several ways to implement
something, each of which has different implications and tradeoffs. Working
through a plan of attack with the team before you dive into implementation
will help ensure that you're working in the right direction.
- [ ] __Unit tests__: Terraform's core is covered by hundreds of unit tests at
several different layers of abstraction. Generally the best place to start
is with a "Context Test". These are higher-level tests that interact
end-to-end with most of Terraform's core. They are divided into test files
for each major action (plan, apply, etc.). Getting a failing test is a great
way to prove out a bug report or a new enhancement. With a context test in
place, you can work on implementation and lower level unit tests. Lower
level tests are largely context dependent, but the Context Tests are almost
always part of core work.
- [ ] __Documentation updates__: If the core change involves anything that
needs to be reflected in our documentation, you can make those changes in
the same PR. The [Terraform website][website] source is in this repo and
includes instructions for getting a local copy of the site up and running if
you'd like to preview your changes.
- [ ] __Well-formed Code__: Do your best to follow existing conventions you
see in the codebase, and ensure your code is formatted with `go fmt`. (The
Travis CI build will fail if `go fmt` has not been run on incoming code.)
The PR reviewers can help out on this front, and may provide comments with
suggestions on how to improve the code.
### Writing Acceptance Tests
Terraform includes an acceptance test harness that does most of the repetitive
work involved in testing a resource.
#### Acceptance Tests Often Cost Money to Run
Because acceptance tests create real resources, they often cost money to run.
Because the resources only exist for a short period of time, the total amount
of money required is usually relatively small. Nevertheless, we don't want
financial limitations to be a barrier to contribution, so if you are unable to
pay to run acceptance tests for your contribution, simply mention this in your
pull request. We will happily accept "best effort" implementations of
acceptance tests and run them for you on our side. This might mean that your PR
takes a bit longer to merge, but it most definitely is not a blocker for
contributions.
#### Running an Acceptance Test
Acceptance tests can be run using the `testacc` target in the Terraform
`Makefile`. The individual tests to run can be controlled using a regular
expression. Prior to running the tests, provider configuration details such as
access keys must be made available as environment variables.
For example, to run an acceptance test against the Azure Resource Manager
provider, the following environment variables must be set:
```sh
export ARM_SUBSCRIPTION_ID=...
export ARM_CLIENT_ID=...
export ARM_CLIENT_SECRET=...
export ARM_TENANT_ID=...
```
Tests can then be run by specifying the target provider and a regular
expression defining the tests to run:
```sh
$ make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMPublicIpStatic_update'
==> Checking that code complies with gofmt requirements...
go generate ./...
TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMPublicIpStatic_update -timeout 120m
=== RUN TestAccAzureRMPublicIpStatic_update
--- PASS: TestAccAzureRMPublicIpStatic_update (177.48s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 177.504s
```
Entire resource test suites can be targeted by using the naming convention to
write the regular expression. For example, to run all tests of the
`azurerm_public_ip` resource rather than just the update test, you can start
testing like this:
```sh
$ make testacc TEST=./builtin/providers/azurerm TESTARGS='-run=TestAccAzureRMPublicIpStatic'
==> Checking that code complies with gofmt requirements...
go generate ./...
TF_ACC=1 go test ./builtin/providers/azurerm -v -run=TestAccAzureRMPublicIpStatic -timeout 120m
=== RUN TestAccAzureRMPublicIpStatic_basic
--- PASS: TestAccAzureRMPublicIpStatic_basic (137.74s)
=== RUN TestAccAzureRMPublicIpStatic_update
--- PASS: TestAccAzureRMPublicIpStatic_update (180.63s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 318.392s
```
#### Writing an Acceptance Test
Terraform has a framework for writing acceptance tests which minimises the
amount of boilerplate code necessary to use common testing patterns. The entry
point to the framework is the `resource.Test()` function.
Tests are divided into `TestStep`s. Each `TestStep` proceeds by applying some
Terraform configuration using the provider under test, and then verifying that
results are as expected by making assertions using the provider API. It is
common for a single test function to exercise both the creation of and updates
to a single resource. Most tests follow a similar structure:
1. Pre-flight checks are made to ensure that sufficient provider configuration
is available to be able to proceed - for example, in an acceptance test
targeting AWS, `AWS_ACCESS_KEY_ID` and `AWS_SECRET_KEY` must be set prior
to running acceptance tests. This is common to all tests exercising a single
provider.
Each `TestStep` is defined in the call to `resource.Test()`. Most assertion
functions are defined out of band with the tests. This keeps the tests
readable, and allows reuse of assertion functions across different tests of the
same type of resource. The definition of a complete test looks like this:
```go
func TestAccAzureRMPublicIpStatic_update(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testCheckAzureRMPublicIpDestroy,
		Steps: []resource.TestStep{
			resource.TestStep{
				Config: testAccAzureRMVPublicIpStatic_basic,
				Check: resource.ComposeTestCheckFunc(
					testCheckAzureRMPublicIpExists("azurerm_public_ip.test"),
				),
			},
		},
	})
}
```
When executing the test, the following steps are taken for each `TestStep`:
1. The Terraform configuration required for the test is applied. This is
responsible for configuring the resource under test, and any dependencies it
may have. For example, to test the `azurerm_public_ip` resource, an
`azurerm_resource_group` is required. This results in configuration which
looks like this:
```hcl
resource "azurerm_resource_group" "test" {
    name     = "acceptanceTestResourceGroup1"
    location = "West US"
}

resource "azurerm_public_ip" "test" {
    name                         = "acceptanceTestPublicIp1"
    location                     = "West US"
    resource_group_name          = "${azurerm_resource_group.test.name}"
    public_ip_address_allocation = "static"
}
```
1. Assertions are run using the provider API. These use the provider API
directly rather than asserting against the resource state. For example, to
verify that the `azurerm_public_ip` described above was created
successfully, a test function like this is used:

```go
func testCheckAzureRMPublicIpExists(name string) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		// Ensure we have enough information in state to look up in API
		rs, ok := s.RootModule().Resources[name]
		if !ok {
			return fmt.Errorf("Not found: %s", name)
		}

		publicIPName := rs.Primary.Attributes["name"]
		resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"]
		if !hasResourceGroup {
			return fmt.Errorf("Bad: no resource group found in state for public ip: %s", publicIPName)
		}

		conn := testAccProvider.Meta().(*ArmClient).publicIPClient

		resp, err := conn.Get(resourceGroup, publicIPName, "")
		if err != nil {
			return fmt.Errorf("Bad: Get on publicIPClient: %s", err)
		}

		if resp.StatusCode == http.StatusNotFound {
			return fmt.Errorf("Bad: Public IP %q (resource group: %q) does not exist", name, resourceGroup)
		}

		return nil
	}
}
```
Notice that the only information used from the Terraform state is the ID of
the resource - though in this case it is necessary to split the ID into
constituent parts in order to use the provider API. For computed properties,
we instead assert that the value saved in the Terraform state was the
expected value if possible. The testing framework provides helper functions
for several common types of check - for example:
```go
resource.TestCheckResourceAttr("azurerm_public_ip.test", "domain_name_label", "mylabel01"),
```
1. The resources created by the test are destroyed. This step happens
automatically, and is the equivalent of calling `terraform destroy`.
1. Assertions are made against the provider API to verify that the resources
have indeed been removed. If these checks fail, the test fails and reports
"dangling resources". The code to ensure that the `azurerm_public_ip` shown
above has been destroyed looks like this:
```go
func testCheckAzureRMPublicIpDestroy(s *terraform.State) error {
	conn := testAccProvider.Meta().(*ArmClient).publicIPClient

	for _, rs := range s.RootModule().Resources {
		if rs.Type != "azurerm_public_ip" {
			continue
		}

		name := rs.Primary.Attributes["name"]
		resourceGroup := rs.Primary.Attributes["resource_group_name"]

		resp, err := conn.Get(resourceGroup, name, "")
		if err != nil {
			return nil
		}

		if resp.StatusCode != http.StatusNotFound {
			return fmt.Errorf("Public IP still exists:\n%#v", resp.Properties)
		}
	}

	return nil
}
```
These functions usually test only for the resource directly under test: we
skip the check that the `azurerm_resource_group` has been destroyed when
testing `azurerm_public_ip`, under the assumption that
`azurerm_resource_group` is tested independently in its own acceptance
tests.
[website]: https://github.com/hashicorp/terraform/tree/master/website
[acctests]: https://github.com/hashicorp/terraform#acceptance-tests
[ml]: https://groups.google.com/group/terraform-tool


@@ -21,6 +21,10 @@ quickdev: generate
core-dev: fmtcheck generate
	go install github.com/hashicorp/terraform
# Shorthand for quickly testing the core of Terraform (i.e. "not providers")
core-test: generate
@echo "Testing core packages..." && go test $(shell go list ./... | grep -v builtin)
# Shorthand for building and installing just one plugin for local testing. # Shorthand for building and installing just one plugin for local testing.
# Run as (for example): make plugin-dev PLUGIN=provider-aws # Run as (for example): make plugin-dev PLUGIN=provider-aws
plugin-dev: fmtcheck generate plugin-dev: fmtcheck generate

View File

@@ -142,7 +142,7 @@ func resourceAwsCloudFormationStackCreate(d *schema.ResourceData, meta interface
     wait := resource.StateChangeConf{
         Pending: []string{"CREATE_IN_PROGRESS", "ROLLBACK_IN_PROGRESS", "ROLLBACK_COMPLETE"},
-        Target: "CREATE_COMPLETE",
+        Target: []string{"CREATE_COMPLETE"},
         Timeout: 30 * time.Minute,
         MinTimeout: 5 * time.Second,
         Refresh: func() (interface{}, string, error) {
@@ -311,7 +311,7 @@ func resourceAwsCloudFormationStackUpdate(d *schema.ResourceData, meta interface
             "UPDATE_ROLLBACK_COMPLETE_CLEANUP_IN_PROGRESS",
             "UPDATE_ROLLBACK_COMPLETE",
         },
-        Target: "UPDATE_COMPLETE",
+        Target: []string{"UPDATE_COMPLETE"},
         Timeout: 15 * time.Minute,
         MinTimeout: 5 * time.Second,
         Refresh: func() (interface{}, string, error) {
@@ -370,7 +370,7 @@ func resourceAwsCloudFormationStackDelete(d *schema.ResourceData, meta interface
     wait := resource.StateChangeConf{
         Pending: []string{"DELETE_IN_PROGRESS", "ROLLBACK_IN_PROGRESS"},
-        Target: "DELETE_COMPLETE",
+        Target: []string{"DELETE_COMPLETE"},
         Timeout: 30 * time.Minute,
         MinTimeout: 5 * time.Second,
         Refresh: func() (interface{}, string, error) {
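The recurring change across these hunks is the `Target` field of `resource.StateChangeConf` moving from a single `string` to a `[]string`, so a waiter can accept any of several terminal states, with an empty slice standing for "wait until the resource is gone". A self-contained sketch of that membership check (`reachedTarget` is an illustrative helper, not part of the `helper/resource` API, and representing "gone" as an empty state string is an assumption of this sketch):

```go
package main

import "fmt"

// reachedTarget reports whether the current state satisfies the target
// set: any listed state is acceptable, and an empty target set only
// matches the empty state used here to represent a vanished resource.
func reachedTarget(state string, targets []string) bool {
	if len(targets) == 0 {
		return state == ""
	}
	for _, t := range targets {
		if state == t {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(reachedTarget("CREATE_COMPLETE", []string{"CREATE_COMPLETE"})) // true
	fmt.Println(reachedTarget("deleting", []string{}))                         // false
	fmt.Println(reachedTarget("", []string{}))                                 // true
}
```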

@@ -68,7 +68,7 @@ func resourceAwsCustomerGatewayCreate(d *schema.ResourceData, meta interface{})
     // Wait for the CustomerGateway to be available.
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: customerGatewayRefreshFunc(conn, *customerGateway.CustomerGatewayId),
         Timeout: 10 * time.Minute,
         Delay: 10 * time.Second,

@@ -388,7 +388,7 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "backing-up", "modifying", "resetting-master-credentials",
             "maintenance", "renaming", "rebooting", "upgrading"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta),
         Timeout: 40 * time.Minute,
         MinTimeout: 10 * time.Second,
@@ -512,7 +512,7 @@ func resourceAwsDbInstanceCreate(d *schema.ResourceData, meta interface{}) error
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "backing-up", "modifying", "resetting-master-credentials",
             "maintenance", "renaming", "rebooting", "upgrading"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta),
         Timeout: 40 * time.Minute,
         MinTimeout: 10 * time.Second,
@@ -663,7 +663,7 @@ func resourceAwsDbInstanceDelete(d *schema.ResourceData, meta interface{}) error
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "backing-up",
             "modifying", "deleting", "available"},
-        Target: "",
+        Target: []string{},
         Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta),
         Timeout: 40 * time.Minute,
         MinTimeout: 10 * time.Second,

@@ -226,7 +226,7 @@ func resourceAwsDbParameterGroupUpdate(d *schema.ResourceData, meta interface{})
 func resourceAwsDbParameterGroupDelete(d *schema.ResourceData, meta interface{}) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsDbParameterGroupDeleteRefreshFunc(d, meta),
         Timeout: 3 * time.Minute,
         MinTimeout: 1 * time.Second,

@@ -125,7 +125,7 @@ func resourceAwsDbSecurityGroupCreate(d *schema.ResourceData, meta interface{})
     stateConf := &resource.StateChangeConf{
         Pending: []string{"authorizing"},
-        Target: "authorized",
+        Target: []string{"authorized"},
         Refresh: resourceAwsDbSecurityGroupStateRefreshFunc(d, meta),
         Timeout: 10 * time.Minute,
     }

@@ -189,7 +189,7 @@ func resourceAwsDbSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) er
 func resourceAwsDbSubnetGroupDelete(d *schema.ResourceData, meta interface{}) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsDbSubnetGroupDeleteRefreshFunc(d, meta),
         Timeout: 3 * time.Minute,
         MinTimeout: 1 * time.Second,

@@ -321,7 +321,7 @@ func resourceAwsDirectoryServiceDirectoryCreate(d *schema.ResourceData, meta int
     log.Printf("[DEBUG] Waiting for DS (%q) to become available", d.Id())
     stateConf := &resource.StateChangeConf{
         Pending: []string{"Requested", "Creating", "Created"},
-        Target: "Active",
+        Target: []string{"Active"},
         Refresh: func() (interface{}, string, error) {
             resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{
                 DirectoryIds: []*string{aws.String(d.Id())},
@@ -449,7 +449,7 @@ func resourceAwsDirectoryServiceDirectoryDelete(d *schema.ResourceData, meta int
     log.Printf("[DEBUG] Waiting for DS (%q) to be deleted", d.Id())
     stateConf := &resource.StateChangeConf{
         Pending: []string{"Deleting"},
-        Target: "Deleted",
+        Target: []string{"Deleted"},
         Refresh: func() (interface{}, string, error) {
             resp, err := dsconn.DescribeDirectories(&directoryservice.DescribeDirectoriesInput{
                 DirectoryIds: []*string{aws.String(d.Id())},

@@ -117,7 +117,7 @@ func resourceAwsEbsVolumeCreate(d *schema.ResourceData, meta interface{}) error
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: volumeStateRefreshFunc(conn, *result.VolumeId),
         Timeout: 5 * time.Minute,
         Delay: 10 * time.Second,

@@ -313,7 +313,7 @@ func resourceAwsEcsServiceDelete(d *schema.ResourceData, meta interface{}) error
     // Wait until it's deleted
     wait := resource.StateChangeConf{
         Pending: []string{"DRAINING"},
-        Target: "INACTIVE",
+        Target: []string{"INACTIVE"},
         Timeout: 5 * time.Minute,
         MinTimeout: 1 * time.Second,
         Refresh: func() (interface{}, string, error) {

@@ -51,7 +51,7 @@ func resourceAwsEfsFileSystemCreate(d *schema.ResourceData, meta interface{}) er
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: func() (interface{}, string, error) {
             resp, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{
                 FileSystemId: aws.String(d.Id()),
@@ -127,7 +127,7 @@ func resourceAwsEfsFileSystemDelete(d *schema.ResourceData, meta interface{}) er
     })
     stateConf := &resource.StateChangeConf{
         Pending: []string{"available", "deleting"},
-        Target: "",
+        Target: []string{},
         Refresh: func() (interface{}, string, error) {
             resp, err := conn.DescribeFileSystems(&efs.DescribeFileSystemsInput{
                 FileSystemId: aws.String(d.Id()),

@@ -81,7 +81,7 @@ func resourceAwsEfsMountTargetCreate(d *schema.ResourceData, meta interface{}) e
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: func() (interface{}, string, error) {
             resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{
                 MountTargetId: aws.String(d.Id()),
@@ -179,7 +179,7 @@ func resourceAwsEfsMountTargetDelete(d *schema.ResourceData, meta interface{}) e
     stateConf := &resource.StateChangeConf{
         Pending: []string{"available", "deleting", "deleted"},
-        Target: "",
+        Target: []string{},
         Refresh: func() (interface{}, string, error) {
             resp, err := conn.DescribeMountTargets(&efs.DescribeMountTargetsInput{
                 MountTargetId: aws.String(d.Id()),

@@ -290,7 +290,7 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{
     pending := []string{"creating"}
     stateConf := &resource.StateChangeConf{
         Pending: pending,
-        Target: "available",
+        Target: []string{"available"},
         Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "available", pending),
         Timeout: 10 * time.Minute,
         Delay: 10 * time.Second,
@@ -466,7 +466,7 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{
     pending := []string{"modifying", "rebooting cache cluster nodes", "snapshotting"}
     stateConf := &resource.StateChangeConf{
         Pending: pending,
-        Target: "available",
+        Target: []string{"available"},
         Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "available", pending),
         Timeout: 5 * time.Minute,
         Delay: 5 * time.Second,
@@ -537,7 +537,7 @@ func resourceAwsElasticacheClusterDelete(d *schema.ResourceData, meta interface{
     log.Printf("[DEBUG] Waiting for deletion: %v", d.Id())
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "available", "deleting", "incompatible-parameters", "incompatible-network", "restore-failed"},
-        Target: "",
+        Target: []string{},
         Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "", []string{}),
         Timeout: 10 * time.Minute,
         Delay: 10 * time.Second,

@@ -169,7 +169,7 @@ func resourceAwsElasticacheParameterGroupUpdate(d *schema.ResourceData, meta int
 func resourceAwsElasticacheParameterGroupDelete(d *schema.ResourceData, meta interface{}) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsElasticacheParameterGroupDeleteRefreshFunc(d, meta),
         Timeout: 3 * time.Minute,
         MinTimeout: 1 * time.Second,

@@ -401,7 +401,7 @@ func resourceAwsInstanceCreate(d *schema.ResourceData, meta interface{}) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "running",
+        Target: []string{"running"},
         Refresh: InstanceStateRefreshFunc(conn, *instance.InstanceId),
         Timeout: 10 * time.Minute,
         Delay: 10 * time.Second,
@@ -1082,7 +1082,7 @@ func awsTerminateInstance(conn *ec2.EC2, id string) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"},
-        Target: "terminated",
+        Target: []string{"terminated"},
         Refresh: InstanceStateRefreshFunc(conn, id),
         Timeout: 10 * time.Minute,
         Delay: 10 * time.Second,

@@ -513,6 +513,41 @@ func TestAccAWSInstance_rootBlockDeviceMismatch(t *testing.T) {
     })
 }
 
+// This test reproduces the bug here:
+//   https://github.com/hashicorp/terraform/issues/1752
+//
+// I wish there were a way to exercise resources built with helper.Schema in a
+// unit context, in which case this test could be moved there, but for now this
+// will cover the bugfix.
+//
+// The following triggers "diffs didn't match during apply" without the fix
+// to set NewRemoved on the .# field when it changes to 0.
+func TestAccAWSInstance_forceNewAndTagsDrift(t *testing.T) {
+    var v ec2.Instance
+
+    resource.Test(t, resource.TestCase{
+        PreCheck:     func() { testAccPreCheck(t) },
+        Providers:    testAccProviders,
+        CheckDestroy: testAccCheckInstanceDestroy,
+        Steps: []resource.TestStep{
+            resource.TestStep{
+                Config: testAccInstanceConfigForceNewAndTagsDrift,
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckInstanceExists("aws_instance.foo", &v),
+                    driftTags(&v),
+                ),
+                ExpectNonEmptyPlan: true,
+            },
+            resource.TestStep{
+                Config: testAccInstanceConfigForceNewAndTagsDrift_Update,
+                Check: resource.ComposeTestCheckFunc(
+                    testAccCheckInstanceExists("aws_instance.foo", &v),
+                ),
+            },
+        },
+    })
+}
+
 func testAccCheckInstanceDestroy(s *terraform.State) error {
     return testAccCheckInstanceDestroyWithProvider(s, testAccProvider)
 }
@@ -622,6 +657,22 @@ func TestInstanceTenancySchema(t *testing.T) {
     }
 }
 
+func driftTags(instance *ec2.Instance) resource.TestCheckFunc {
+    return func(s *terraform.State) error {
+        conn := testAccProvider.Meta().(*AWSClient).ec2conn
+        _, err := conn.CreateTags(&ec2.CreateTagsInput{
+            Resources: []*string{instance.InstanceId},
+            Tags: []*ec2.Tag{
+                &ec2.Tag{
+                    Key:   aws.String("Drift"),
+                    Value: aws.String("Happens"),
+                },
+            },
+        })
+        return err
+    }
+}
+
 const testAccInstanceConfig_pre = `
 resource "aws_security_group" "tf_test_foo" {
     name = "tf_test_foo"
@@ -988,3 +1039,37 @@ resource "aws_instance" "foo" {
     }
 }
 `
+
+const testAccInstanceConfigForceNewAndTagsDrift = `
+resource "aws_vpc" "foo" {
+    cidr_block = "10.1.0.0/16"
+}
+
+resource "aws_subnet" "foo" {
+    cidr_block = "10.1.1.0/24"
+    vpc_id = "${aws_vpc.foo.id}"
+}
+
+resource "aws_instance" "foo" {
+    ami = "ami-22b9a343"
+    instance_type = "t2.nano"
+    subnet_id = "${aws_subnet.foo.id}"
+}
+`
+
+const testAccInstanceConfigForceNewAndTagsDrift_Update = `
+resource "aws_vpc" "foo" {
+    cidr_block = "10.1.0.0/16"
+}
+
+resource "aws_subnet" "foo" {
+    cidr_block = "10.1.1.0/24"
+    vpc_id = "${aws_vpc.foo.id}"
+}
+
+resource "aws_instance" "foo" {
+    ami = "ami-22b9a343"
+    instance_type = "t2.micro"
+    subnet_id = "${aws_subnet.foo.id}"
+}
+`

@@ -170,7 +170,7 @@ func resourceAwsInternetGatewayAttach(d *schema.ResourceData, meta interface{})
     log.Printf("[DEBUG] Waiting for internet gateway (%s) to attach", d.Id())
     stateConf := &resource.StateChangeConf{
         Pending: []string{"detached", "attaching"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: IGAttachStateRefreshFunc(conn, d.Id(), "available"),
         Timeout: 1 * time.Minute,
     }
@@ -205,7 +205,7 @@ func resourceAwsInternetGatewayDetach(d *schema.ResourceData, meta interface{})
     log.Printf("[DEBUG] Waiting for internet gateway (%s) to detach", d.Id())
     stateConf := &resource.StateChangeConf{
         Pending: []string{"detaching"},
-        Target: "detached",
+        Target: []string{"detached"},
         Refresh: detachIGStateRefreshFunc(conn, d.Id(), vpcID.(string)),
         Timeout: 5 * time.Minute,
         Delay: 10 * time.Second,

@@ -141,7 +141,7 @@ func resourceAwsKinesisFirehoseDeliveryStreamCreate(d *schema.ResourceData, meta
     stateConf := &resource.StateChangeConf{
         Pending: []string{"CREATING"},
-        Target: "ACTIVE",
+        Target: []string{"ACTIVE"},
         Refresh: firehoseStreamStateRefreshFunc(conn, sn),
         Timeout: 5 * time.Minute,
         Delay: 10 * time.Second,
@@ -256,7 +256,7 @@ func resourceAwsKinesisFirehoseDeliveryStreamDelete(d *schema.ResourceData, meta
     stateConf := &resource.StateChangeConf{
         Pending: []string{"DELETING"},
-        Target: "DESTROYED",
+        Target: []string{"DESTROYED"},
         Refresh: firehoseStreamStateRefreshFunc(conn, sn),
         Timeout: 5 * time.Minute,
         Delay: 10 * time.Second,

@@ -60,7 +60,7 @@ func resourceAwsKinesisStreamCreate(d *schema.ResourceData, meta interface{}) er
     stateConf := &resource.StateChangeConf{
         Pending: []string{"CREATING"},
-        Target: "ACTIVE",
+        Target: []string{"ACTIVE"},
         Refresh: streamStateRefreshFunc(conn, sn),
         Timeout: 5 * time.Minute,
         Delay: 10 * time.Second,
@@ -142,7 +142,7 @@ func resourceAwsKinesisStreamDelete(d *schema.ResourceData, meta interface{}) er
     stateConf := &resource.StateChangeConf{
         Pending: []string{"DELETING"},
-        Target: "DESTROYED",
+        Target: []string{"DESTROYED"},
         Refresh: streamStateRefreshFunc(conn, sn),
         Timeout: 5 * time.Minute,
         Delay: 10 * time.Second,

@@ -77,7 +77,7 @@ func resourceAwsNatGatewayCreate(d *schema.ResourceData, meta interface{}) error
     log.Printf("[DEBUG] Waiting for NAT Gateway (%s) to become available", d.Id())
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: NGStateRefreshFunc(conn, d.Id()),
         Timeout: 10 * time.Minute,
     }
@@ -137,7 +137,7 @@ func resourceAwsNatGatewayDelete(d *schema.ResourceData, meta interface{}) error
     stateConf := &resource.StateChangeConf{
         Pending: []string{"deleting"},
-        Target: "deleted",
+        Target: []string{"deleted"},
         Refresh: NGStateRefreshFunc(conn, d.Id()),
         Timeout: 30 * time.Minute,
         Delay: 10 * time.Second,

@@ -186,7 +186,7 @@ func resourceAwsNetworkInterfaceDetach(oa *schema.Set, meta interface{}, eniId s
     log.Printf("[DEBUG] Waiting for ENI (%s) to become dettached", eniId)
     stateConf := &resource.StateChangeConf{
         Pending: []string{"true"},
-        Target: "false",
+        Target: []string{"false"},
         Refresh: networkInterfaceAttachmentRefreshFunc(conn, eniId),
         Timeout: 10 * time.Minute,
     }

@@ -49,7 +49,7 @@ func resourceAwsPlacementGroupCreate(d *schema.ResourceData, meta interface{}) e
     wait := resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "available",
+        Target: []string{"available"},
         Timeout: 5 * time.Minute,
         MinTimeout: 1 * time.Second,
         Refresh: func() (interface{}, string, error) {
@@ -114,7 +114,7 @@ func resourceAwsPlacementGroupDelete(d *schema.ResourceData, meta interface{}) e
     wait := resource.StateChangeConf{
         Pending: []string{"deleting"},
-        Target: "deleted",
+        Target: []string{"deleted"},
         Timeout: 5 * time.Minute,
         MinTimeout: 1 * time.Second,
         Refresh: func() (interface{}, string, error) {

@@ -212,7 +212,7 @@ func resourceAwsRDSClusterCreate(d *schema.ResourceData, meta interface{}) error
     d.SetId(*resp.DBCluster.DBClusterIdentifier)
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "backing-up", "modifying"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta),
         Timeout: 5 * time.Minute,
         MinTimeout: 3 * time.Second,
@@ -352,7 +352,7 @@ func resourceAwsRDSClusterDelete(d *schema.ResourceData, meta interface{}) error
     stateConf := &resource.StateChangeConf{
         Pending: []string{"deleting", "backing-up", "modifying"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsRDSClusterStateRefreshFunc(d, meta),
         Timeout: 5 * time.Minute,
         MinTimeout: 3 * time.Second,

@@ -105,7 +105,7 @@ func resourceAwsRDSClusterInstanceCreate(d *schema.ResourceData, meta interface{
     // reuse db_instance refresh func
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "backing-up", "modifying"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta),
         Timeout: 40 * time.Minute,
         MinTimeout: 10 * time.Second,
@@ -205,7 +205,7 @@ func resourceAwsRDSClusterInstanceDelete(d *schema.ResourceData, meta interface{
     log.Println("[INFO] Waiting for RDS Cluster Instance to be destroyed")
     stateConf := &resource.StateChangeConf{
         Pending: []string{"modifying", "deleting"},
-        Target: "",
+        Target: []string{},
         Refresh: resourceAwsDbInstanceStateRefreshFunc(d, meta),
         Timeout: 40 * time.Minute,
         MinTimeout: 10 * time.Second,

@@ -261,7 +261,7 @@ func resourceAwsRedshiftClusterCreate(d *schema.ResourceData, meta interface{})
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "backing-up", "modifying"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta),
         Timeout: 5 * time.Minute,
         MinTimeout: 3 * time.Second,
@@ -402,7 +402,7 @@ func resourceAwsRedshiftClusterUpdate(d *schema.ResourceData, meta interface{})
     stateConf := &resource.StateChangeConf{
         Pending: []string{"creating", "deleting", "rebooting", "resizing", "renaming"},
-        Target: "available",
+        Target: []string{"available"},
         Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta),
         Timeout: 10 * time.Minute,
         MinTimeout: 5 * time.Second,
@@ -444,7 +444,7 @@ func resourceAwsRedshiftClusterDelete(d *schema.ResourceData, meta interface{})
     stateConf := &resource.StateChangeConf{
         Pending: []string{"available", "creating", "deleting", "rebooting", "resizing", "renaming"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsRedshiftClusterStateRefreshFunc(d, meta),
         Timeout: 40 * time.Minute,
         MinTimeout: 5 * time.Second,

@@ -167,7 +167,7 @@ func resourceAwsRedshiftParameterGroupUpdate(d *schema.ResourceData, meta interf
 func resourceAwsRedshiftParameterGroupDelete(d *schema.ResourceData, meta interface{}) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsRedshiftParameterGroupDeleteRefreshFunc(d, meta),
         Timeout: 3 * time.Minute,
         MinTimeout: 1 * time.Second,

@@ -107,7 +107,7 @@ func resourceAwsRedshiftSecurityGroupCreate(d *schema.ResourceData, meta interfa
     log.Println("[INFO] Waiting for Redshift Security Group Ingress Authorizations to be authorized")
     stateConf := &resource.StateChangeConf{
         Pending: []string{"authorizing"},
-        Target: "authorized",
+        Target: []string{"authorized"},
         Refresh: resourceAwsRedshiftSecurityGroupStateRefreshFunc(d, meta),
         Timeout: 10 * time.Minute,
     }

@@ -127,7 +127,7 @@ func resourceAwsRedshiftSubnetGroupUpdate(d *schema.ResourceData, meta interface
 func resourceAwsRedshiftSubnetGroupDelete(d *schema.ResourceData, meta interface{}) error {
     stateConf := &resource.StateChangeConf{
         Pending: []string{"pending"},
-        Target: "destroyed",
+        Target: []string{"destroyed"},
         Refresh: resourceAwsRedshiftSubnetGroupDeleteRefreshFunc(d, meta),
         Timeout: 3 * time.Minute,
         MinTimeout: 1 * time.Second,

@ -180,7 +180,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er
wait := resource.StateChangeConf{ wait := resource.StateChangeConf{
Pending: []string{"rejected"}, Pending: []string{"rejected"},
Target: "accepted", Target: []string{"accepted"},
Timeout: 5 * time.Minute, Timeout: 5 * time.Minute,
MinTimeout: 1 * time.Second, MinTimeout: 1 * time.Second,
Refresh: func() (interface{}, string, error) { Refresh: func() (interface{}, string, error) {
@@ -223,7 +223,7 @@ func resourceAwsRoute53RecordCreate(d *schema.ResourceData, meta interface{}) er
 wait = resource.StateChangeConf{
 	Delay:      30 * time.Second,
 	Pending:    []string{"PENDING"},
-	Target:     "INSYNC",
+	Target:     []string{"INSYNC"},
 	Timeout:    30 * time.Minute,
 	MinTimeout: 5 * time.Second,
 	Refresh: func() (result interface{}, state string, err error) {
@@ -357,7 +357,7 @@ func resourceAwsRoute53RecordDelete(d *schema.ResourceData, meta interface{}) er
 wait := resource.StateChangeConf{
 	Pending:    []string{"rejected"},
-	Target:     "accepted",
+	Target:     []string{"accepted"},
 	Timeout:    5 * time.Minute,
 	MinTimeout: 1 * time.Second,
 	Refresh: func() (interface{}, string, error) {


@@ -109,7 +109,7 @@ func resourceAwsRoute53ZoneCreate(d *schema.ResourceData, meta interface{}) erro
 wait := resource.StateChangeConf{
 	Delay:      30 * time.Second,
 	Pending:    []string{"PENDING"},
-	Target:     "INSYNC",
+	Target:     []string{"INSYNC"},
 	Timeout:    10 * time.Minute,
 	MinTimeout: 2 * time.Second,
 	Refresh: func() (result interface{}, state string, err error) {


@@ -71,7 +71,7 @@ func resourceAwsRoute53ZoneAssociationCreate(d *schema.ResourceData, meta interf
 wait := resource.StateChangeConf{
 	Delay:      30 * time.Second,
 	Pending:    []string{"PENDING"},
-	Target:     "INSYNC",
+	Target:     []string{"INSYNC"},
 	Timeout:    10 * time.Minute,
 	MinTimeout: 2 * time.Second,
 	Refresh: func() (result interface{}, state string, err error) {


@@ -107,7 +107,7 @@ func resourceAwsRouteTableCreate(d *schema.ResourceData, meta interface{}) error
 	d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"pending"},
-	Target:  "ready",
+	Target:  []string{"ready"},
 	Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()),
 	Timeout: 1 * time.Minute,
 }
@@ -372,7 +372,7 @@ func resourceAwsRouteTableDelete(d *schema.ResourceData, meta interface{}) error
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"ready"},
-	Target:  "",
+	Target:  []string{},
 	Refresh: resourceAwsRouteTableStateRefreshFunc(conn, d.Id()),
 	Timeout: 1 * time.Minute,
 }


@@ -218,7 +218,7 @@ func resourceAwsSecurityGroupCreate(d *schema.ResourceData, meta interface{}) er
 	d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{""},
-	Target:  "exists",
+	Target:  []string{"exists"},
 	Refresh: SGStateRefreshFunc(conn, d.Id()),
 	Timeout: 1 * time.Minute,
 }


@@ -119,7 +119,7 @@ func resourceAwsSnsTopicUpdate(d *schema.ResourceData, meta interface{}) error {
 log.Printf("[DEBUG] Updating SNS Topic (%s) attributes request: %s", d.Id(), req)
 stateConf := &resource.StateChangeConf{
 	Pending:    []string{"retrying"},
-	Target:     "success",
+	Target:     []string{"success"},
 	Refresh:    resourceAwsSNSUpdateRefreshFunc(meta, req),
 	Timeout:    1 * time.Minute,
 	MinTimeout: 3 * time.Second,


@@ -132,7 +132,7 @@ func resourceAwsSpotInstanceRequestCreate(d *schema.ResourceData, meta interface
 spotStateConf := &resource.StateChangeConf{
 	// http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html
 	Pending: []string{"start", "pending-evaluation", "pending-fulfillment"},
-	Target:  "fulfilled",
+	Target:  []string{"fulfilled"},
 	Refresh: SpotInstanceStateRefreshFunc(conn, sir),
 	Timeout: 10 * time.Minute,
 	Delay:   10 * time.Second,


@@ -75,7 +75,7 @@ func resourceAwsSubnetCreate(d *schema.ResourceData, meta interface{}) error {
 log.Printf("[DEBUG] Waiting for subnet (%s) to become available", *subnet.SubnetId)
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"pending"},
-	Target:  "available",
+	Target:  []string{"available"},
 	Refresh: SubnetStateRefreshFunc(conn, *subnet.SubnetId),
 	Timeout: 10 * time.Minute,
 }
@@ -166,7 +166,7 @@ func resourceAwsSubnetDelete(d *schema.ResourceData, meta interface{}) error {
 wait := resource.StateChangeConf{
 	Pending:    []string{"pending"},
-	Target:     "destroyed",
+	Target:     []string{"destroyed"},
 	Timeout:    5 * time.Minute,
 	MinTimeout: 1 * time.Second,
 	Refresh: func() (interface{}, string, error) {


@@ -72,7 +72,7 @@ func resourceAwsVolumeAttachmentCreate(d *schema.ResourceData, meta interface{})
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"attaching"},
-	Target:  "attached",
+	Target:  []string{"attached"},
 	Refresh: volumeAttachmentStateRefreshFunc(conn, vID, iID),
 	Timeout: 5 * time.Minute,
 	Delay:   10 * time.Second,
@@ -163,7 +163,7 @@ func resourceAwsVolumeAttachmentDelete(d *schema.ResourceData, meta interface{})
 _, err := conn.DetachVolume(opts)
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"detaching"},
-	Target:  "detached",
+	Target:  []string{"detached"},
 	Refresh: volumeAttachmentStateRefreshFunc(conn, vID, iID),
 	Timeout: 5 * time.Minute,
 	Delay:   10 * time.Second,


@@ -118,7 +118,7 @@ func resourceAwsVpcCreate(d *schema.ResourceData, meta interface{}) error {
 	d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"pending"},
-	Target:  "available",
+	Target:  []string{"available"},
 	Refresh: VPCStateRefreshFunc(conn, d.Id()),
 	Timeout: 10 * time.Minute,
 }


@@ -121,7 +121,7 @@ func resourceAwsVpcDhcpOptionsCreate(d *schema.ResourceData, meta interface{}) e
 log.Printf("[DEBUG] Waiting for DHCP Options (%s) to become available", d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"pending"},
-	Target:  "",
+	Target:  []string{},
 	Refresh: DHCPOptionsStateRefreshFunc(conn, d.Id()),
 	Timeout: 1 * time.Minute,
 }


@@ -75,7 +75,7 @@ func resourceAwsVPCPeeringCreate(d *schema.ResourceData, meta interface{}) error
 	d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"pending"},
-	Target:  "pending-acceptance",
+	Target:  []string{"pending-acceptance"},
 	Refresh: resourceAwsVPCPeeringConnectionStateRefreshFunc(conn, d.Id()),
 	Timeout: 1 * time.Minute,
 }


@@ -171,7 +171,7 @@ func resourceAwsVpnConnectionCreate(d *schema.ResourceData, meta interface{}) er
 	// more frequently than every ten seconds.
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"pending"},
-	Target:  "available",
+	Target:  []string{"available"},
 	Refresh: vpnConnectionRefreshFunc(conn, *vpnConnection.VpnConnectionId),
 	Timeout: 30 * time.Minute,
 	Delay:   10 * time.Second,
@@ -303,7 +303,7 @@ func resourceAwsVpnConnectionDelete(d *schema.ResourceData, meta interface{}) er
 	// VPC stack can safely run.
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"deleting"},
-	Target:  "deleted",
+	Target:  []string{"deleted"},
 	Refresh: vpnConnectionRefreshFunc(conn, d.Id()),
 	Timeout: 30 * time.Minute,
 	Delay:   10 * time.Second,


@@ -195,7 +195,7 @@ func resourceAwsVpnGatewayAttach(d *schema.ResourceData, meta interface{}) error
 log.Printf("[DEBUG] Waiting for VPN gateway (%s) to attach", d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"detached", "attaching"},
-	Target:  "attached",
+	Target:  []string{"attached"},
 	Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "available"),
 	Timeout: 1 * time.Minute,
 }
@@ -256,7 +256,7 @@ func resourceAwsVpnGatewayDetach(d *schema.ResourceData, meta interface{}) error
 log.Printf("[DEBUG] Waiting for VPN gateway (%s) to detach", d.Id())
 stateConf := &resource.StateChangeConf{
 	Pending: []string{"attached", "detaching", "available"},
-	Target:  "detached",
+	Target:  []string{"detached"},
 	Refresh: vpnGatewayAttachStateRefreshFunc(conn, d.Id(), "detached"),
 	Timeout: 1 * time.Minute,
 }


@@ -4,9 +4,11 @@ import (
 	"fmt"
 	"log"
 	"net/http"
+	"time"

 	"github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest"
 	"github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/azure-sdk-for-go/arm/cdn"
 	"github.com/Azure/azure-sdk-for-go/arm/compute"
 	"github.com/Azure/azure-sdk-for-go/arm/network"
 	"github.com/Azure/azure-sdk-for-go/arm/resources/resources"
@@ -40,6 +42,9 @@ type ArmClient struct {
 	routeTablesClient network.RouteTablesClient
 	routesClient      network.RoutesClient

+	cdnProfilesClient  cdn.ProfilesClient
+	cdnEndpointsClient cdn.EndpointsClient
+
 	providers           resources.ProvidersClient
 	resourceGroupClient resources.GroupsClient
 	tagsClient          resources.TagsClient
@@ -54,7 +59,7 @@ type ArmClient struct {
 func withRequestLogging() autorest.SendDecorator {
 	return func(s autorest.Sender) autorest.Sender {
 		return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) {
-			log.Printf("[DEBUG] Sending Azure RM Request %s to %s\n", r.Method, r.URL)
+			log.Printf("[DEBUG] Sending Azure RM Request %q to %q\n", r.Method, r.URL)
 			resp, err := s.Do(r)
 			if resp != nil {
 				log.Printf("[DEBUG] Received Azure RM Request status code %s for %s\n", resp.Status, r.URL)
@@ -66,6 +71,22 @@ func withRequestLogging() autorest.SendDecorator {
 	}
 }

+func withPollWatcher() autorest.SendDecorator {
+	return func(s autorest.Sender) autorest.Sender {
+		return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) {
+			fmt.Printf("[DEBUG] Sending Azure RM Request %q to %q\n", r.Method, r.URL)
+			resp, err := s.Do(r)
+			fmt.Printf("[DEBUG] Received Azure RM Request status code %s for %s\n", resp.Status, r.URL)
+			if autorest.ResponseRequiresPolling(resp) {
+				fmt.Printf("[DEBUG] Azure RM request will poll %s after %d seconds\n",
+					autorest.GetPollingLocation(resp),
+					int(autorest.GetPollingDelay(resp, time.Duration(0))/time.Second))
+			}
+			return resp, err
+		})
+	}
+}
+
 func setUserAgent(client *autorest.Client) {
 	var version string
 	if terraform.VersionPrerelease != "" {
@@ -237,7 +258,7 @@ func (c *Config) getArmClient() (*ArmClient, error) {
 	ssc := storage.NewAccountsClient(c.SubscriptionID)
 	setUserAgent(&ssc.Client)
 	ssc.Authorizer = spt
-	ssc.Sender = autorest.CreateSender(withRequestLogging())
+	ssc.Sender = autorest.CreateSender(withRequestLogging(), withPollWatcher())
 	client.storageServiceClient = ssc

 	suc := storage.NewUsageOperationsClient(c.SubscriptionID)
@@ -246,5 +267,17 @@ func (c *Config) getArmClient() (*ArmClient, error) {
 	suc.Sender = autorest.CreateSender(withRequestLogging())
 	client.storageUsageClient = suc

+	cpc := cdn.NewProfilesClient(c.SubscriptionID)
+	setUserAgent(&cpc.Client)
+	cpc.Authorizer = spt
+	cpc.Sender = autorest.CreateSender(withRequestLogging())
+	client.cdnProfilesClient = cpc
+
+	cec := cdn.NewEndpointsClient(c.SubscriptionID)
+	setUserAgent(&cec.Client)
+	cec.Authorizer = spt
+	cec.Sender = autorest.CreateSender(withRequestLogging())
+	client.cdnEndpointsClient = cec
+
 	return &client, nil
 }


@@ -2,9 +2,11 @@ package azurerm

 import (
 	"fmt"
+	"log"
 	"net/http"
 	"strings"

+	"github.com/Azure/azure-sdk-for-go/Godeps/_workspace/src/github.com/Azure/go-autorest/autorest"
 	"github.com/hashicorp/terraform/helper/mutexkv"
 	"github.com/hashicorp/terraform/helper/schema"
 	"github.com/hashicorp/terraform/terraform"
@@ -51,6 +53,9 @@ func Provider() terraform.ResourceProvider {
 			"azurerm_network_interface": resourceArmNetworkInterface(),
 			"azurerm_route_table": resourceArmRouteTable(),
 			"azurerm_route": resourceArmRoute(),
+			"azurerm_cdn_profile": resourceArmCdnProfile(),
+			"azurerm_cdn_endpoint": resourceArmCdnEndpoint(),
+			"azurerm_storage_account": resourceArmStorageAccount(),
 		},

 		ConfigureFunc: providerConfigure,
 	}
@@ -95,7 +100,7 @@ func providerConfigure(d *schema.ResourceData) (interface{}, error) {
 func registerAzureResourceProvidersWithSubscription(config *Config, client *ArmClient) error {
 	providerClient := client.providers
-	providers := []string{"Microsoft.Network", "Microsoft.Compute"}
+	providers := []string{"Microsoft.Network", "Microsoft.Compute", "Microsoft.Cdn", "Microsoft.Storage"}

 	for _, v := range providers {
 		res, err := providerClient.Register(v)
@@ -111,10 +116,47 @@ func registerAzureResourceProvidersWithSubscription(config *Config, client *ArmC
 	return nil
 }

+// azureRMNormalizeLocation is a function which normalises human-readable region/location
+// names (e.g. "West US") to the values used and returned by the Azure API (e.g. "westus").
+// In state we track the API internal version as it is easier to go from the human form
+// to the canonical form than the other way around.
 func azureRMNormalizeLocation(location interface{}) string {
 	input := location.(string)
 	return strings.Replace(strings.ToLower(input), " ", "", -1)
 }

+// pollIndefinitelyAsNeeded is a terrible hack which is necessary because the Azure
+// Storage API (and perhaps others) can have response times way beyond the default
+// retry timeouts, with no apparent upper bound. This effectively causes the client
+// to continue polling when it reaches the configured timeout. My investigations
+// suggest that this is necessary when deleting and recreating a storage account with
+// the same name in a short (though undetermined) time period.
+//
+// It is possible that this will give Terraform the appearance of being slow in
+// future: I have attempted to mitigate this by logging whenever this happens. We
+// may want to revisit this with configurable timeouts in the future, as clearly
+// unbounded wait loops are not ideal. It does seem preferable to the current situation,
+// where our polling loop will time out _with an operation in progress_, but no ID
+// for the resource - so the state will not know about it, and conflicts will occur
+// on the next run.
+func pollIndefinitelyAsNeeded(client autorest.Client, response *http.Response, acceptableCodes ...int) (*http.Response, error) {
+	var resp *http.Response
+	var err error
+	for {
+		resp, err = client.PollAsNeeded(response, acceptableCodes...)
+		if err != nil {
+			if resp.StatusCode != http.StatusAccepted {
+				log.Printf("[DEBUG] Starting new polling loop for %q", response.Request.URL.Path)
+				continue
+			}
+			return resp, err
+		}
+		return resp, nil
+	}
+}
+
 // armMutexKV is the instance of MutexKV for ARM resources
 var armMutexKV = mutexkv.NewMutexKV()


@@ -119,7 +119,7 @@ func testCheckAzureRMAvailabilitySetExists(name string) resource.TestCheckFunc {
 }

 func testCheckAzureRMAvailabilitySetDestroy(s *terraform.State) error {
-	conn := testAccProvider.Meta().(*ArmClient).vnetClient
+	conn := testAccProvider.Meta().(*ArmClient).availSetClient

 	for _, rs := range s.RootModule().Resources {
 		if rs.Type != "azurerm_availability_set" {
@@ -129,7 +129,7 @@ func testCheckAzureRMAvailabilitySetDestroy(s *terraform.State) error {
 		name := rs.Primary.Attributes["name"]
 		resourceGroup := rs.Primary.Attributes["resource_group_name"]

-		resp, err := conn.Get(resourceGroup, name, "")
+		resp, err := conn.Get(resourceGroup, name)

 		if err != nil {
 			return nil


@@ -0,0 +1,451 @@
package azurerm
import (
"bytes"
"fmt"
"log"
"net/http"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/arm/cdn"
"github.com/hashicorp/terraform/helper/hashcode"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceArmCdnEndpoint() *schema.Resource {
return &schema.Resource{
Create: resourceArmCdnEndpointCreate,
Read: resourceArmCdnEndpointRead,
Update: resourceArmCdnEndpointUpdate,
Delete: resourceArmCdnEndpointDelete,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"location": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
StateFunc: azureRMNormalizeLocation,
},
"resource_group_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"profile_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"origin_host_header": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"is_http_allowed": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"is_https_allowed": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Default: true,
},
"origin": &schema.Schema{
Type: schema.TypeSet,
Required: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"host_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
},
"http_port": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"https_port": &schema.Schema{
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
},
},
Set: resourceArmCdnEndpointOriginHash,
},
"origin_path": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Computed: true,
},
"querystring_caching_behaviour": &schema.Schema{
Type: schema.TypeString,
Optional: true,
Default: "IgnoreQueryString",
ValidateFunc: validateCdnEndpointQuerystringCachingBehaviour,
},
"content_types_to_compress": &schema.Schema{
Type: schema.TypeSet,
Optional: true,
Computed: true,
Elem: &schema.Schema{Type: schema.TypeString},
Set: schema.HashString,
},
"is_compression_enabled": &schema.Schema{
Type: schema.TypeBool,
Optional: true,
Default: false,
},
"host_name": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"tags": tagsSchema(),
},
}
}
func resourceArmCdnEndpointCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient)
cdnEndpointsClient := client.cdnEndpointsClient
log.Printf("[INFO] preparing arguments for Azure ARM CDN EndPoint creation.")
name := d.Get("name").(string)
location := d.Get("location").(string)
resGroup := d.Get("resource_group_name").(string)
profileName := d.Get("profile_name").(string)
http_allowed := d.Get("is_http_allowed").(bool)
https_allowed := d.Get("is_https_allowed").(bool)
compression_enabled := d.Get("is_compression_enabled").(bool)
caching_behaviour := d.Get("querystring_caching_behaviour").(string)
tags := d.Get("tags").(map[string]interface{})
properties := cdn.EndpointPropertiesCreateUpdateParameters{
IsHTTPAllowed: &http_allowed,
IsHTTPSAllowed: &https_allowed,
IsCompressionEnabled: &compression_enabled,
QueryStringCachingBehavior: cdn.QueryStringCachingBehavior(caching_behaviour),
}
origins, originsErr := expandAzureRmCdnEndpointOrigins(d)
if originsErr != nil {
return fmt.Errorf("Error Building list of CDN Endpoint Origins: %s", originsErr)
}
if len(origins) > 0 {
properties.Origins = &origins
}
if v, ok := d.GetOk("origin_host_header"); ok {
host_header := v.(string)
properties.OriginHostHeader = &host_header
}
if v, ok := d.GetOk("origin_path"); ok {
origin_path := v.(string)
properties.OriginPath = &origin_path
}
if v, ok := d.GetOk("content_types_to_compress"); ok {
var content_types []string
ctypes := v.(*schema.Set).List()
for _, ct := range ctypes {
str := ct.(string)
content_types = append(content_types, str)
}
properties.ContentTypesToCompress = &content_types
}
cdnEndpoint := cdn.EndpointCreateParameters{
Location: &location,
Properties: &properties,
Tags: expandTags(tags),
}
resp, err := cdnEndpointsClient.Create(name, cdnEndpoint, profileName, resGroup)
if err != nil {
return err
}
d.SetId(*resp.ID)
log.Printf("[DEBUG] Waiting for CDN Endpoint (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating", "Creating"},
Target: []string{"Succeeded"},
Refresh: cdnEndpointStateRefreshFunc(client, resGroup, profileName, name),
Timeout: 10 * time.Minute,
}
if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf("Error waiting for CDN Endpoint (%s) to become available: %s", name, err)
}
return resourceArmCdnEndpointRead(d, meta)
}
func resourceArmCdnEndpointRead(d *schema.ResourceData, meta interface{}) error {
cdnEndpointsClient := meta.(*ArmClient).cdnEndpointsClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
resGroup := id.ResourceGroup
name := id.Path["endpoints"]
profileName := id.Path["profiles"]
if profileName == "" {
profileName = id.Path["Profiles"]
}
log.Printf("[INFO] Trying to find the AzureRM CDN Endpoint %s (Profile: %s, RG: %s)", name, profileName, resGroup)
resp, err := cdnEndpointsClient.Get(name, profileName, resGroup)
if resp.StatusCode == http.StatusNotFound {
d.SetId("")
return nil
}
if err != nil {
return fmt.Errorf("Error making Read request on Azure CDN Endpoint %s: %s", name, err)
}
d.Set("name", resp.Name)
d.Set("host_name", resp.Properties.HostName)
d.Set("is_compression_enabled", resp.Properties.IsCompressionEnabled)
d.Set("is_http_allowed", resp.Properties.IsHTTPAllowed)
d.Set("is_https_allowed", resp.Properties.IsHTTPSAllowed)
d.Set("querystring_caching_behaviour", resp.Properties.QueryStringCachingBehavior)
if resp.Properties.OriginHostHeader != nil && *resp.Properties.OriginHostHeader != "" {
d.Set("origin_host_header", resp.Properties.OriginHostHeader)
}
if resp.Properties.OriginPath != nil && *resp.Properties.OriginPath != "" {
d.Set("origin_path", resp.Properties.OriginPath)
}
if resp.Properties.ContentTypesToCompress != nil && len(*resp.Properties.ContentTypesToCompress) > 0 {
d.Set("content_types_to_compress", flattenAzureRMCdnEndpointContentTypes(resp.Properties.ContentTypesToCompress))
}
d.Set("origin", flattenAzureRMCdnEndpointOrigin(resp.Properties.Origins))
flattenAndSetTags(d, resp.Tags)
return nil
}
func resourceArmCdnEndpointUpdate(d *schema.ResourceData, meta interface{}) error {
cdnEndpointsClient := meta.(*ArmClient).cdnEndpointsClient
if !d.HasChange("tags") {
return nil
}
name := d.Get("name").(string)
resGroup := d.Get("resource_group_name").(string)
profileName := d.Get("profile_name").(string)
http_allowed := d.Get("is_http_allowed").(bool)
https_allowed := d.Get("is_https_allowed").(bool)
compression_enabled := d.Get("is_compression_enabled").(bool)
caching_behaviour := d.Get("querystring_caching_behaviour").(string)
newTags := d.Get("tags").(map[string]interface{})
properties := cdn.EndpointPropertiesCreateUpdateParameters{
IsHTTPAllowed: &http_allowed,
IsHTTPSAllowed: &https_allowed,
IsCompressionEnabled: &compression_enabled,
QueryStringCachingBehavior: cdn.QueryStringCachingBehavior(caching_behaviour),
}
if d.HasChange("origin") {
origins, originsErr := expandAzureRmCdnEndpointOrigins(d)
if originsErr != nil {
return fmt.Errorf("Error Building list of CDN Endpoint Origins: %s", originsErr)
}
if len(origins) > 0 {
properties.Origins = &origins
}
}
if d.HasChange("origin_host_header") {
host_header := d.Get("origin_host_header").(string)
properties.OriginHostHeader = &host_header
}
if d.HasChange("origin_path") {
origin_path := d.Get("origin_path").(string)
properties.OriginPath = &origin_path
}
if d.HasChange("content_types_to_compress") {
var content_types []string
ctypes := d.Get("content_types_to_compress").(*schema.Set).List()
for _, ct := range ctypes {
str := ct.(string)
content_types = append(content_types, str)
}
properties.ContentTypesToCompress = &content_types
}
updateProps := cdn.EndpointUpdateParameters{
Tags: expandTags(newTags),
Properties: &properties,
}
_, err := cdnEndpointsClient.Update(name, updateProps, profileName, resGroup)
if err != nil {
return fmt.Errorf("Error issuing Azure ARM update request to update CDN Endpoint %q: %s", name, err)
}
return resourceArmCdnEndpointRead(d, meta)
}
func resourceArmCdnEndpointDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).cdnEndpointsClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
resGroup := id.ResourceGroup
profileName := id.Path["profiles"]
if profileName == "" {
profileName = id.Path["Profiles"]
}
name := id.Path["endpoints"]
accResp, err := client.DeleteIfExists(name, profileName, resGroup)
if err != nil {
if accResp.StatusCode == http.StatusNotFound {
return nil
}
return fmt.Errorf("Error issuing AzureRM delete request for CDN Endpoint %q: %s", name, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response, http.StatusNotFound)
if err != nil {
return fmt.Errorf("Error polling for AzureRM delete request for CDN Endpoint %q: %s", name, err)
}
return err
}
func cdnEndpointStateRefreshFunc(client *ArmClient, resourceGroupName string, profileName string, name string) resource.StateRefreshFunc {
return func() (interface{}, string, error) {
res, err := client.cdnEndpointsClient.Get(name, profileName, resourceGroupName)
if err != nil {
return nil, "", fmt.Errorf("Error issuing read request in cdnEndpointStateRefreshFunc to Azure ARM for CDN Endpoint '%s' (RG: '%s'): %s", name, resourceGroupName, err)
}
return res, string(res.Properties.ProvisioningState), nil
}
}
func validateCdnEndpointQuerystringCachingBehaviour(v interface{}, k string) (ws []string, errors []error) {
value := strings.ToLower(v.(string))
cachingTypes := map[string]bool{
"ignorequerystring": true,
"bypasscaching": true,
"usequerystring": true,
}
if !cachingTypes[value] {
errors = append(errors, fmt.Errorf("CDN Endpoint querystringCachingBehaviours can only be IgnoreQueryString, BypassCaching or UseQueryString"))
}
return
}
func resourceArmCdnEndpointOriginHash(v interface{}) int {
var buf bytes.Buffer
m := v.(map[string]interface{})
buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
buf.WriteString(fmt.Sprintf("%s-", m["host_name"].(string)))
return hashcode.String(buf.String())
}
func expandAzureRmCdnEndpointOrigins(d *schema.ResourceData) ([]cdn.DeepCreatedOrigin, error) {
configs := d.Get("origin").(*schema.Set).List()
origins := make([]cdn.DeepCreatedOrigin, 0, len(configs))
for _, configRaw := range configs {
data := configRaw.(map[string]interface{})
host_name := data["host_name"].(string)
properties := cdn.DeepCreatedOriginProperties{
HostName: &host_name,
}
if v, ok := data["https_port"]; ok {
https_port := v.(int)
properties.HTTPSPort = &https_port
}
if v, ok := data["http_port"]; ok {
http_port := v.(int)
properties.HTTPPort = &http_port
}
name := data["name"].(string)
origin := cdn.DeepCreatedOrigin{
Name: &name,
Properties: &properties,
}
origins = append(origins, origin)
}
return origins, nil
}
func flattenAzureRMCdnEndpointOrigin(list *[]cdn.DeepCreatedOrigin) []map[string]interface{} {
result := make([]map[string]interface{}, 0, len(*list))
for _, i := range *list {
l := map[string]interface{}{
"name": *i.Name,
"host_name": *i.Properties.HostName,
}
if i.Properties.HTTPPort != nil {
l["http_port"] = *i.Properties.HTTPPort
}
if i.Properties.HTTPSPort != nil {
l["https_port"] = *i.Properties.HTTPSPort
}
result = append(result, l)
}
return result
}
func flattenAzureRMCdnEndpointContentTypes(list *[]string) []interface{} {
vs := make([]interface{}, 0, len(*list))
for _, v := range *list {
vs = append(vs, v)
}
return vs
}
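The querystring-caching validator above boils down to a case-insensitive set-membership check. A standalone sketch of that pattern follows; the function name and error message are illustrative, not part of the provider's API:

```go
package main

import (
	"fmt"
	"strings"
)

// validateCachingBehaviour mirrors the pattern used above: lower-case the
// input, then test membership in a map of allowed values.
func validateCachingBehaviour(v string) []error {
	var errs []error
	allowed := map[string]bool{
		"ignorequerystring": true,
		"bypasscaching":     true,
		"usequerystring":    true,
	}
	if !allowed[strings.ToLower(v)] {
		errs = append(errs, fmt.Errorf("value must be IgnoreQueryString, BypassCaching or UseQueryString, got %q", v))
	}
	return errs
}

func main() {
	fmt.Println(len(validateCachingBehaviour("BypassCaching"))) // 0
	fmt.Println(len(validateCachingBehaviour("Random")))        // 1
}
```

Because membership is checked after `strings.ToLower`, any casing of the three allowed values passes, which matches the mixed-case strings used in the test configurations.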

View File

@ -0,0 +1,201 @@
package azurerm
import (
"fmt"
"net/http"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestAccAzureRMCdnEndpoint_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testCheckAzureRMCdnEndpointDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAzureRMCdnEndpoint_basic,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMCdnEndpointExists("azurerm_cdn_endpoint.test"),
),
},
},
})
}
func TestAccAzureRMCdnEndpoints_withTags(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testCheckAzureRMCdnEndpointDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAzureRMCdnEndpoint_withTags,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMCdnEndpointExists("azurerm_cdn_endpoint.test"),
resource.TestCheckResourceAttr(
"azurerm_cdn_endpoint.test", "tags.#", "2"),
resource.TestCheckResourceAttr(
"azurerm_cdn_endpoint.test", "tags.environment", "Production"),
resource.TestCheckResourceAttr(
"azurerm_cdn_endpoint.test", "tags.cost_center", "MSFT"),
),
},
resource.TestStep{
Config: testAccAzureRMCdnEndpoint_withTagsUpdate,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMCdnEndpointExists("azurerm_cdn_endpoint.test"),
resource.TestCheckResourceAttr(
"azurerm_cdn_endpoint.test", "tags.#", "1"),
resource.TestCheckResourceAttr(
"azurerm_cdn_endpoint.test", "tags.environment", "staging"),
),
},
},
})
}
func testCheckAzureRMCdnEndpointExists(name string) resource.TestCheckFunc {
return func(s *terraform.State) error {
// Ensure we have enough information in state to look up in API
rs, ok := s.RootModule().Resources[name]
if !ok {
return fmt.Errorf("Not found: %s", name)
}
name := rs.Primary.Attributes["name"]
profileName := rs.Primary.Attributes["profile_name"]
resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"]
if !hasResourceGroup {
return fmt.Errorf("Bad: no resource group found in state for cdn endpoint: %s", name)
}
conn := testAccProvider.Meta().(*ArmClient).cdnEndpointsClient
resp, err := conn.Get(name, profileName, resourceGroup)
if err != nil {
return fmt.Errorf("Bad: Get on cdnEndpointsClient: %s", err)
}
if resp.StatusCode == http.StatusNotFound {
return fmt.Errorf("Bad: CDN Endpoint %q (resource group: %q) does not exist", name, resourceGroup)
}
return nil
}
}
func testCheckAzureRMCdnEndpointDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*ArmClient).cdnEndpointsClient
for _, rs := range s.RootModule().Resources {
if rs.Type != "azurerm_cdn_endpoint" {
continue
}
name := rs.Primary.Attributes["name"]
resourceGroup := rs.Primary.Attributes["resource_group_name"]
profileName := rs.Primary.Attributes["profile_name"]
resp, err := conn.Get(name, profileName, resourceGroup)
if err != nil {
return nil
}
if resp.StatusCode != http.StatusNotFound {
return fmt.Errorf("CDN Endpoint still exists:\n%#v", resp.Properties)
}
}
return nil
}
var testAccAzureRMCdnEndpoint_basic = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup1"
location = "West US"
}
resource "azurerm_cdn_profile" "test" {
name = "acceptanceTestCdnProfile1"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
}
resource "azurerm_cdn_endpoint" "test" {
name = "acceptanceTestCdnEndpoint1"
profile_name = "${azurerm_cdn_profile.test.name}"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
origin {
name = "acceptanceTestCdnOrigin1"
host_name = "www.example.com"
}
}
`
var testAccAzureRMCdnEndpoint_withTags = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup2"
location = "West US"
}
resource "azurerm_cdn_profile" "test" {
name = "acceptanceTestCdnProfile2"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
}
resource "azurerm_cdn_endpoint" "test" {
name = "acceptanceTestCdnEndpoint2"
profile_name = "${azurerm_cdn_profile.test.name}"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
origin {
name = "acceptanceTestCdnOrigin2"
host_name = "www.example.com"
}
tags {
environment = "Production"
cost_center = "MSFT"
}
}
`
var testAccAzureRMCdnEndpoint_withTagsUpdate = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup2"
location = "West US"
}
resource "azurerm_cdn_profile" "test" {
name = "acceptanceTestCdnProfile2"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
}
resource "azurerm_cdn_endpoint" "test" {
name = "acceptanceTestCdnEndpoint2"
profile_name = "${azurerm_cdn_profile.test.name}"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
origin {
name = "acceptanceTestCdnOrigin2"
host_name = "www.example.com"
}
tags {
environment = "staging"
}
}
`

View File

@ -0,0 +1,186 @@
package azurerm
import (
"fmt"
"log"
"net/http"
"strings"
"time"
"github.com/Azure/azure-sdk-for-go/arm/cdn"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceArmCdnProfile() *schema.Resource {
return &schema.Resource{
Create: resourceArmCdnProfileCreate,
Read: resourceArmCdnProfileRead,
Update: resourceArmCdnProfileUpdate,
Delete: resourceArmCdnProfileDelete,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"location": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
StateFunc: azureRMNormalizeLocation,
},
"resource_group_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"sku": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateCdnProfileSku,
},
"tags": tagsSchema(),
},
}
}
func resourceArmCdnProfileCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient)
cdnProfilesClient := client.cdnProfilesClient
log.Printf("[INFO] preparing arguments for Azure ARM CDN Profile creation.")
name := d.Get("name").(string)
location := d.Get("location").(string)
resGroup := d.Get("resource_group_name").(string)
sku := d.Get("sku").(string)
tags := d.Get("tags").(map[string]interface{})
properties := cdn.ProfilePropertiesCreateParameters{
Sku: &cdn.Sku{
Name: cdn.SkuName(sku),
},
}
cdnProfile := cdn.ProfileCreateParameters{
Location: &location,
Properties: &properties,
Tags: expandTags(tags),
}
resp, err := cdnProfilesClient.Create(name, cdnProfile, resGroup)
if err != nil {
return err
}
d.SetId(*resp.ID)
log.Printf("[DEBUG] Waiting for CDN Profile (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating", "Creating"},
Target: []string{"Succeeded"},
Refresh: cdnProfileStateRefreshFunc(client, resGroup, name),
Timeout: 10 * time.Minute,
}
if _, err := stateConf.WaitForState(); err != nil {
return fmt.Errorf("Error waiting for CDN Profile (%s) to become available: %s", name, err)
}
return resourceArmCdnProfileRead(d, meta)
}
func resourceArmCdnProfileRead(d *schema.ResourceData, meta interface{}) error {
cdnProfilesClient := meta.(*ArmClient).cdnProfilesClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
resGroup := id.ResourceGroup
name := id.Path["Profiles"]
resp, err := cdnProfilesClient.Get(name, resGroup)
if resp.StatusCode == http.StatusNotFound {
d.SetId("")
return nil
}
if err != nil {
return fmt.Errorf("Error making Read request on Azure CDN Profile %s: %s", name, err)
}
if resp.Properties != nil && resp.Properties.Sku != nil {
d.Set("sku", string(resp.Properties.Sku.Name))
}
flattenAndSetTags(d, resp.Tags)
return nil
}
func resourceArmCdnProfileUpdate(d *schema.ResourceData, meta interface{}) error {
cdnProfilesClient := meta.(*ArmClient).cdnProfilesClient
if !d.HasChange("tags") {
return nil
}
name := d.Get("name").(string)
resGroup := d.Get("resource_group_name").(string)
newTags := d.Get("tags").(map[string]interface{})
props := cdn.ProfileUpdateParameters{
Tags: expandTags(newTags),
}
_, err := cdnProfilesClient.Update(name, props, resGroup)
if err != nil {
return fmt.Errorf("Error issuing Azure ARM update request to update CDN Profile %q: %s", name, err)
}
return resourceArmCdnProfileRead(d, meta)
}
func resourceArmCdnProfileDelete(d *schema.ResourceData, meta interface{}) error {
cdnProfilesClient := meta.(*ArmClient).cdnProfilesClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
resGroup := id.ResourceGroup
name := id.Path["Profiles"]
_, err = cdnProfilesClient.DeleteIfExists(name, resGroup)
return err
}
func cdnProfileStateRefreshFunc(client *ArmClient, resourceGroupName string, cdnProfileName string) resource.StateRefreshFunc {
return func() (interface{}, string, error) {
res, err := client.cdnProfilesClient.Get(cdnProfileName, resourceGroupName)
if err != nil {
return nil, "", fmt.Errorf("Error issuing read request in cdnProfileStateRefreshFunc to Azure ARM for CDN Profile '%s' (RG: '%s'): %s", cdnProfileName, resourceGroupName, err)
}
return res, string(res.Properties.ProvisioningState), nil
}
}
func validateCdnProfileSku(v interface{}, k string) (ws []string, errors []error) {
value := strings.ToLower(v.(string))
skus := map[string]bool{
"standard": true,
"premium": true,
}
if !skus[value] {
errors = append(errors, fmt.Errorf("CDN Profile SKU can only be Standard or Premium"))
}
return
}

View File

@ -0,0 +1,199 @@
package azurerm
import (
"fmt"
"net/http"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestResourceAzureRMCdnProfileSKU_validation(t *testing.T) {
cases := []struct {
Value string
ErrCount int
}{
{
Value: "Random",
ErrCount: 1,
},
{
Value: "Standard",
ErrCount: 0,
},
{
Value: "Premium",
ErrCount: 0,
},
{
Value: "STANDARD",
ErrCount: 0,
},
{
Value: "PREMIUM",
ErrCount: 0,
},
}
for _, tc := range cases {
_, errors := validateCdnProfileSku(tc.Value, "azurerm_cdn_profile")
if len(errors) != tc.ErrCount {
t.Fatalf("Expected %d validation error(s) for Azure RM CDN Profile SKU %q, got %d", tc.ErrCount, tc.Value, len(errors))
}
}
}
func TestAccAzureRMCdnProfile_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testCheckAzureRMCdnProfileDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAzureRMCdnProfile_basic,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMCdnProfileExists("azurerm_cdn_profile.test"),
),
},
},
})
}
func TestAccAzureRMCdnProfile_withTags(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testCheckAzureRMCdnProfileDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAzureRMCdnProfile_withTags,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMCdnProfileExists("azurerm_cdn_profile.test"),
resource.TestCheckResourceAttr(
"azurerm_cdn_profile.test", "tags.#", "2"),
resource.TestCheckResourceAttr(
"azurerm_cdn_profile.test", "tags.environment", "Production"),
resource.TestCheckResourceAttr(
"azurerm_cdn_profile.test", "tags.cost_center", "MSFT"),
),
},
resource.TestStep{
Config: testAccAzureRMCdnProfile_withTagsUpdate,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMCdnProfileExists("azurerm_cdn_profile.test"),
resource.TestCheckResourceAttr(
"azurerm_cdn_profile.test", "tags.#", "1"),
resource.TestCheckResourceAttr(
"azurerm_cdn_profile.test", "tags.environment", "staging"),
),
},
},
})
}
func testCheckAzureRMCdnProfileExists(name string) resource.TestCheckFunc {
return func(s *terraform.State) error {
// Ensure we have enough information in state to look up in API
rs, ok := s.RootModule().Resources[name]
if !ok {
return fmt.Errorf("Not found: %s", name)
}
name := rs.Primary.Attributes["name"]
resourceGroup, hasResourceGroup := rs.Primary.Attributes["resource_group_name"]
if !hasResourceGroup {
return fmt.Errorf("Bad: no resource group found in state for cdn profile: %s", name)
}
conn := testAccProvider.Meta().(*ArmClient).cdnProfilesClient
resp, err := conn.Get(name, resourceGroup)
if err != nil {
return fmt.Errorf("Bad: Get on cdnProfilesClient: %s", err)
}
if resp.StatusCode == http.StatusNotFound {
return fmt.Errorf("Bad: CDN Profile %q (resource group: %q) does not exist", name, resourceGroup)
}
return nil
}
}
func testCheckAzureRMCdnProfileDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*ArmClient).cdnProfilesClient
for _, rs := range s.RootModule().Resources {
if rs.Type != "azurerm_cdn_profile" {
continue
}
name := rs.Primary.Attributes["name"]
resourceGroup := rs.Primary.Attributes["resource_group_name"]
resp, err := conn.Get(name, resourceGroup)
if err != nil {
return nil
}
if resp.StatusCode != http.StatusNotFound {
return fmt.Errorf("CDN Profile still exists:\n%#v", resp.Properties)
}
}
return nil
}
var testAccAzureRMCdnProfile_basic = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup1"
location = "West US"
}
resource "azurerm_cdn_profile" "test" {
name = "acceptanceTestCdnProfile1"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
}
`
var testAccAzureRMCdnProfile_withTags = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup1"
location = "West US"
}
resource "azurerm_cdn_profile" "test" {
name = "acceptanceTestCdnProfile1"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
tags {
environment = "Production"
cost_center = "MSFT"
}
}
`
var testAccAzureRMCdnProfile_withTagsUpdate = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup1"
location = "West US"
}
resource "azurerm_cdn_profile" "test" {
name = "acceptanceTestCdnProfile1"
location = "West US"
resource_group_name = "${azurerm_resource_group.test.name}"
sku = "Standard"
tags {
environment = "staging"
}
}
`

View File

@ -214,7 +214,7 @@ func resourceArmNetworkInterfaceCreate(d *schema.ResourceData, meta interface{})
log.Printf("[DEBUG] Waiting for Network Interface (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: networkInterfaceStateRefreshFunc(client, resGroup, name),
Timeout: 10 * time.Minute,
}

View File

@ -157,7 +157,7 @@ func resourceArmNetworkSecurityGroupCreate(d *schema.ResourceData, meta interfac
log.Printf("[DEBUG] Waiting for Network Security Group (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: securityGroupStateRefreshFunc(client, resGroup, name),
Timeout: 10 * time.Minute,
}

View File

@ -153,7 +153,7 @@ func resourceArmNetworkSecurityRuleCreate(d *schema.ResourceData, meta interface
log.Printf("[DEBUG] Waiting for Network Security Rule (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: securityRuleStateRefreshFunc(client, resGroup, nsgName, name),
Timeout: 10 * time.Minute,
}

View File

@ -142,7 +142,7 @@ func resourceArmPublicIpCreate(d *schema.ResourceData, meta interface{}) error {
log.Printf("[DEBUG] Waiting for Public IP (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: publicIPStateRefreshFunc(client, resGroup, name),
Timeout: 10 * time.Minute,
}

View File

@ -2,11 +2,10 @@ package azurerm
import (
"fmt"
- "math/rand"
"net/http"
"testing"
- "time"
+ "github.com/hashicorp/terraform/helper/acctest"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
@ -65,7 +64,7 @@ func TestResourceAzureRMPublicIpDomainNameLabel_validation(t *testing.T) {
ErrCount: 1,
},
{
- Value: randomString(80),
+ Value: acctest.RandString(80),
ErrCount: 1,
},
}
@ -227,16 +226,6 @@ func testCheckAzureRMPublicIpDestroy(s *terraform.State) error {
return nil
}
- func randomString(strlen int) string {
- rand.Seed(time.Now().UTC().UnixNano())
- const chars = "abcdefghijklmnopqrstuvwxyz0123456789-"
- result := make([]byte, strlen)
- for i := 0; i < strlen; i++ {
- result[i] = chars[rand.Intn(len(chars))]
- }
- return string(result)
- }
var testAccAzureRMVPublicIpStatic_basic = `
resource "azurerm_resource_group" "test" {
name = "acceptanceTestResourceGroup1"

View File

@ -103,7 +103,7 @@ func resourceArmResourceGroupCreate(d *schema.ResourceData, meta interface{}) er
log.Printf("[DEBUG] Waiting for Resource Group (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: resourceGroupStateRefreshFunc(client, name),
Timeout: 10 * time.Minute,
}

View File

@ -95,7 +95,7 @@ func resourceArmRouteCreate(d *schema.ResourceData, meta interface{}) error {
log.Printf("[DEBUG] Waiting for Route (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: routeStateRefreshFunc(client, resGroup, rtName, name),
Timeout: 10 * time.Minute,
}

View File

@ -125,7 +125,7 @@ func resourceArmRouteTableCreate(d *schema.ResourceData, meta interface{}) error
log.Printf("[DEBUG] Waiting for Route Table (%s) to become available", name)
stateConf := &resource.StateChangeConf{
Pending: []string{"Accepted", "Updating"},
- Target: "Succeeded",
+ Target: []string{"Succeeded"},
Refresh: routeTableStateRefreshFunc(client, resGroup, name),
Timeout: 10 * time.Minute,
}

View File

@ -0,0 +1,292 @@
package azurerm
import (
"fmt"
"net/http"
"regexp"
"strings"
"github.com/Azure/azure-sdk-for-go/arm/storage"
"github.com/hashicorp/terraform/helper/schema"
)
func resourceArmStorageAccount() *schema.Resource {
return &schema.Resource{
Create: resourceArmStorageAccountCreate,
Read: resourceArmStorageAccountRead,
Update: resourceArmStorageAccountUpdate,
Delete: resourceArmStorageAccountDelete,
Schema: map[string]*schema.Schema{
"name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
ValidateFunc: validateArmStorageAccountName,
},
"resource_group_name": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
"location": &schema.Schema{
Type: schema.TypeString,
Required: true,
ForceNew: true,
StateFunc: azureRMNormalizeLocation,
},
"account_type": &schema.Schema{
Type: schema.TypeString,
Required: true,
ValidateFunc: validateArmStorageAccountType,
},
"primary_location": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_location": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"primary_blob_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_blob_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"primary_queue_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_queue_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"primary_table_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"secondary_table_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
// NOTE: The API does not appear to expose a secondary file endpoint
"primary_file_endpoint": &schema.Schema{
Type: schema.TypeString,
Computed: true,
},
"tags": tagsSchema(),
},
}
}
func resourceArmStorageAccountCreate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
resourceGroupName := d.Get("resource_group_name").(string)
storageAccountName := d.Get("name").(string)
accountType := d.Get("account_type").(string)
location := d.Get("location").(string)
tags := d.Get("tags").(map[string]interface{})
opts := storage.AccountCreateParameters{
Location: &location,
Properties: &storage.AccountPropertiesCreateParameters{
AccountType: storage.AccountType(accountType),
},
Tags: expandTags(tags),
}
accResp, err := client.Create(resourceGroupName, storageAccountName, opts)
if err != nil {
return fmt.Errorf("Error creating Azure Storage Account '%s': %s", storageAccountName, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK)
if err != nil {
return fmt.Errorf("Error creating Azure Storage Account %q: %s", storageAccountName, err)
}
// The only way to get the ID back apparently is to read the resource again
account, err := client.GetProperties(resourceGroupName, storageAccountName)
if err != nil {
return fmt.Errorf("Error retrieving Azure Storage Account %q: %s", storageAccountName, err)
}
d.SetId(*account.ID)
return resourceArmStorageAccountRead(d, meta)
}
// resourceArmStorageAccountUpdate is unusual: most resources in the ARM API expose a
// combined, idempotent CreateOrUpdate operation. Storage accounts instead require a
// separate call to Update for each parameter being changed...
func resourceArmStorageAccountUpdate(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
storageAccountName := id.Path["storageAccounts"]
resourceGroupName := id.ResourceGroup
d.Partial(true)
if d.HasChange("account_type") {
accountType := d.Get("account_type").(string)
opts := storage.AccountUpdateParameters{
Properties: &storage.AccountPropertiesUpdateParameters{
AccountType: storage.AccountType(accountType),
},
}
accResp, err := client.Update(resourceGroupName, storageAccountName, opts)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account type %q: %s", storageAccountName, err)
}
d.SetPartial("account_type")
}
if d.HasChange("tags") {
tags := d.Get("tags").(map[string]interface{})
opts := storage.AccountUpdateParameters{
Tags: expandTags(tags),
}
accResp, err := client.Update(resourceGroupName, storageAccountName, opts)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account tags %q: %s", storageAccountName, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response.Response, http.StatusOK)
if err != nil {
return fmt.Errorf("Error updating Azure Storage Account tags %q: %s", storageAccountName, err)
}
d.SetPartial("tags")
}
d.Partial(false)
return nil
}
func resourceArmStorageAccountRead(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
name := id.Path["storageAccounts"]
resGroup := id.ResourceGroup
resp, err := client.GetProperties(resGroup, name)
if err != nil {
if resp.StatusCode == http.StatusNoContent {
d.SetId("")
return nil
}
return fmt.Errorf("Error reading the state of AzureRM Storage Account %q: %s", name, err)
}
d.Set("location", resp.Location)
d.Set("account_type", resp.Properties.AccountType)
d.Set("primary_location", resp.Properties.PrimaryLocation)
d.Set("secondary_location", resp.Properties.SecondaryLocation)
if resp.Properties.PrimaryEndpoints != nil {
d.Set("primary_blob_endpoint", resp.Properties.PrimaryEndpoints.Blob)
d.Set("primary_queue_endpoint", resp.Properties.PrimaryEndpoints.Queue)
d.Set("primary_table_endpoint", resp.Properties.PrimaryEndpoints.Table)
d.Set("primary_file_endpoint", resp.Properties.PrimaryEndpoints.File)
}
if resp.Properties.SecondaryEndpoints != nil {
if resp.Properties.SecondaryEndpoints.Blob != nil {
d.Set("secondary_blob_endpoint", resp.Properties.SecondaryEndpoints.Blob)
} else {
d.Set("secondary_blob_endpoint", "")
}
if resp.Properties.SecondaryEndpoints.Queue != nil {
d.Set("secondary_queue_endpoint", resp.Properties.SecondaryEndpoints.Queue)
} else {
d.Set("secondary_queue_endpoint", "")
}
if resp.Properties.SecondaryEndpoints.Table != nil {
d.Set("secondary_table_endpoint", resp.Properties.SecondaryEndpoints.Table)
} else {
d.Set("secondary_table_endpoint", "")
}
}
flattenAndSetTags(d, resp.Tags)
return nil
}
func resourceArmStorageAccountDelete(d *schema.ResourceData, meta interface{}) error {
client := meta.(*ArmClient).storageServiceClient
id, err := parseAzureResourceID(d.Id())
if err != nil {
return err
}
name := id.Path["storageAccounts"]
resGroup := id.ResourceGroup
accResp, err := client.Delete(resGroup, name)
if err != nil {
return fmt.Errorf("Error issuing AzureRM delete request for storage account %q: %s", name, err)
}
_, err = pollIndefinitelyAsNeeded(client.Client, accResp.Response, http.StatusNotFound)
if err != nil {
return fmt.Errorf("Error polling for AzureRM delete request for storage account %q: %s", name, err)
}
return nil
}
func validateArmStorageAccountName(v interface{}, k string) (ws []string, es []error) {
input := v.(string)
if !regexp.MustCompile(`\A([a-z0-9]{3,24})\z`).MatchString(input) {
es = append(es, fmt.Errorf("name can only consist of lowercase letters and numbers, and must be between 3 and 24 characters long"))
}
return
}
func validateArmStorageAccountType(v interface{}, k string) (ws []string, es []error) {
validAccountTypes := []string{"standard_lrs", "standard_zrs",
"standard_grs", "standard_ragrs", "premium_lrs"}
input := strings.ToLower(v.(string))
for _, valid := range validAccountTypes {
if valid == input {
return
}
}
es = append(es, fmt.Errorf("Invalid storage account type %q", input))
return
}
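validateArmStorageAccountName above anchors a single regular expression over the whole input. A minimal standalone check with the same rule — 3 to 24 characters, lowercase letters and digits only:

```go
package main

import (
	"fmt"
	"regexp"
)

// storageNameRe uses \A and \z so the whole string must match, not just
// a substring — the same anchoring as the validator above.
var storageNameRe = regexp.MustCompile(`\A[a-z0-9]{3,24}\z`)

func main() {
	for _, name := range []string{"abc123", "ab", "TooUpper", "123456789012345678901234"} {
		fmt.Printf("%s valid=%v\n", name, storageNameRe.MatchString(name))
	}
}
```

Note that without the `\A`/`\z` anchors, `MatchString` would accept any string containing a valid run, such as "ABCabc123", so the anchors carry the length and character-set constraints.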

View File

@ -0,0 +1,166 @@
package azurerm
import (
"fmt"
"net/http"
"testing"
"github.com/hashicorp/terraform/helper/resource"
"github.com/hashicorp/terraform/terraform"
)
func TestValidateArmStorageAccountType(t *testing.T) {
testCases := []struct {
input string
shouldError bool
}{
{"standard_lrs", false},
{"invalid", true},
}
for _, test := range testCases {
_, es := validateArmStorageAccountType(test.input, "account_type")
if test.shouldError && len(es) == 0 {
t.Fatalf("Expected validating account_type %q to fail", test.input)
}
}
}
func TestValidateArmStorageAccountName(t *testing.T) {
testCases := []struct {
input string
shouldError bool
}{
{"ab", true},
{"ABC", true},
{"abc", false},
{"123456789012345678901234", false},
{"1234567890123456789012345", true},
{"abc12345", false},
}
for _, test := range testCases {
_, es := validateArmStorageAccountName(test.input, "name")
if test.shouldError && len(es) == 0 {
t.Fatalf("Expected validating name %q to fail", test.input)
}
}
}
func TestAccAzureRMStorageAccount_basic(t *testing.T) {
resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testCheckAzureRMStorageAccountDestroy,
Steps: []resource.TestStep{
resource.TestStep{
Config: testAccAzureRMStorageAccount_basic,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMStorageAccountExists("azurerm_storage_account.testsa"),
resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "account_type", "Standard_LRS"),
resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.#", "1"),
resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.environment", "production"),
),
},
resource.TestStep{
Config: testAccAzureRMStorageAccount_update,
Check: resource.ComposeTestCheckFunc(
testCheckAzureRMStorageAccountExists("azurerm_storage_account.testsa"),
resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "account_type", "Standard_GRS"),
resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.#", "1"),
resource.TestCheckResourceAttr("azurerm_storage_account.testsa", "tags.environment", "staging"),
),
},
},
})
}
func testCheckAzureRMStorageAccountExists(name string) resource.TestCheckFunc {
return func(s *terraform.State) error {
// Ensure we have enough information in state to look up in API
rs, ok := s.RootModule().Resources[name]
if !ok {
return fmt.Errorf("Not found: %s", name)
}
storageAccount := rs.Primary.Attributes["name"]
resourceGroup := rs.Primary.Attributes["resource_group_name"]
// Ensure resource group exists in API
conn := testAccProvider.Meta().(*ArmClient).storageServiceClient
resp, err := conn.GetProperties(resourceGroup, storageAccount)
if err != nil {
return fmt.Errorf("Bad: Get on storageServiceClient: %s", err)
}
if resp.StatusCode == http.StatusNotFound {
return fmt.Errorf("Bad: StorageAccount %q (resource group: %q) does not exist", storageAccount, resourceGroup)
}
return nil
}
}
func testCheckAzureRMStorageAccountDestroy(s *terraform.State) error {
conn := testAccProvider.Meta().(*ArmClient).storageServiceClient
for _, rs := range s.RootModule().Resources {
if rs.Type != "azurerm_storage_account" {
continue
}
name := rs.Primary.Attributes["name"]
resourceGroup := rs.Primary.Attributes["resource_group_name"]
resp, err := conn.GetProperties(resourceGroup, name)
if err != nil {
return nil
}
if resp.StatusCode != http.StatusNotFound {
return fmt.Errorf("Storage Account still exists:\n%#v", resp.Properties)
}
}
return nil
}
var testAccAzureRMStorageAccount_basic = `
resource "azurerm_resource_group" "testrg" {
name = "testAccAzureRMStorageAccountBasic"
location = "westus"
}
resource "azurerm_storage_account" "testsa" {
name = "unlikely23exst2acct1435"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_LRS"
tags {
environment = "production"
}
}`
var testAccAzureRMStorageAccount_update = `
resource "azurerm_resource_group" "testrg" {
name = "testAccAzureRMStorageAccountBasic"
location = "westus"
}
resource "azurerm_storage_account" "testsa" {
name = "unlikely23exst2acct1435"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "westus"
account_type = "Standard_GRS"
tags {
environment = "staging"
}
}`


@@ -112,7 +112,7 @@ func resourceArmSubnetCreate(d *schema.ResourceData, meta interface{}) error {
 	log.Printf("[DEBUG] Waiting for Subnet (%s) to become available", name)
 	stateConf := &resource.StateChangeConf{
 		Pending: []string{"Accepted", "Updating"},
-		Target:  "Succeeded",
+		Target:  []string{"Succeeded"},
 		Refresh: subnetRuleStateRefreshFunc(client, resGroup, vnetName, name),
 		Timeout: 10 * time.Minute,
 	}


@@ -109,7 +109,7 @@ func resourceArmVirtualNetworkCreate(d *schema.ResourceData, meta interface{}) e
 	log.Printf("[DEBUG] Waiting for Virtual Network (%s) to become available", name)
 	stateConf := &resource.StateChangeConf{
 		Pending: []string{"Accepted", "Updating"},
-		Target:  "Succeeded",
+		Target:  []string{"Succeeded"},
 		Refresh: virtualNetworkStateRefreshFunc(client, resGroup, name),
 		Timeout: 10 * time.Minute,
 	}


@@ -2,7 +2,6 @@ package cloudstack
 import (
 	"fmt"
-	"regexp"
 	"strconv"
 	"strings"
 	"sync"
@@ -82,6 +81,12 @@ func resourceCloudStackEgressFirewall() *schema.Resource {
 				},
 			},
 		},
+
+		"parallelism": &schema.Schema{
+			Type:     schema.TypeInt,
+			Optional: true,
+			Default:  2,
+		},
 		},
 	}
 }
@@ -131,7 +136,7 @@ func createEgressFirewallRules(
 	var wg sync.WaitGroup
 	wg.Add(nrs.Len())
-	sem := make(chan struct{}, 10)
+	sem := make(chan struct{}, d.Get("parallelism").(int))
 	for _, rule := range nrs.List() {
 		// Put in a tiny sleep here to avoid DoS'ing the API
 		time.Sleep(500 * time.Millisecond)
@@ -198,9 +203,6 @@ func createEgressFirewallRule(
 	// Create an empty schema.Set to hold all processed ports
 	ports := &schema.Set{F: schema.HashString}
-	// Define a regexp for parsing the port
-	re := regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)
 	for _, port := range ps.List() {
 		if _, ok := uuids[port.(string)]; ok {
 			ports.Add(port)
@@ -208,7 +210,7 @@
 			continue
 		}
-		m := re.FindStringSubmatch(port.(string))
+		m := splitPorts.FindStringSubmatch(port.(string))
 		startPort, err := strconv.Atoi(m[1])
 		if err != nil {
@@ -441,7 +443,7 @@ func deleteEgressFirewallRules(
 	var wg sync.WaitGroup
 	wg.Add(ors.Len())
-	sem := make(chan struct{}, 10)
+	sem := make(chan struct{}, d.Get("parallelism").(int))
 	for _, rule := range ors.List() {
 		// Put a sleep here to avoid DoS'ing the API
 		time.Sleep(500 * time.Millisecond)
@@ -536,7 +538,7 @@ func verifyEgressFirewallRuleParams(d *schema.ResourceData, rule map[string]inte
 	protocol := rule["protocol"].(string)
 	if protocol != "tcp" && protocol != "udp" && protocol != "icmp" {
 		return fmt.Errorf(
-			"%s is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol)
+			"%q is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol)
 	}
 	if protocol == "icmp" {
@@ -549,9 +551,17 @@ func verifyEgressFirewallRuleParams(d *schema.ResourceData, rule map[string]inte
 				"Parameter icmp_code is a required parameter when using protocol 'icmp'")
 		}
 	} else {
-		if _, ok := rule["ports"]; !ok {
-			return fmt.Errorf(
-				"Parameter port is a required parameter when using protocol 'tcp' or 'udp'")
+		if ports, ok := rule["ports"].(*schema.Set); ok {
+			for _, port := range ports.List() {
+				m := splitPorts.FindStringSubmatch(port.(string))
+				if m == nil {
+					return fmt.Errorf(
+						"%q is not a valid port value. Valid options are '80' or '80-90'", port.(string))
+				}
+			}
+		} else {
+			return fmt.Errorf(
+				"Parameter ports is a required parameter when *not* using protocol 'icmp'")
		}
 	}
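The recurring `sem := make(chan struct{}, …)` change above replaces a hard-coded concurrency limit of 10 with the new `parallelism` argument. The underlying pattern is a buffered channel used as a counting semaphore; here is a minimal self-contained sketch of that technique (the task and `runBounded` helper are illustrative, not from the provider):

```go
package main

import (
	"fmt"
	"sync"
)

// runBounded runs one goroutine per task but allows at most
// `parallelism` of them to be inside the critical section at once:
// sending on the buffered channel acquires a slot, receiving releases it.
func runBounded(tasks []int, parallelism int) int {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		sum int
	)
	sem := make(chan struct{}, parallelism)

	wg.Add(len(tasks))
	for _, t := range tasks {
		go func(t int) {
			defer wg.Done()

			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done

			mu.Lock()
			sum += t // stand-in for the real API call
			mu.Unlock()
		}(t)
	}

	wg.Wait()
	return sum
}

func main() {
	fmt.Println(runBounded([]int{1, 2, 3, 4}, 2)) // 10
}
```

Making the channel capacity configurable, as this diff does, lets users throttle rule creation against rate-limited CloudStack APIs.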


@@ -2,7 +2,6 @@ package cloudstack
 import (
 	"fmt"
-	"regexp"
 	"strconv"
 	"strings"
 	"sync"
@@ -82,6 +81,12 @@ func resourceCloudStackFirewall() *schema.Resource {
 				},
 			},
 		},
+
+		"parallelism": &schema.Schema{
+			Type:     schema.TypeInt,
+			Optional: true,
+			Default:  2,
+		},
 		},
 	}
 }
@@ -130,7 +135,7 @@ func createFirewallRules(
 	var wg sync.WaitGroup
 	wg.Add(nrs.Len())
-	sem := make(chan struct{}, 10)
+	sem := make(chan struct{}, d.Get("parallelism").(int))
 	for _, rule := range nrs.List() {
 		// Put in a tiny sleep here to avoid DoS'ing the API
 		time.Sleep(500 * time.Millisecond)
@@ -199,9 +204,6 @@ func createFirewallRule(
 	// Create an empty schema.Set to hold all processed ports
 	ports := &schema.Set{F: schema.HashString}
-	// Define a regexp for parsing the port
-	re := regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)
 	for _, port := range ps.List() {
 		if _, ok := uuids[port.(string)]; ok {
 			ports.Add(port)
@@ -209,7 +211,7 @@
 			continue
 		}
-		m := re.FindStringSubmatch(port.(string))
+		m := splitPorts.FindStringSubmatch(port.(string))
 		startPort, err := strconv.Atoi(m[1])
 		if err != nil {
@@ -442,7 +444,7 @@ func deleteFirewallRules(
 	var wg sync.WaitGroup
 	wg.Add(ors.Len())
-	sem := make(chan struct{}, 10)
+	sem := make(chan struct{}, d.Get("parallelism").(int))
 	for _, rule := range ors.List() {
 		// Put a sleep here to avoid DoS'ing the API
 		time.Sleep(500 * time.Millisecond)
@@ -537,7 +539,7 @@ func verifyFirewallRuleParams(d *schema.ResourceData, rule map[string]interface{
 	protocol := rule["protocol"].(string)
 	if protocol != "tcp" && protocol != "udp" && protocol != "icmp" {
 		return fmt.Errorf(
-			"%s is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol)
+			"%q is not a valid protocol. Valid options are 'tcp', 'udp' and 'icmp'", protocol)
 	}
 	if protocol == "icmp" {
@@ -550,9 +552,17 @@ func verifyFirewallRuleParams(d *schema.ResourceData, rule map[string]interface{
 				"Parameter icmp_code is a required parameter when using protocol 'icmp'")
 		}
 	} else {
-		if _, ok := rule["ports"]; !ok {
-			return fmt.Errorf(
-				"Parameter port is a required parameter when using protocol 'tcp' or 'udp'")
+		if ports, ok := rule["ports"].(*schema.Set); ok {
+			for _, port := range ports.List() {
+				m := splitPorts.FindStringSubmatch(port.(string))
+				if m == nil {
+					return fmt.Errorf(
+						"%q is not a valid port value. Valid options are '80' or '80-90'", port.(string))
+				}
+			}
+		} else {
+			return fmt.Errorf(
+				"Parameter ports is a required parameter when *not* using protocol 'icmp'")
		}
 	}


@@ -4,6 +4,7 @@ import (
 	"fmt"
 	"log"
 	"net"
+	"strconv"
 	"strings"

 	"github.com/hashicorp/terraform/helper/schema"
@@ -35,11 +36,38 @@ func resourceCloudStackNetwork() *schema.Resource {
 			ForceNew: true,
 		},

+		"gateway": &schema.Schema{
+			Type:     schema.TypeString,
+			Optional: true,
+			Computed: true,
+			ForceNew: true,
+		},
+
+		"startip": &schema.Schema{
+			Type:     schema.TypeString,
+			Optional: true,
+			Computed: true,
+			ForceNew: true,
+		},
+
+		"endip": &schema.Schema{
+			Type:     schema.TypeString,
+			Optional: true,
+			Computed: true,
+			ForceNew: true,
+		},
+
 		"network_offering": &schema.Schema{
 			Type:     schema.TypeString,
 			Required: true,
 		},

+		"vlan": &schema.Schema{
+			Type:     schema.TypeInt,
+			Optional: true,
+			ForceNew: true,
+		},
+
 		"vpc": &schema.Schema{
 			Type:     schema.TypeString,
 			Optional: true,
@@ -91,22 +119,24 @@ func resourceCloudStackNetworkCreate(d *schema.ResourceData, meta interface{}) e
 	if !ok {
 		displaytext = name
 	}

 	// Create a new parameter struct
 	p := cs.Network.NewCreateNetworkParams(displaytext.(string), name, networkofferingid, zoneid)

-	// Get the network details from the CIDR
-	m, err := parseCIDR(d.Get("cidr").(string))
+	m, err := parseCIDR(d)
 	if err != nil {
 		return err
 	}

 	// Set the needed IP config
-	p.SetStartip(m["start"])
+	p.SetStartip(m["startip"])
 	p.SetGateway(m["gateway"])
-	p.SetEndip(m["end"])
+	p.SetEndip(m["endip"])
 	p.SetNetmask(m["netmask"])

+	if vlan, ok := d.GetOk("vlan"); ok {
+		p.SetVlan(strconv.Itoa(vlan.(int)))
+	}
+
 	// Check is this network needs to be created in a VPC
 	vpc := d.Get("vpc").(string)
 	if vpc != "" {
@@ -170,6 +200,7 @@ func resourceCloudStackNetworkRead(d *schema.ResourceData, meta interface{}) err
 	d.Set("name", n.Name)
 	d.Set("display_text", n.Displaytext)
 	d.Set("cidr", n.Cidr)
+	d.Set("gateway", n.Gateway)

 	// Read the tags and sort them on a map
 	tags := make(map[string]string)
@@ -256,9 +287,10 @@ func resourceCloudStackNetworkDelete(d *schema.ResourceData, meta interface{}) e
 	return nil
 }

-func parseCIDR(cidr string) (map[string]string, error) {
+func parseCIDR(d *schema.ResourceData) (map[string]string, error) {
 	m := make(map[string]string, 4)

+	cidr := d.Get("cidr").(string)
 	ip, ipnet, err := net.ParseCIDR(cidr)
 	if err != nil {
 		return nil, fmt.Errorf("Unable to parse cidr %s: %s", cidr, err)
@@ -268,10 +300,25 @@ func parseCIDR(cidr string) (map[string]string, error) {
 	sub := ip.Mask(msk)

 	m["netmask"] = fmt.Sprintf("%d.%d.%d.%d", msk[0], msk[1], msk[2], msk[3])
+
+	if gateway, ok := d.GetOk("gateway"); ok {
+		m["gateway"] = gateway.(string)
+	} else {
 		m["gateway"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+1)
-	m["start"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+2)
-	m["end"] = fmt.Sprintf("%d.%d.%d.%d",
-		sub[0]+(0xff-msk[0]), sub[1]+(0xff-msk[1]), sub[2]+(0xff-msk[2]), sub[3]+(0xff-msk[3]-1))
+	}
+
+	if startip, ok := d.GetOk("startip"); ok {
+		m["startip"] = startip.(string)
+	} else {
+		m["startip"] = fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+2)
+	}
+
+	if endip, ok := d.GetOk("endip"); ok {
+		m["endip"] = endip.(string)
+	} else {
+		m["endip"] = fmt.Sprintf("%d.%d.%d.%d",
+			sub[0]+(0xff-msk[0]), sub[1]+(0xff-msk[1]), sub[2]+(0xff-msk[2]), sub[3]+(0xff-msk[3]-1))
+	}

 	return m, nil
 }
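When `gateway`, `startip`, or `endip` are not supplied, `parseCIDR` above derives them from the `cidr` argument: netmask from the mask bytes, gateway at network + 1, first usable address at network + 2, last usable address at broadcast − 1. A standalone sketch of just that fallback arithmetic, using only the standard library (`cidrDefaults` is an illustrative name, not part of the provider):

```go
package main

import (
	"fmt"
	"net"
)

// cidrDefaults mirrors the IPv4 fallback logic in parseCIDR: it derives
// netmask, gateway, and the first/last usable addresses from a CIDR.
func cidrDefaults(cidr string) (map[string]string, error) {
	ip, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, fmt.Errorf("unable to parse cidr %s: %s", cidr, err)
	}

	msk := ipnet.Mask
	sub := ip.Mask(msk).To4() // network address as 4 bytes

	return map[string]string{
		"netmask": fmt.Sprintf("%d.%d.%d.%d", msk[0], msk[1], msk[2], msk[3]),
		// network + 1
		"gateway": fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+1),
		// network + 2
		"startip": fmt.Sprintf("%d.%d.%d.%d", sub[0], sub[1], sub[2], sub[3]+2),
		// broadcast - 1
		"endip": fmt.Sprintf("%d.%d.%d.%d",
			sub[0]+(0xff-msk[0]), sub[1]+(0xff-msk[1]),
			sub[2]+(0xff-msk[2]), sub[3]+(0xff-msk[3]-1)),
	}, nil
}

func main() {
	m, _ := cidrDefaults("10.0.1.0/24")
	fmt.Println(m["gateway"], m["startip"], m["endip"]) // 10.0.1.1 10.0.1.2 10.0.1.254
}
```

Note the byte-wise arithmetic assumes an IPv4 mask and a block of at least /30; the diff keeps the same assumption.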


@@ -2,7 +2,6 @@ package cloudstack
 import (
 	"fmt"
-	"regexp"
 	"strconv"
 	"strings"
 	"sync"
@@ -94,6 +93,12 @@ func resourceCloudStackNetworkACLRule() *schema.Resource {
 				},
 			},
 		},
+
+		"parallelism": &schema.Schema{
+			Type:     schema.TypeInt,
+			Optional: true,
+			Default:  2,
+		},
 		},
 	}
 }
@@ -135,7 +140,7 @@ func createNetworkACLRules(
 	var wg sync.WaitGroup
 	wg.Add(nrs.Len())
-	sem := make(chan struct{}, 10)
+	sem := make(chan struct{}, d.Get("parallelism").(int))
 	for _, rule := range nrs.List() {
 		// Put in a tiny sleep here to avoid DoS'ing the API
 		time.Sleep(500 * time.Millisecond)
@@ -224,9 +229,6 @@ func createNetworkACLRule(
 	// Create an empty schema.Set to hold all processed ports
 	ports := &schema.Set{F: schema.HashString}
-	// Define a regexp for parsing the port
-	re := regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)
 	for _, port := range ps.List() {
 		if _, ok := uuids[port.(string)]; ok {
 			ports.Add(port)
@@ -234,7 +236,7 @@
 			continue
 		}
-		m := re.FindStringSubmatch(port.(string))
+		m := splitPorts.FindStringSubmatch(port.(string))
 		startPort, err := strconv.Atoi(m[1])
 		if err != nil {
@@ -495,7 +497,7 @@ func deleteNetworkACLRules(
 	var wg sync.WaitGroup
 	wg.Add(ors.Len())
-	sem := make(chan struct{}, 10)
+	sem := make(chan struct{}, d.Get("parallelism").(int))
 	for _, rule := range ors.List() {
 		// Put a sleep here to avoid DoS'ing the API
 		time.Sleep(500 * time.Millisecond)
@@ -607,7 +609,15 @@ func verifyNetworkACLRuleParams(d *schema.ResourceData, rule map[string]interfac
 	case "all":
 		// No additional test are needed, so just leave this empty...
 	case "tcp", "udp":
-		if _, ok := rule["ports"]; !ok {
+		if ports, ok := rule["ports"].(*schema.Set); ok {
+			for _, port := range ports.List() {
+				m := splitPorts.FindStringSubmatch(port.(string))
+				if m == nil {
+					return fmt.Errorf(
+						"%q is not a valid port value. Valid options are '80' or '80-90'", port.(string))
+				}
+			}
+		} else {
 			return fmt.Errorf(
 				"Parameter ports is a required parameter when *not* using protocol 'icmp'")
 		}
@@ -615,7 +625,7 @@ func verifyNetworkACLRuleParams(d *schema.ResourceData, rule map[string]interfac
 		_, err := strconv.ParseInt(protocol, 0, 0)
 		if err != nil {
 			return fmt.Errorf(
-				"%s is not a valid protocol. Valid options are 'tcp', 'udp', "+
+				"%q is not a valid protocol. Valid options are 'tcp', 'udp', "+
 					"'icmp', 'all' or a valid protocol number", protocol)
 		}
 	}


@@ -14,6 +14,9 @@ import (
 // UnlimitedResourceID is a "special" ID to define an unlimited resource
 const UnlimitedResourceID = "-1"

+// Define a regexp for parsing the port
+var splitPorts = regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)
+
 type retrieveError struct {
 	name  string
 	value string
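The new package-level `splitPorts` pattern replaces the per-call `regexp.MustCompile` in each resource: it matches either a single port (`"80"`) or a range (`"80-90"`), with the optional second capture group empty for the single-port form. A minimal standalone sketch of parsing both forms (the `parsePortRange` helper name is illustrative, not from the provider):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// splitPorts matches "80" or "80-90": group 1 is the start port and
// group 2, when present, is the end port.
var splitPorts = regexp.MustCompile(`^(\d+)(?:-(\d+))?$`)

// parsePortRange returns the start and end port; for a single port the
// end equals the start, and invalid input yields an error.
func parsePortRange(port string) (int, int, error) {
	m := splitPorts.FindStringSubmatch(port)
	if m == nil {
		return 0, 0, fmt.Errorf("%q is not a valid port value", port)
	}

	start, err := strconv.Atoi(m[1])
	if err != nil {
		return 0, 0, err
	}

	end := start
	if m[2] != "" { // the optional "-end" part was supplied
		if end, err = strconv.Atoi(m[2]); err != nil {
			return 0, 0, err
		}
	}
	return start, end, nil
}

func main() {
	s, e, _ := parsePortRange("80-90")
	fmt.Println(s, e) // 80 90
}
```

Validating against this pattern up front, as the new `verify…RuleParams` code does, also guards the later `strconv.Atoi(m[1])` calls, which would panic on a nil match.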


@@ -418,7 +418,7 @@ func WaitForDropletAttribute(
 	stateConf := &resource.StateChangeConf{
 		Pending: pending,
-		Target:  target,
+		Target:  []string{target},
 		Refresh: newDropletStateRefreshFunc(d, attribute, meta),
 		Timeout: 60 * time.Minute,
 		Delay:   10 * time.Second,


@@ -13,6 +13,7 @@ import (
 func resourceDigitalOceanFloatingIp() *schema.Resource {
 	return &schema.Resource{
 		Create: resourceDigitalOceanFloatingIpCreate,
+		Update: resourceDigitalOceanFloatingIpUpdate,
 		Read:   resourceDigitalOceanFloatingIpRead,
 		Delete: resourceDigitalOceanFloatingIpDelete,
@@ -32,7 +33,6 @@ func resourceDigitalOceanFloatingIp() *schema.Resource {
 			"droplet_id": &schema.Schema{
 				Type:     schema.TypeInt,
 				Optional: true,
-				ForceNew: true,
 			},
 		},
 	}
@@ -73,6 +73,42 @@ func resourceDigitalOceanFloatingIpCreate(d *schema.ResourceData, meta interface
 	return resourceDigitalOceanFloatingIpRead(d, meta)
 }

+func resourceDigitalOceanFloatingIpUpdate(d *schema.ResourceData, meta interface{}) error {
+	client := meta.(*godo.Client)
+
+	if d.HasChange("droplet_id") {
+		if v, ok := d.GetOk("droplet_id"); ok {
+			log.Printf("[INFO] Assigning the Floating IP %s to the Droplet %d", d.Id(), v.(int))
+			action, _, err := client.FloatingIPActions.Assign(d.Id(), v.(int))
+			if err != nil {
+				return fmt.Errorf(
+					"Error Assigning FloatingIP (%s) to the droplet: %s", d.Id(), err)
+			}
+
+			_, unassignedErr := waitForFloatingIPReady(d, "completed", []string{"new", "in-progress"}, "status", meta, action.ID)
+			if unassignedErr != nil {
+				return fmt.Errorf(
+					"Error waiting for FloatingIP (%s) to be Assigned: %s", d.Id(), unassignedErr)
+			}
+		} else {
+			log.Printf("[INFO] Unassigning the Floating IP %s", d.Id())
+			action, _, err := client.FloatingIPActions.Unassign(d.Id())
+			if err != nil {
+				return fmt.Errorf(
+					"Error Unassigning FloatingIP (%s): %s", d.Id(), err)
+			}
+
+			_, unassignedErr := waitForFloatingIPReady(d, "completed", []string{"new", "in-progress"}, "status", meta, action.ID)
+			if unassignedErr != nil {
+				return fmt.Errorf(
+					"Error waiting for FloatingIP (%s) to be Unassigned: %s", d.Id(), unassignedErr)
+			}
+		}
+	}
+
+	return resourceDigitalOceanFloatingIpRead(d, meta)
+}
+
 func resourceDigitalOceanFloatingIpRead(d *schema.ResourceData, meta interface{}) error {
 	client := meta.(*godo.Client)
@@ -131,7 +167,7 @@ func waitForFloatingIPReady(
 	stateConf := &resource.StateChangeConf{
 		Pending: pending,
-		Target:  target,
+		Target:  []string{target},
 		Refresh: newFloatingIPStateRefreshFunc(d, attribute, meta, actionId),
 		Timeout: 60 * time.Minute,
 		Delay:   10 * time.Second,


@@ -74,6 +74,10 @@ func resourceDMERecord() *schema.Resource {
 				Type:     schema.TypeString,
 				Optional: true,
 			},
+			"gtdLocation": &schema.Schema{
+				Type:     schema.TypeString,
+				Optional: true,
+			},
 		},
 	}
 }
@@ -168,6 +172,9 @@ func getAll(d *schema.ResourceData, cr map[string]interface{}) error {
 	if attr, ok := d.GetOk("value"); ok {
 		cr["value"] = attr.(string)
 	}
+	if attr, ok := d.GetOk("gtdLocation"); ok {
+		cr["gtdLocation"] = attr.(string)
+	}
 	switch strings.ToUpper(d.Get("type").(string)) {
 	case "A", "CNAME", "ANAME", "TXT", "SPF", "NS", "PTR", "AAAA":
@@ -213,6 +220,10 @@ func setAll(d *schema.ResourceData, rec *dnsmadeeasy.Record) error {
 	d.Set("name", rec.Name)
 	d.Set("ttl", rec.TTL)
 	d.Set("value", rec.Value)
+	// only set gtdLocation if it is given as this is optional.
+	if rec.GtdLocation != "" {
+		d.Set("gtdLocation", rec.GtdLocation)
+	}
 	switch rec.Type {
 	case "A", "CNAME", "ANAME", "TXT", "SPF", "NS", "PTR":


@@ -36,6 +36,8 @@ func TestAccDMERecord_basic(t *testing.T) {
 					"dme_record.test", "value", "1.1.1.1"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -65,6 +67,8 @@ func TestAccDMERecordCName(t *testing.T) {
 					"dme_record.test", "value", "foo"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -131,6 +135,8 @@ func TestAccDMERecordMX(t *testing.T) {
 					"dme_record.test", "mxLevel", "10"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -172,6 +178,8 @@ func TestAccDMERecordHTTPRED(t *testing.T) {
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -201,6 +209,8 @@ func TestAccDMERecordTXT(t *testing.T) {
 					"dme_record.test", "value", "\"foo\""),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -230,6 +240,8 @@ func TestAccDMERecordSPF(t *testing.T) {
 					"dme_record.test", "value", "\"foo\""),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -259,6 +271,8 @@ func TestAccDMERecordPTR(t *testing.T) {
 					"dme_record.test", "value", "foo"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -288,6 +302,8 @@ func TestAccDMERecordNS(t *testing.T) {
 					"dme_record.test", "value", "foo"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -317,6 +333,8 @@ func TestAccDMERecordAAAA(t *testing.T) {
 					"dme_record.test", "value", "fe80::0202:b3ff:fe1e:8329"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -352,6 +370,8 @@ func TestAccDMERecordSRV(t *testing.T) {
 					"dme_record.test", "port", "30"),
 				resource.TestCheckResourceAttr(
 					"dme_record.test", "ttl", "2000"),
+				resource.TestCheckResourceAttr(
+					"dme_record.test", "gtdLocation", "DEFAULT"),
 			),
 		},
 	},
@@ -413,6 +433,7 @@ resource "dme_record" "test" {
   type = "A"
   value = "1.1.1.1"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigCName = `
@@ -422,6 +443,7 @@ resource "dme_record" "test" {
   type = "CNAME"
   value = "foo"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigAName = `
@@ -431,6 +453,7 @@ resource "dme_record" "test" {
   type = "ANAME"
   value = "foo"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigMX = `
@@ -441,6 +464,7 @@ resource "dme_record" "test" {
   value = "foo"
   mxLevel = 10
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigHTTPRED = `
@@ -455,6 +479,7 @@ resource "dme_record" "test" {
   keywords = "terraform example"
   description = "This is a description"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigTXT = `
@@ -464,6 +489,7 @@ resource "dme_record" "test" {
   type = "TXT"
   value = "foo"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigSPF = `
@@ -473,6 +499,7 @@ resource "dme_record" "test" {
   type = "SPF"
   value = "foo"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigPTR = `
@@ -482,6 +509,7 @@ resource "dme_record" "test" {
   type = "PTR"
   value = "foo"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigNS = `
@@ -491,6 +519,7 @@ resource "dme_record" "test" {
   type = "NS"
   value = "foo"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigAAAA = `
@@ -500,6 +529,7 @@ resource "dme_record" "test" {
   type = "AAAA"
   value = "FE80::0202:B3FF:FE1E:8329"
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`

 const testDMERecordConfigSRV = `
@@ -512,4 +542,5 @@ resource "dme_record" "test" {
   weight = 20
   port = 30
   ttl = 2000
+  gtdLocation = "DEFAULT"
 }`


@@ -63,7 +63,7 @@ func (w *ComputeOperationWaiter) RefreshFunc() resource.StateRefreshFunc {
 func (w *ComputeOperationWaiter) Conf() *resource.StateChangeConf {
 	return &resource.StateChangeConf{
 		Pending: []string{"PENDING", "RUNNING"},
-		Target:  "DONE",
+		Target:  []string{"DONE"},
 		Refresh: w.RefreshFunc(),
 	}
 }


@@ -32,7 +32,7 @@ func (w *DnsChangeWaiter) RefreshFunc() resource.StateRefreshFunc {
 func (w *DnsChangeWaiter) Conf() *resource.StateChangeConf {
 	return &resource.StateChangeConf{
 		Pending: []string{"pending"},
-		Target:  "done",
+		Target:  []string{"done"},
 		Refresh: w.RefreshFunc(),
 	}
 }


@@ -53,6 +53,25 @@ func resourceComputeInstanceGroupManager() *schema.Resource {
 			Required: true,
 		},

+		"named_port": &schema.Schema{
+			Type:     schema.TypeList,
+			Optional: true,
+			Elem: &schema.Resource{
+				Schema: map[string]*schema.Schema{
+					"name": &schema.Schema{
+						Type:     schema.TypeString,
+						Required: true,
+					},
+
+					"port": &schema.Schema{
+						Type:     schema.TypeInt,
+						Required: true,
+					},
+				},
+			},
+		},
+
 		"update_strategy": &schema.Schema{
 			Type:     schema.TypeString,
 			Optional: true,
@@ -88,6 +107,18 @@ func resourceComputeInstanceGroupManager() *schema.Resource {
 	}
 }

+func getNamedPorts(nps []interface{}) []*compute.NamedPort {
+	namedPorts := make([]*compute.NamedPort, 0, len(nps))
+	for _, v := range nps {
+		np := v.(map[string]interface{})
+		namedPorts = append(namedPorts, &compute.NamedPort{
+			Name: np["name"].(string),
+			Port: int64(np["port"].(int)),
+		})
+	}
+	return namedPorts
+}
+
 func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta interface{}) error {
 	config := meta.(*Config)
@@ -110,6 +141,10 @@ func resourceComputeInstanceGroupManagerCreate(d *schema.ResourceData, meta inte
 		manager.Description = v.(string)
 	}

+	if v, ok := d.GetOk("named_port"); ok {
+		manager.NamedPorts = getNamedPorts(v.([]interface{}))
+	}
+
 	if attr := d.Get("target_pools").(*schema.Set); attr.Len() > 0 {
 		var s []string
 		for _, v := range attr.List() {
@@ -160,6 +195,7 @@ func resourceComputeInstanceGroupManagerRead(d *schema.ResourceData, meta interf
 	}

 	// Set computed fields
+	d.Set("named_port", manager.NamedPorts)
 	d.Set("fingerprint", manager.Fingerprint)
 	d.Set("instance_group", manager.InstanceGroup)
 	d.Set("target_size", manager.TargetSize)
@@ -253,6 +289,31 @@ func resourceComputeInstanceGroupManagerUpdate(d *schema.ResourceData, meta inte
 		d.SetPartial("instance_template")
 	}

+	// If named_port changes then update:
+	if d.HasChange("named_port") {
+
+		// Build the parameters for a "SetNamedPorts" request:
+		namedPorts := getNamedPorts(d.Get("named_port").([]interface{}))
+		setNamedPorts := &compute.InstanceGroupsSetNamedPortsRequest{
+			NamedPorts: namedPorts,
+		}
+
+		// Make the request:
+		op, err := config.clientCompute.InstanceGroups.SetNamedPorts(
+			config.Project, d.Get("zone").(string), d.Id(), setNamedPorts).Do()
+		if err != nil {
+			return fmt.Errorf("Error updating InstanceGroupManager: %s", err)
+		}
+
+		// Wait for the operation to complete:
+		err = computeOperationWaitZone(config, op, d.Get("zone").(string), "Updating InstanceGroupManager")
+		if err != nil {
+			return err
+		}
+
+		d.SetPartial("named_port")
+	}
+
 	// If size changes trigger a resize
 	if d.HasChange("target_size") {
 		if v, ok := d.GetOk("target_size"); ok {


@@ -55,6 +55,10 @@ func TestAccInstanceGroupManager_update(t *testing.T) {
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckInstanceGroupManagerExists(
 						"google_compute_instance_group_manager.igm-update", &manager),
+					testAccCheckInstanceGroupManagerNamedPorts(
+						"google_compute_instance_group_manager.igm-update",
+						map[string]int64{"customhttp": 8080},
+						&manager),
 				),
 			},
 			resource.TestStep{
@@ -65,6 +69,10 @@ func TestAccInstanceGroupManager_update(t *testing.T) {
 					testAccCheckInstanceGroupManagerUpdated(
 						"google_compute_instance_group_manager.igm-update", 3,
 						"google_compute_target_pool.igm-update", template2),
+					testAccCheckInstanceGroupManagerNamedPorts(
+						"google_compute_instance_group_manager.igm-update",
+						map[string]int64{"customhttp": 8080, "customhttps": 8443},
+						&manager),
 				),
 			},
 		},
@@ -157,6 +165,42 @@ func testAccCheckInstanceGroupManagerUpdated(n string, size int64, targetPool st
 	}
 }
+func testAccCheckInstanceGroupManagerNamedPorts(n string, np map[string]int64, instanceGroupManager *compute.InstanceGroupManager) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[n]
+		if !ok {
+			return fmt.Errorf("Not found: %s", n)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("No ID is set")
+		}
+
+		config := testAccProvider.Meta().(*Config)
+
+		manager, err := config.clientCompute.InstanceGroupManagers.Get(
+			config.Project, rs.Primary.Attributes["zone"], rs.Primary.ID).Do()
+		if err != nil {
+			return err
+		}
+
+		var found bool
+		for _, namedPort := range manager.NamedPorts {
+			found = false
+			for name, port := range np {
+				if namedPort.Name == name && namedPort.Port == port {
+					found = true
+				}
+			}
+			if !found {
+				return fmt.Errorf("named port incorrect")
+			}
+		}
+
+		return nil
+	}
+}
+
 func testAccInstanceGroupManager_basic(template, target, igm1, igm2 string) string {
 	return fmt.Sprintf(`
 resource "google_compute_instance_template" "igm-basic" {
@@ -252,6 +296,10 @@ func testAccInstanceGroupManager_update(template, target, igm string) string {
 	base_instance_name = "igm-update"
 	zone = "us-central1-c"
 	target_size = 2
+	named_port {
+		name = "customhttp"
+		port = 8080
+	}
 }`, template, target, igm)
 }
@@ -322,5 +370,13 @@ func testAccInstanceGroupManager_update2(template1, target, template2, igm strin
 	base_instance_name = "igm-update"
 	zone = "us-central1-c"
 	target_size = 3
+	named_port {
+		name = "customhttp"
+		port = 8080
+	}
+	named_port {
+		name = "customhttps"
+		port = 8443
+	}
 }`, template1, target, template2, igm)
 }


@@ -281,7 +281,7 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) er
 	// Wait until it's created
 	wait := resource.StateChangeConf{
 		Pending:    []string{"PENDING", "RUNNING"},
-		Target:     "DONE",
+		Target:     []string{"DONE"},
 		Timeout:    30 * time.Minute,
 		MinTimeout: 3 * time.Second,
 		Refresh: func() (interface{}, string, error) {
@@ -373,7 +373,7 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) er
 	// Wait until it's updated
 	wait := resource.StateChangeConf{
 		Pending:    []string{"PENDING", "RUNNING"},
-		Target:     "DONE",
+		Target:     []string{"DONE"},
 		Timeout:    10 * time.Minute,
 		MinTimeout: 2 * time.Second,
 		Refresh: func() (interface{}, string, error) {
@@ -413,7 +413,7 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) er
 	// Wait until it's deleted
 	wait := resource.StateChangeConf{
 		Pending:    []string{"PENDING", "RUNNING"},
-		Target:     "DONE",
+		Target:     []string{"DONE"},
 		Timeout:    10 * time.Minute,
 		MinTimeout: 3 * time.Second,
 		Refresh: func() (interface{}, string, error) {


@@ -37,7 +37,7 @@ func (w *SqlAdminOperationWaiter) RefreshFunc() resource.StateRefreshFunc {
 func (w *SqlAdminOperationWaiter) Conf() *resource.StateChangeConf {
 	return &resource.StateChangeConf{
 		Pending: []string{"PENDING", "RUNNING"},
-		Target:  "DONE",
+		Target:  []string{"DONE"},
 		Refresh: w.RefreshFunc(),
 	}
 }


@@ -5,12 +5,14 @@ import (
 	"testing"

 	"github.com/cyberdelia/heroku-go/v3"
+	"github.com/hashicorp/terraform/helper/acctest"
 	"github.com/hashicorp/terraform/helper/resource"
 	"github.com/hashicorp/terraform/terraform"
 )

 func TestAccHerokuAddon_Basic(t *testing.T) {
 	var addon heroku.Addon
+	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))

 	resource.Test(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
@@ -18,14 +20,14 @@ func TestAccHerokuAddon_Basic(t *testing.T) {
 		CheckDestroy: testAccCheckHerokuAddonDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: testAccCheckHerokuAddonConfig_basic,
+				Config: testAccCheckHerokuAddonConfig_basic(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon),
 					testAccCheckHerokuAddonAttributes(&addon, "deployhooks:http"),
 					resource.TestCheckResourceAttr(
 						"heroku_addon.foobar", "config.0.url", "http://google.com"),
 					resource.TestCheckResourceAttr(
-						"heroku_addon.foobar", "app", "terraform-test-app"),
+						"heroku_addon.foobar", "app", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_addon.foobar", "plan", "deployhooks:http"),
 				),
@@ -37,6 +39,7 @@ func TestAccHerokuAddon_Basic(t *testing.T) {
 // GH-198
 func TestAccHerokuAddon_noPlan(t *testing.T) {
 	var addon heroku.Addon
+	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))

 	resource.Test(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
@@ -44,23 +47,23 @@ func TestAccHerokuAddon_noPlan(t *testing.T) {
 		CheckDestroy: testAccCheckHerokuAddonDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: testAccCheckHerokuAddonConfig_no_plan,
+				Config: testAccCheckHerokuAddonConfig_no_plan(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon),
 					testAccCheckHerokuAddonAttributes(&addon, "memcachier:dev"),
 					resource.TestCheckResourceAttr(
-						"heroku_addon.foobar", "app", "terraform-test-app"),
+						"heroku_addon.foobar", "app", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_addon.foobar", "plan", "memcachier"),
 				),
 			},
 			resource.TestStep{
-				Config: testAccCheckHerokuAddonConfig_no_plan,
+				Config: testAccCheckHerokuAddonConfig_no_plan(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAddonExists("heroku_addon.foobar", &addon),
 					testAccCheckHerokuAddonAttributes(&addon, "memcachier:dev"),
 					resource.TestCheckResourceAttr(
-						"heroku_addon.foobar", "app", "terraform-test-app"),
+						"heroku_addon.foobar", "app", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_addon.foobar", "plan", "memcachier"),
 				),
@@ -128,9 +131,10 @@ func testAccCheckHerokuAddonExists(n string, addon *heroku.Addon) resource.TestC
 	}
 }

-const testAccCheckHerokuAddonConfig_basic = `
+func testAccCheckHerokuAddonConfig_basic(appName string) string {
+	return fmt.Sprintf(`
 resource "heroku_app" "foobar" {
-    name = "terraform-test-app"
+    name = "%s"
     region = "us"
 }
@@ -140,15 +144,18 @@ resource "heroku_addon" "foobar" {
     config {
         url = "http://google.com"
     }
-}`
+}`, appName)
+}

-const testAccCheckHerokuAddonConfig_no_plan = `
+func testAccCheckHerokuAddonConfig_no_plan(appName string) string {
+	return fmt.Sprintf(`
 resource "heroku_app" "foobar" {
-    name = "terraform-test-app"
+    name = "%s"
     region = "us"
 }

 resource "heroku_addon" "foobar" {
     app = "${heroku_app.foobar.name}"
     plan = "memcachier"
-}`
+}`, appName)
+}


@@ -6,12 +6,14 @@ import (
 	"testing"

 	"github.com/cyberdelia/heroku-go/v3"
+	"github.com/hashicorp/terraform/helper/acctest"
 	"github.com/hashicorp/terraform/helper/resource"
 	"github.com/hashicorp/terraform/terraform"
 )

 func TestAccHerokuApp_Basic(t *testing.T) {
 	var app heroku.App
+	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))

 	resource.Test(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
@@ -19,12 +21,12 @@ func TestAccHerokuApp_Basic(t *testing.T) {
 		CheckDestroy: testAccCheckHerokuAppDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: testAccCheckHerokuAppConfig_basic,
+				Config: testAccCheckHerokuAppConfig_basic(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAppExists("heroku_app.foobar", &app),
-					testAccCheckHerokuAppAttributes(&app),
+					testAccCheckHerokuAppAttributes(&app, appName),
 					resource.TestCheckResourceAttr(
-						"heroku_app.foobar", "name", "terraform-test-app"),
+						"heroku_app.foobar", "name", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_app.foobar", "config_vars.0.FOO", "bar"),
 				),
@@ -35,6 +37,8 @@ func TestAccHerokuApp_Basic(t *testing.T) {

 func TestAccHerokuApp_NameChange(t *testing.T) {
 	var app heroku.App
+	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
+	appName2 := fmt.Sprintf("%s-v2", appName)

 	resource.Test(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
@@ -42,23 +46,23 @@ func TestAccHerokuApp_NameChange(t *testing.T) {
 		CheckDestroy: testAccCheckHerokuAppDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: testAccCheckHerokuAppConfig_basic,
+				Config: testAccCheckHerokuAppConfig_basic(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAppExists("heroku_app.foobar", &app),
-					testAccCheckHerokuAppAttributes(&app),
+					testAccCheckHerokuAppAttributes(&app, appName),
 					resource.TestCheckResourceAttr(
-						"heroku_app.foobar", "name", "terraform-test-app"),
+						"heroku_app.foobar", "name", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_app.foobar", "config_vars.0.FOO", "bar"),
 				),
 			},
 			resource.TestStep{
-				Config: testAccCheckHerokuAppConfig_updated,
+				Config: testAccCheckHerokuAppConfig_updated(appName2),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAppExists("heroku_app.foobar", &app),
-					testAccCheckHerokuAppAttributesUpdated(&app),
+					testAccCheckHerokuAppAttributesUpdated(&app, appName2),
 					resource.TestCheckResourceAttr(
-						"heroku_app.foobar", "name", "terraform-test-renamed"),
+						"heroku_app.foobar", "name", appName2),
 					resource.TestCheckResourceAttr(
 						"heroku_app.foobar", "config_vars.0.FOO", "bing"),
 					resource.TestCheckResourceAttr(
@@ -71,6 +75,7 @@ func TestAccHerokuApp_NameChange(t *testing.T) {

 func TestAccHerokuApp_NukeVars(t *testing.T) {
 	var app heroku.App
+	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))

 	resource.Test(t, resource.TestCase{
 		PreCheck:     func() { testAccPreCheck(t) },
@@ -78,23 +83,23 @@ func TestAccHerokuApp_NukeVars(t *testing.T) {
 		CheckDestroy: testAccCheckHerokuAppDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: testAccCheckHerokuAppConfig_basic,
+				Config: testAccCheckHerokuAppConfig_basic(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAppExists("heroku_app.foobar", &app),
-					testAccCheckHerokuAppAttributes(&app),
+					testAccCheckHerokuAppAttributes(&app, appName),
 					resource.TestCheckResourceAttr(
-						"heroku_app.foobar", "name", "terraform-test-app"),
+						"heroku_app.foobar", "name", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_app.foobar", "config_vars.0.FOO", "bar"),
 				),
 			},
 			resource.TestStep{
-				Config: testAccCheckHerokuAppConfig_no_vars,
+				Config: testAccCheckHerokuAppConfig_no_vars(appName),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAppExists("heroku_app.foobar", &app),
-					testAccCheckHerokuAppAttributesNoVars(&app),
+					testAccCheckHerokuAppAttributesNoVars(&app, appName),
 					resource.TestCheckResourceAttr(
-						"heroku_app.foobar", "name", "terraform-test-app"),
+						"heroku_app.foobar", "name", appName),
 					resource.TestCheckResourceAttr(
 						"heroku_app.foobar", "config_vars.0.FOO", ""),
 				),
@@ -105,6 +110,7 @@ func TestAccHerokuApp_NukeVars(t *testing.T) {

 func TestAccHerokuApp_Organization(t *testing.T) {
 	var app heroku.OrganizationApp
+	appName := fmt.Sprintf("tftest-%s", acctest.RandString(10))
 	org := os.Getenv("HEROKU_ORGANIZATION")

 	resource.Test(t, resource.TestCase{
@@ -118,10 +124,10 @@ func TestAccHerokuApp_Organization(t *testing.T) {
 		CheckDestroy: testAccCheckHerokuAppDestroy,
 		Steps: []resource.TestStep{
 			resource.TestStep{
-				Config: fmt.Sprintf(testAccCheckHerokuAppConfig_organization, org),
+				Config: testAccCheckHerokuAppConfig_organization(appName, org),
 				Check: resource.ComposeTestCheckFunc(
 					testAccCheckHerokuAppExistsOrg("heroku_app.foobar", &app),
-					testAccCheckHerokuAppAttributesOrg(&app, org),
+					testAccCheckHerokuAppAttributesOrg(&app, appName, org),
 				),
 			},
 		},
@@ -146,7 +152,7 @@ func testAccCheckHerokuAppDestroy(s *terraform.State) error {
 	return nil
 }

-func testAccCheckHerokuAppAttributes(app *heroku.App) resource.TestCheckFunc {
+func testAccCheckHerokuAppAttributes(app *heroku.App, appName string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		client := testAccProvider.Meta().(*heroku.Service)
@@ -158,7 +164,7 @@ func testAccCheckHerokuAppAttributes(app *heroku.App) resource.TestCheckFunc {
 			return fmt.Errorf("Bad stack: %s", app.Stack.Name)
 		}

-		if app.Name != "terraform-test-app" {
+		if app.Name != appName {
 			return fmt.Errorf("Bad name: %s", app.Name)
 		}
@@ -175,11 +181,11 @@ func testAccCheckHerokuAppAttributes(app *heroku.App) resource.TestCheckFunc {
 	}
 }

-func testAccCheckHerokuAppAttributesUpdated(app *heroku.App) resource.TestCheckFunc {
+func testAccCheckHerokuAppAttributesUpdated(app *heroku.App, appName string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		client := testAccProvider.Meta().(*heroku.Service)

-		if app.Name != "terraform-test-renamed" {
+		if app.Name != appName {
 			return fmt.Errorf("Bad name: %s", app.Name)
 		}
@@ -202,11 +208,11 @@ func testAccCheckHerokuAppAttributesUpdated(app *heroku.App) resource.TestCheckF
 	}
 }

-func testAccCheckHerokuAppAttributesNoVars(app *heroku.App) resource.TestCheckFunc {
+func testAccCheckHerokuAppAttributesNoVars(app *heroku.App, appName string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		client := testAccProvider.Meta().(*heroku.Service)

-		if app.Name != "terraform-test-app" {
+		if app.Name != appName {
 			return fmt.Errorf("Bad name: %s", app.Name)
 		}
@@ -223,7 +229,7 @@ func testAccCheckHerokuAppAttributesNoVars(app *heroku.App) resource.TestCheckFu
 	}
 }

-func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, org string) resource.TestCheckFunc {
+func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, appName string, org string) resource.TestCheckFunc {
 	return func(s *terraform.State) error {
 		client := testAccProvider.Meta().(*heroku.Service)
@@ -235,7 +241,7 @@ func testAccCheckHerokuAppAttributesOrg(app *heroku.OrganizationApp, org string)
 			return fmt.Errorf("Bad stack: %s", app.Stack.Name)
 		}

-		if app.Name != "terraform-test-app" {
+		if app.Name != appName {
 			return fmt.Errorf("Bad name: %s", app.Name)
 		}
@@ -316,36 +322,43 @@ func testAccCheckHerokuAppExistsOrg(n string, app *heroku.OrganizationApp) resou
 	}
 }

-const testAccCheckHerokuAppConfig_basic = `
+func testAccCheckHerokuAppConfig_basic(appName string) string {
+	return fmt.Sprintf(`
 resource "heroku_app" "foobar" {
-    name = "terraform-test-app"
+    name = "%s"
     region = "us"

     config_vars {
         FOO = "bar"
     }
-}`
+}`, appName)
+}

-const testAccCheckHerokuAppConfig_updated = `
+func testAccCheckHerokuAppConfig_updated(appName string) string {
+	return fmt.Sprintf(`
 resource "heroku_app" "foobar" {
-    name = "terraform-test-renamed"
+    name = "%s"
     region = "us"

     config_vars {
         FOO = "bing"
         BAZ = "bar"
     }
-}`
+}`, appName)
+}

-const testAccCheckHerokuAppConfig_no_vars = `
+func testAccCheckHerokuAppConfig_no_vars(appName string) string {
+	return fmt.Sprintf(`
 resource "heroku_app" "foobar" {
-    name = "terraform-test-app"
+    name = "%s"
     region = "us"
-}`
+}`, appName)
+}

-const testAccCheckHerokuAppConfig_organization = `
+func testAccCheckHerokuAppConfig_organization(appName, org string) string {
+	return fmt.Sprintf(`
 resource "heroku_app" "foobar" {
-    name = "terraform-test-app"
+    name = "%s"
     region = "us"

     organization {
@@ -355,4 +368,5 @@ resource "heroku_app" "foobar" {
     config_vars {
         FOO = "bar"
     }
-}`
+}`, appName, org)
+}


@@ -3,7 +3,9 @@ package mailgun
 import (
 	"fmt"
 	"log"
+	"time"

+	"github.com/hashicorp/terraform/helper/resource"
 	"github.com/hashicorp/terraform/helper/schema"
 	"github.com/pearkes/mailgun"
 )
@@ -143,7 +145,16 @@ func resourceMailgunDomainDelete(d *schema.ResourceData, meta interface{}) error
 		return fmt.Errorf("Error deleting domain: %s", err)
 	}

-	return nil
+	// Give the destroy a chance to take effect
+	return resource.Retry(1*time.Minute, func() error {
+		_, err = client.RetrieveDomain(d.Id())
+		if err == nil {
+			log.Printf("[INFO] Retrying until domain disappears...")
+			return fmt.Errorf("Domain seems to still exist; will check again.")
+		}
+		log.Printf("[INFO] Got error looking for domain, seems gone: %s", err)
+		return nil
+	})
 }

 func resourceMailgunDomainRead(d *schema.ResourceData, meta interface{}) error {


@@ -48,10 +48,10 @@ func testAccCheckMailgunDomainDestroy(s *terraform.State) error {
 			continue
 		}

-		_, err := client.RetrieveDomain(rs.Primary.ID)
+		resp, err := client.RetrieveDomain(rs.Primary.ID)

 		if err == nil {
-			return fmt.Errorf("Domain still exists")
+			return fmt.Errorf("Domain still exists: %#v", resp)
 		}
 	}


@@ -137,7 +137,7 @@ func resourceBlockStorageVolumeV1Create(d *schema.ResourceData, meta interface{}
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"downloading", "creating"},
-		Target:     "available",
+		Target:     []string{"available"},
 		Refresh:    VolumeV1StateRefreshFunc(blockStorageClient, v.ID),
 		Timeout:    10 * time.Minute,
 		Delay:      10 * time.Second,
@@ -243,7 +243,7 @@ func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{}
 		stateConf := &resource.StateChangeConf{
 			Pending:    []string{"in-use", "attaching"},
-			Target:     "available",
+			Target:     []string{"available"},
 			Refresh:    VolumeV1StateRefreshFunc(blockStorageClient, d.Id()),
 			Timeout:    10 * time.Minute,
 			Delay:      10 * time.Second,
@@ -273,7 +273,7 @@ func resourceBlockStorageVolumeV1Delete(d *schema.ResourceData, meta interface{}
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"deleting", "downloading", "available"},
-		Target:     "deleted",
+		Target:     []string{"deleted"},
 		Refresh:    VolumeV1StateRefreshFunc(blockStorageClient, d.Id()),
 		Timeout:    10 * time.Minute,
 		Delay:      10 * time.Second,


@@ -411,7 +411,7 @@ func resourceComputeInstanceV2Create(d *schema.ResourceData, meta interface{}) e
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"BUILD"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    ServerV2StateRefreshFunc(computeClient, server.ID),
 		Timeout:    30 * time.Minute,
 		Delay:      10 * time.Second,
@@ -744,7 +744,7 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e
 		stateConf := &resource.StateChangeConf{
 			Pending:    []string{"RESIZE"},
-			Target:     "VERIFY_RESIZE",
+			Target:     []string{"VERIFY_RESIZE"},
 			Refresh:    ServerV2StateRefreshFunc(computeClient, d.Id()),
 			Timeout:    3 * time.Minute,
 			Delay:      10 * time.Second,
@@ -765,7 +765,7 @@ func resourceComputeInstanceV2Update(d *schema.ResourceData, meta interface{}) e
 			stateConf = &resource.StateChangeConf{
 				Pending:    []string{"VERIFY_RESIZE"},
-				Target:     "ACTIVE",
+				Target:     []string{"ACTIVE"},
 				Refresh:    ServerV2StateRefreshFunc(computeClient, d.Id()),
 				Timeout:    3 * time.Minute,
 				Delay:      10 * time.Second,
@@ -798,7 +798,7 @@ func resourceComputeInstanceV2Delete(d *schema.ResourceData, meta interface{}) e
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    ServerV2StateRefreshFunc(computeClient, d.Id()),
 		Timeout:    30 * time.Minute,
 		Delay:      10 * time.Second,
@@ -1158,7 +1158,7 @@ func attachVolumesToInstance(computeClient *gophercloud.ServiceClient, blockClie
 			stateConf := &resource.StateChangeConf{
 				Pending:    []string{"attaching", "available"},
-				Target:     "in-use",
+				Target:     []string{"in-use"},
 				Refresh:    VolumeV1StateRefreshFunc(blockClient, va["volume_id"].(string)),
 				Timeout:    30 * time.Minute,
 				Delay:      5 * time.Second,
@@ -1185,7 +1185,7 @@ func detachVolumesFromInstance(computeClient *gophercloud.ServiceClient, blockCl
 			stateConf := &resource.StateChangeConf{
 				Pending:    []string{"detaching", "in-use"},
-				Target:     "available",
+				Target:     []string{"available"},
 				Refresh:    VolumeV1StateRefreshFunc(blockClient, va["volume_id"].(string)),
 				Timeout:    30 * time.Minute,
 				Delay:      5 * time.Second,


@@ -217,7 +217,7 @@ func resourceComputeSecGroupV2Delete(d *schema.ResourceData, meta interface{}) e
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    SecGroupV2StateRefreshFunc(computeClient, d),
 		Timeout:    10 * time.Minute,
 		Delay:      10 * time.Second,


@@ -81,7 +81,7 @@ func resourceFWFirewallV1Create(d *schema.ResourceData, meta interface{}) error
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"PENDING_CREATE"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForFirewallActive(networkingClient, firewall.ID),
 		Timeout:    30 * time.Second,
 		Delay:      0,
@@ -150,7 +150,7 @@ func resourceFWFirewallV1Update(d *schema.ResourceData, meta interface{}) error
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"PENDING_CREATE", "PENDING_UPDATE"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForFirewallActive(networkingClient, d.Id()),
 		Timeout:    30 * time.Second,
 		Delay:      0,
@@ -178,7 +178,7 @@ func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"PENDING_CREATE", "PENDING_UPDATE"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForFirewallActive(networkingClient, d.Id()),
 		Timeout:    30 * time.Second,
 		Delay:      0,
@@ -195,7 +195,7 @@ func resourceFWFirewallV1Delete(d *schema.ResourceData, meta interface{}) error
 	stateConf = &resource.StateChangeConf{
 		Pending:    []string{"DELETING"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForFirewallDeletion(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      0,


@@ -116,7 +116,7 @@ func resourceLBMonitorV1Create(d *schema.ResourceData, meta interface{}) error {
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"PENDING"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForLBMonitorActive(networkingClient, m.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -206,7 +206,7 @@ func resourceLBMonitorV1Delete(d *schema.ResourceData, meta interface{}) error {
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE", "PENDING"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForLBMonitorDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -130,7 +130,7 @@ func resourceLBPoolV1Create(d *schema.ResourceData, meta interface{}) error {
 	log.Printf("[DEBUG] Waiting for OpenStack LB pool (%s) to become available.", p.ID)

 	stateConf := &resource.StateChangeConf{
-		Target:  "ACTIVE",
+		Target:  []string{"ACTIVE"},
 		Refresh: waitForLBPoolActive(networkingClient, p.ID),
 		Timeout: 2 * time.Minute,
 		Delay:   5 * time.Second,
@@ -294,7 +294,7 @@ func resourceLBPoolV1Delete(d *schema.ResourceData, meta interface{}) error {
 	stateConf := &resource.StateChangeConf{
 		Pending: []string{"ACTIVE"},
-		Target:  "DELETED",
+		Target:  []string{"DELETED"},
 		Refresh: waitForLBPoolDelete(networkingClient, d.Id()),
 		Timeout: 2 * time.Minute,
 		Delay:   5 * time.Second,


@@ -134,7 +134,7 @@ func resourceLBVipV1Create(d *schema.ResourceData, meta interface{}) error {
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"PENDING_CREATE"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForLBVIPActive(networkingClient, p.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -265,7 +265,7 @@ func resourceLBVipV1Delete(d *schema.ResourceData, meta interface{}) error {
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForLBVIPDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -74,7 +74,7 @@ func resourceNetworkFloatingIPV2Create(d *schema.ResourceData, meta interface{})
 	log.Printf("[DEBUG] Waiting for OpenStack Neutron Floating IP (%s) to become available.", floatingIP.ID)
 	stateConf := &resource.StateChangeConf{
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForFloatingIPActive(networkingClient, floatingIP.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -143,7 +143,7 @@ func resourceNetworkFloatingIPV2Delete(d *schema.ResourceData, meta interface{})
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForFloatingIPDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -95,7 +95,7 @@ func resourceNetworkingNetworkV2Create(d *schema.ResourceData, meta interface{})
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"BUILD"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForNetworkActive(networkingClient, n.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -182,7 +182,7 @@ func resourceNetworkingNetworkV2Delete(d *schema.ResourceData, meta interface{})
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForNetworkDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -127,7 +127,7 @@ func resourceNetworkingPortV2Create(d *schema.ResourceData, meta interface{}) er
 	log.Printf("[DEBUG] Waiting for OpenStack Neutron Port (%s) to become available.", p.ID)
 	stateConf := &resource.StateChangeConf{
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForNetworkPortActive(networkingClient, p.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -220,7 +220,7 @@ func resourceNetworkingPortV2Delete(d *schema.ResourceData, meta interface{}) er
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForNetworkPortDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -68,7 +68,7 @@ func resourceNetworkingRouterInterfaceV2Create(d *schema.ResourceData, meta inte
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"BUILD", "PENDING_CREATE", "PENDING_UPDATE"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForRouterInterfaceActive(networkingClient, n.PortID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -117,7 +117,7 @@ func resourceNetworkingRouterInterfaceV2Delete(d *schema.ResourceData, meta inte
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForRouterInterfaceDelete(networkingClient, d),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -87,7 +87,7 @@ func resourceNetworkingRouterV2Create(d *schema.ResourceData, meta interface{})
 	log.Printf("[DEBUG] Waiting for OpenStack Neutron Router (%s) to become available", n.ID)
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"BUILD", "PENDING_CREATE", "PENDING_UPDATE"},
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForRouterActive(networkingClient, n.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -167,7 +167,7 @@ func resourceNetworkingRouterV2Delete(d *schema.ResourceData, meta interface{})
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForRouterDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -146,7 +146,7 @@ func resourceNetworkingSubnetV2Create(d *schema.ResourceData, meta interface{})
 	log.Printf("[DEBUG] Waiting for Subnet (%s) to become available", s.ID)
 	stateConf := &resource.StateChangeConf{
-		Target:     "ACTIVE",
+		Target:     []string{"ACTIVE"},
 		Refresh:    waitForSubnetActive(networkingClient, s.ID),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,
@@ -237,7 +237,7 @@ func resourceNetworkingSubnetV2Delete(d *schema.ResourceData, meta interface{})
 	stateConf := &resource.StateChangeConf{
 		Pending:    []string{"ACTIVE"},
-		Target:     "DELETED",
+		Target:     []string{"DELETED"},
 		Refresh:    waitForSubnetDelete(networkingClient, d.Id()),
 		Timeout:    2 * time.Minute,
 		Delay:      5 * time.Second,


@@ -261,7 +261,7 @@ func resourcePacketDeviceDelete(d *schema.ResourceData, meta interface{}) error
 func waitForDeviceAttribute(d *schema.ResourceData, target string, pending []string, attribute string, meta interface{}) (interface{}, error) {
 	stateConf := &resource.StateChangeConf{
 		Pending: pending,
-		Target:  target,
+		Target:  []string{target},
 		Refresh: newDeviceStateRefreshFunc(d, attribute, meta),
 		Timeout: 60 * time.Minute,
 		Delay:   10 * time.Second,
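The recurring change across these hunks migrates `StateChangeConf.Target` from a single string to a `[]string`, so a wait can accept several terminal states. A minimal self-contained sketch of the matching logic in plain Go (the struct and `reachedTarget` helper below are simplified illustrations, not the real `helper/resource` API):

```go
package main

import "fmt"

// StateChangeConf mirrors, in simplified form, the shape of the config
// after the change: Target is now a slice of acceptable end states.
type StateChangeConf struct {
	Pending []string
	Target  []string
}

// reachedTarget reports whether the observed state is one of the targets.
func (c *StateChangeConf) reachedTarget(state string) bool {
	for _, t := range c.Target {
		if t == state {
			return true
		}
	}
	return false
}

func main() {
	conf := &StateChangeConf{
		Pending: []string{"PENDING"},
		Target:  []string{"ACTIVE"}, // was `Target: "ACTIVE"` before the migration
	}
	fmt.Println(conf.reachedTarget("ACTIVE"))  // true
	fmt.Println(conf.reachedTarget("PENDING")) // false
}
```

Callers that previously passed one string, like the Packet `waitForDeviceAttribute` hunk above, now simply wrap it as `[]string{target}`.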


@@ -19,6 +19,7 @@ func resourceCloudinitConfig() *schema.Resource {
 	return &schema.Resource{
 		Create: resourceCloudinitConfigCreate,
 		Delete: resourceCloudinitConfigDelete,
+		Update: resourceCloudinitConfigCreate,
 		Exists: resourceCloudinitConfigExists,
 		Read:   resourceCloudinitConfigRead,
@@ -26,7 +27,6 @@ func resourceCloudinitConfig() *schema.Resource {
 			"part": &schema.Schema{
 				Type:     schema.TypeList,
 				Required: true,
-				ForceNew: true,
 				Elem: &schema.Resource{
 					Schema: map[string]*schema.Schema{
 						"content_type": &schema.Schema{


@@ -85,3 +85,49 @@ func TestRender(t *testing.T) {
 		})
 	}
 }
+
+func TestCloudConfig_update(t *testing.T) {
+	r.Test(t, r.TestCase{
+		Providers: testProviders,
+		Steps: []r.TestStep{
+			r.TestStep{
+				Config: testCloudInitConfig_basic,
+				Check: r.ComposeTestCheckFunc(
+					r.TestCheckResourceAttr("template_cloudinit_config.config", "rendered", testCloudInitConfig_basic_expected),
+				),
+			},
+			r.TestStep{
+				Config: testCloudInitConfig_update,
+				Check: r.ComposeTestCheckFunc(
+					r.TestCheckResourceAttr("template_cloudinit_config.config", "rendered", testCloudInitConfig_update_expected),
+				),
+			},
+		},
+	})
+}
+
+var testCloudInitConfig_basic = `
+resource "template_cloudinit_config" "config" {
+  part {
+    content_type = "text/x-shellscript"
+    content = "baz"
+  }
+}`
+
+var testCloudInitConfig_basic_expected = `Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY--\r\n`
+
+var testCloudInitConfig_update = `
+resource "template_cloudinit_config" "config" {
+  part {
+    content_type = "text/x-shellscript"
+    content = "baz"
+  }
+
+  part {
+    content_type = "text/x-shellscript"
+    content = "ffbaz"
+  }
+}`
+
+var testCloudInitConfig_update_expected = `Content-Type: multipart/mixed; boundary=\"MIMEBOUNDRY\"\nMIME-Version: 1.0\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nbaz\r\n--MIMEBOUNDRY\r\nContent-Transfer-Encoding: 7bit\r\nContent-Type: text/x-shellscript\r\nMime-Version: 1.0\r\n\r\nffbaz\r\n--MIMEBOUNDRY--\r\n`


@@ -9,6 +9,8 @@ import (
 	"encoding/pem"
 	"fmt"
+
+	"golang.org/x/crypto/ssh"
 	"github.com/hashicorp/terraform/helper/schema"
 )
@@ -80,6 +82,16 @@ func resourcePrivateKey() *schema.Resource {
 				Type:     schema.TypeString,
 				Computed: true,
 			},
+			"public_key_pem": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
+			"public_key_openssh": &schema.Schema{
+				Type:     schema.TypeString,
+				Computed: true,
+			},
 		},
 	}
 }
@@ -100,25 +112,47 @@ func CreatePrivateKey(d *schema.ResourceData, meta interface{}) error {
 	var keyPemBlock *pem.Block
 	switch k := key.(type) {
 	case *rsa.PrivateKey:
-		keyPemBlock = &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(k)}
+		keyPemBlock = &pem.Block{
+			Type:  "RSA PRIVATE KEY",
+			Bytes: x509.MarshalPKCS1PrivateKey(k),
+		}
 	case *ecdsa.PrivateKey:
-		b, err := x509.MarshalECPrivateKey(k)
+		keyBytes, err := x509.MarshalECPrivateKey(k)
 		if err != nil {
 			return fmt.Errorf("error encoding key to PEM: %s", err)
 		}
-		keyPemBlock = &pem.Block{Type: "EC PRIVATE KEY", Bytes: b}
+		keyPemBlock = &pem.Block{
+			Type:  "EC PRIVATE KEY",
+			Bytes: keyBytes,
+		}
 	default:
 		return fmt.Errorf("unsupported private key type")
 	}
 	keyPem := string(pem.EncodeToMemory(keyPemBlock))
-	pubKeyBytes, err := x509.MarshalPKIXPublicKey(publicKey(key))
+	pubKey := publicKey(key)
+	pubKeyBytes, err := x509.MarshalPKIXPublicKey(pubKey)
 	if err != nil {
 		return fmt.Errorf("failed to marshal public key: %s", err)
 	}
+	pubKeyPemBlock := &pem.Block{
+		Type:  "PUBLIC KEY",
+		Bytes: pubKeyBytes,
+	}
 	d.SetId(hashForState(string((pubKeyBytes))))
 	d.Set("private_key_pem", keyPem)
+	d.Set("public_key_pem", string(pem.EncodeToMemory(pubKeyPemBlock)))
+
+	sshPubKey, err := ssh.NewPublicKey(pubKey)
+	if err == nil {
+		// Not all EC types can be SSH keys, so we'll produce this only
+		// if an appropriate type was selected.
+		sshPubKeyBytes := ssh.MarshalAuthorizedKey(sshPubKey)
+		d.Set("public_key_openssh", string(sshPubKeyBytes))
+	} else {
+		d.Set("public_key_openssh", "")
+	}
 	return nil
 }


@@ -18,18 +18,35 @@ func TestPrivateKeyRSA(t *testing.T) {
 resource "tls_private_key" "test" {
     algorithm = "RSA"
 }
-output "key_pem" {
+output "private_key_pem" {
     value = "${tls_private_key.test.private_key_pem}"
 }
+output "public_key_pem" {
+    value = "${tls_private_key.test.public_key_pem}"
+}
+output "public_key_openssh" {
+    value = "${tls_private_key.test.public_key_openssh}"
+}
 `,
 			Check: func(s *terraform.State) error {
-				got := s.RootModule().Outputs["key_pem"]
-				if !strings.HasPrefix(got, "-----BEGIN RSA PRIVATE KEY----") {
-					return fmt.Errorf("key is missing RSA key PEM preamble")
+				gotPrivate := s.RootModule().Outputs["private_key_pem"]
+				if !strings.HasPrefix(gotPrivate, "-----BEGIN RSA PRIVATE KEY----") {
+					return fmt.Errorf("private key is missing RSA key PEM preamble")
 				}
-				if len(got) > 1700 {
-					return fmt.Errorf("key PEM looks too long for a 2048-bit key (got %v characters)", len(got))
+				if len(gotPrivate) > 1700 {
+					return fmt.Errorf("private key PEM looks too long for a 2048-bit key (got %v characters)", len(gotPrivate))
 				}
+				gotPublic := s.RootModule().Outputs["public_key_pem"]
+				if !strings.HasPrefix(gotPublic, "-----BEGIN PUBLIC KEY----") {
+					return fmt.Errorf("public key is missing public key PEM preamble")
+				}
+				gotPublicSSH := s.RootModule().Outputs["public_key_openssh"]
+				if !strings.HasPrefix(gotPublicSSH, "ssh-rsa ") {
+					return fmt.Errorf("SSH public key is missing ssh-rsa prefix")
+				}
 				return nil
 			},
 		},
@@ -67,15 +84,67 @@ func TestPrivateKeyECDSA(t *testing.T) {
 resource "tls_private_key" "test" {
     algorithm = "ECDSA"
 }
-output "key_pem" {
+output "private_key_pem" {
     value = "${tls_private_key.test.private_key_pem}"
 }
+output "public_key_pem" {
+    value = "${tls_private_key.test.public_key_pem}"
+}
+output "public_key_openssh" {
+    value = "${tls_private_key.test.public_key_openssh}"
+}
 `,
 			Check: func(s *terraform.State) error {
-				got := s.RootModule().Outputs["key_pem"]
-				if !strings.HasPrefix(got, "-----BEGIN EC PRIVATE KEY----") {
-					return fmt.Errorf("Key is missing EC key PEM preamble")
+				gotPrivate := s.RootModule().Outputs["private_key_pem"]
+				if !strings.HasPrefix(gotPrivate, "-----BEGIN EC PRIVATE KEY----") {
+					return fmt.Errorf("Private key is missing EC key PEM preamble")
 				}
+				gotPublic := s.RootModule().Outputs["public_key_pem"]
+				if !strings.HasPrefix(gotPublic, "-----BEGIN PUBLIC KEY----") {
+					return fmt.Errorf("public key is missing public key PEM preamble")
+				}
+				gotPublicSSH := s.RootModule().Outputs["public_key_openssh"]
+				if gotPublicSSH != "" {
+					return fmt.Errorf("P224 EC key should not generate OpenSSH public key")
+				}
+				return nil
+			},
+		},
+		r.TestStep{
+			Config: `
+resource "tls_private_key" "test" {
+    algorithm = "ECDSA"
+    ecdsa_curve = "P256"
+}
+output "private_key_pem" {
+    value = "${tls_private_key.test.private_key_pem}"
+}
+output "public_key_pem" {
+    value = "${tls_private_key.test.public_key_pem}"
+}
+output "public_key_openssh" {
+    value = "${tls_private_key.test.public_key_openssh}"
+}
+`,
+			Check: func(s *terraform.State) error {
+				gotPrivate := s.RootModule().Outputs["private_key_pem"]
+				if !strings.HasPrefix(gotPrivate, "-----BEGIN EC PRIVATE KEY----") {
+					return fmt.Errorf("Private key is missing EC key PEM preamble")
+				}
+				gotPublic := s.RootModule().Outputs["public_key_pem"]
+				if !strings.HasPrefix(gotPublic, "-----BEGIN PUBLIC KEY----") {
+					return fmt.Errorf("public key is missing public key PEM preamble")
+				}
+				gotPublicSSH := s.RootModule().Outputs["public_key_openssh"]
+				if !strings.HasPrefix(gotPublicSSH, "ecdsa-sha2-nistp256 ") {
+					return fmt.Errorf("P256 SSH public key is missing ecdsa prefix")
+				}
 				return nil
 			},
 		},

Some files were not shown because too many files have changed in this diff.