Commit Graph

6586 Commits

Author SHA1 Message Date
Clint 1daac2f5c7 provider/aws: Update spot instance request to store new ipv6 (#12571) 2017-03-10 14:49:28 -06:00
Sean Chittenden 17fb98afa2 Circonus Provider (#12338)
* Begin stubbing out the Circonus provider.

* Remove all references to `reverse:secret_key`.

This value is dynamically set by the service and unused by Terraform.

* Update the `circonus_check` resource.

Still a WIP.

* Add docs for the `circonus_check` resource.

Commit miss, this should have been included in the last commit.

* "Fix" serializing check tags

I still need to figure out how I can make them order agnostic w/o using
a TypeSet.  I'm worried that's what I'm going to have to do.

* Spike a quick circonus_broker data source.

* Convert tags to a Set so the order does not matter.

* Add a `circonus_account` data source.

* Correctly spell account.

Pointed out by: @postwait

* Add the `circonus_contact_group` resource.

* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.

* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.

Change made with consent from Circonus to reduce confusion (@postwait, @maier, and several others).

* Use upstream constants where available.

* Import the latest circonus-gometrics.

* Move to using a Set of collectors vs a list attached to a single attribute.

* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.

* Inject a tag automatically.  Update gometrics.

* Checkpoint `circonus_metric` resource.

* Enable provider-level auto-tagging.  This is disabled by default.

* Rearrange metric.  This is an experimental "style" of a provider.  We'll see.

That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet?  That.  That exact
feeling of semi-confidence while being alone in the wilderness.  Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.

We'll know in another resource or two if this was a horrible mistake or
not.

* Begin moving `resource_circonus_check` over to the new world order/structure:

Much of this is WIP and incomplete, but here is the new supported
structure:

```
variable "used_metric_name" {
  default = "_usage`0`_used"
}

resource "circonus_check" "usage" {
  # collectors = ["${var.collectors}"]
  collector {
    id = "${var.collectors[0]}"
  }

  name       = "${var.check_name}"
  notes      = "${var.notes}"

  json {
    url = "https://${var.target}/account/current"

    http_headers = {
      "Accept"                = "application/json"
      "X-Circonus-App-Name"   = "TerraformCheck"
      "X-Circonus-Auth-Token" = "${var.api_token}"
    }
  }

  stream {
    name = "${circonus_metric.used.name}"
    tags = "${circonus_metric.used.tags}"
    type = "${circonus_metric.used.type}"
  }

  tags = {
    source = "circonus"
  }
}

resource "circonus_metric" "used" {
  name = "${var.used_metric_name}"

  tags = {
    source = "circonus"
  }

  type = "numeric"
}
```

* Document the `circonus_metric` resource.

* Updated `circonus_check` docs.

* If a port was present, automatically set it in the Config.

* Alpha sort the check parameters now that they've been renamed.

* Fix a handful of panics as a result of the schema changing.

* Move back to a `TypeSet` for tags after a stint with `TypeMap`.

A set of strings seems to match the API the best.  The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things.  For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality because assigning two
values to the same category is common.  And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back).  I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress.  In this case, simple is good (see the sketch below).

While here, bring some cleanups to _Metric since that was my initial
testing target.
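
As an illustration of the shape this ends up with, here is a minimal sketch (assumed attribute and helper names, not the provider's actual schema) of tags modeled as a `TypeSet` of `category:value` strings, with the element-level `DiffSuppressFunc` mentioned a few bullets below:

```go
package circonus

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// tagsSchema sketches a tag attribute modeled as a set of "category:value"
// strings.  A TypeSet keeps the entries order-agnostic and, unlike a map,
// allows several values under the same category.
func tagsSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,
		Set:      schema.HashString,
		Elem: &schema.Schema{
			Type: schema.TypeString,
			// Normalize case on the element so "Source:Circonus" and
			// "source:circonus" do not produce a spurious diff.
			DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
				return strings.EqualFold(old, new)
			},
		},
	}
}
```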

* Rename `providerConfig` to `_ProviderConfig`

* Checkpoint the `json` check type.

* Fix a few residual issues re: missing descriptions.

* Rename `validateRegexp` to `_ValidateRegexp`

* Use tags as real sets, not just a slice of strings.

* Move the DiffSuppressFunc for tags down to the Elem.

* Fix up unit tests to chase the updated, default hasher function being used.

* Remove `Computed` attribute from `TypeSet` objects.

This fixes a pile of issues re: update that I was having.

* Rename functions.

`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`

* Various small cleanups and comments rolled into a single commit.

* Add a `postgresql` check type for the `circonus_check` resource.

* Rename various validator functions to be _CapitalCase vs capitalCase.

* Err... finish the validator renames.

* Add `GetFloat64()` support.

* Add `icmp_ping` check type support.

* Catch up to the _API*Attr renames.

Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.

* Clarify when the `target` attribute is required for the `postgresql`
check type.

* Correctly pull the metric ID attribute from the right location.

* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")

* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.

* Add support for the `http` check type.

* `s/SSL/TLS/g`

* Add support for `tcp` check types.

* Enumerate the available metrics that are supported for each check type.

* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.

* Add a `circonus_trigger` resource (a.k.a. a Circonus Ruleset).

* Rename a handful of functions to make the direction of information flow
through the provider clear from the function name.

TL;DR: Replace `parse` and `read` with "foo to bar"-like names.

* Fix the attribute name used in a validator.  Absent != After.

* Set the minimum `absent` predicate to 70s per testing.

* Fix the regression tests for circonus_trigger now that `absent` has a 70s minimum.

* Fix up the `tcp` check to require a `host` attribute.

Fix tests.  It's clear I didn't run these before committing/pushing the
`tcp` check last time.

* Fix `circonus_check` for `cloudwatch` checks.

* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.

grep(1)ability of code++

* Slack buttons as an integer are string encoded.

* Fix updates for `circonus_contact`.

* Fix the out parameters for contact groups.

* Move to using `_CastSchemaToTF()` where appropriate.

* Fix circonus_contact_group.  Updates work as expected now.

* Use `_StateSet()` in place of `d.Set()` everywhere.

* Make a quick pass over the collector datasource to modernize its style

* Quick pass for items identified by `golint`.

* Fix up collectors

* Fix the `json` check type.

Reconcile possible sources of drift.  Update now works as expected.

* Normalize trigger durations to seconds.

* Improve the robustness of the state handling for the `circonus_contact_group` resource.

* I'm torn on this, but sort the contact groups in the notify list.

This does mean that if the first contact group in the list has a higher
lexical sort order, the plan won't converge until the offending resource
is tainted and recreated.  But there's also some sorting happening
elsewhere, so... sort and taint for now; this will need to be revisited
in the future.

* Add support for the `httptrap` check type.

* Remove empty units from the state file.

* A metric cluster can return a 404.  Detect this accordingly in its
respective Exists handler.

* Add a `circonus_graph` resource.

* Fix a handful of bugs in the graph provider.

* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.

* Objects that have been deleted via the UI return a 404. Handle in Exists().

* Teach `circonus_graph`'s Stack set to accept nil values.

* Set `ForceNew: true` for a graph's name.

* Chase various API fixes required to make `circonus_graph` work as expected.

* Fix up the handling of sub-1 zoom resolutions for graphs.

* Add the `check_by_collector` out parameter to the `circonus_check` resource.

* Improve validation of line vs area graphs.  Fix graph_style.

* Fix up the `logarithmic` graph axis option.

* Resolve various trivial `go vet` issues.

* Add a stream_group out parameter.

* Remove incorrectly applied `Optional` attributes to the `circonus_account` resource.

* Remove various `Optional` attributes from the `circonus_collector` data source.

* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.

* Sync up with upstream vendor fixes for circonus_graph.

* Update the checksum value for the http check.

* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.

* Clean up tests to use a generic terraform regression testing account.

* Add support for MySQL to the `circonus_check` resource.

* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.

* Remove stale comment in types.

* Move the calls to `ResourceData`'s `SetId()` to be first in the list so
that no resources are lost in the event of a `panic()` (see the sketch
below).
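
A minimal sketch of that ordering, using a hypothetical resource and API object (not the provider's actual code):

```go
package circonus

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)

// exampleAPIObject stands in for whatever the remote API returns; it is
// purely illustrative.
type exampleAPIObject struct {
	CID  string
	Name string
}

// createExampleObject is a placeholder for the real API call.
func createExampleObject() (*exampleAPIObject, error) {
	return &exampleAPIObject{CID: "/example/1234", Name: "example"}, nil
}

// resourceExampleCreate records the ID in state before anything else, so a
// later error (or panic) cannot leave a created object untracked.
func resourceExampleCreate(d *schema.ResourceData, meta interface{}) error {
	obj, err := createExampleObject()
	if err != nil {
		return err
	}

	// Persist the ID first.
	d.SetId(obj.CID)

	if err := d.Set("name", obj.Name); err != nil {
		return fmt.Errorf("unable to store name in state: %v", err)
	}
	return nil
}
```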

* Remove `stateSet` from the `circonus_trigger` resource.

* Remove `stateSet` from the `circonus_stream_group` resource.

* Remove `schemaSet` from the `circonus_graph` resource.

* Remove `stateSet` from the `circonus_contact` resource.

* Remove `stateSet` from the `circonus_metric` resource.

* Remove `stateSet` from the `circonus_account` data source.

* Remove `stateSet` from the `circonus_collector` data source.

* Remove stray `stateSet` call from the `circonus_contact` resource.

This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.

* Remove `stateSet` from the `circonus_check` resource.

* Remove the `stateSet` helper function.

All call sites have been converted to return errors vs `panic()`'ing at
runtime.

* Remove a pile of unused functions and type definitions.

* Remove the last of the `attrReader` interface.

* Remove an unused `Sprintf` call.

* Update `circonus-gometrics` and remove unused files.

* Document what `convertToHelperSchema()` does.

Rename `castSchemaToTF` to `convertToHelperSchema`.

Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.

* Move descriptions into their respective source files.

* Remove all instances of `panic()`.

In the case of software bugs, log an error.  Never `panic()` and always
return a value.
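
A tiny illustrative sketch of the rule (hypothetical helper, not taken from the provider):

```go
package circonus

import (
	"fmt"
	"log"
)

// boolFromInterface applies the "never panic()" rule: a bad type is a
// provider bug, so it is logged and returned as an error rather than
// surfaced through a panic-prone type assertion.
func boolFromInterface(v interface{}) (bool, error) {
	b, ok := v.(bool)
	if !ok {
		log.Printf("[ERROR] expected bool, got %T", v)
		return false, fmt.Errorf("provider bug: expected bool, got %T", v)
	}
	return b, nil
}
```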

* Rename `stream_group` to `metric_cluster`.

* Rename triggers to rule sets

* Rename `stream` to `metric`.

* Chase the `stream` -> `metric` change into the docs.

* Remove some unused test functions.

* Add the now required `color` attribute for graphing a `metric_cluster`.

* Add a missing description to silence a warning.

* Add `id` as a selector for the account data source.

* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.

This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.

* Rename various values to match the Circonus docs.

* s/alarm/alert/g

* Ensure ruleset criteria cannot be empty.
2017-03-10 14:19:17 -06:00
clint shryock 4a34fb1d34 fix go vet issue 2017-03-10 11:25:58 -06:00
clint shryock 87c91f5bc8 provider/aws: anonymize tags in VPC data source test 2017-03-10 10:48:35 -06:00
Andy Lindeman fa18174713 Updates heroku-go to the latest revision (#12575) 2017-03-10 14:00:03 +02:00
Clint d24c761bbb more aws acc test fixes (#12568)
* provider/aws: fix TestAccAWSAutoScalingGroup_ALB_TargetGroups_ELBCapacity

* provider/aws: Randomize to fix TestAccDataSourceAwsVpc_basic
2017-03-09 16:01:18 -06:00
Clint 3fdeacdca7 helper/schema: Rename Timeout resource block to Timeouts (#12533)
helper/schema: Rename Timeout resource block to Timeouts

- Pluralize configuration argument name to better represent that there is
one block for many timeouts
- use a const for the configuration timeouts key
- update docs
2017-03-09 14:40:14 -06:00
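
For reference, a minimal sketch of the pluralized block on the Go side, assuming the `helper/schema` `ResourceTimeout` type and `DefaultTimeout` helper (illustrative resource, not taken from this change):

```go
package example

import (
	"time"

	"github.com/hashicorp/terraform/helper/schema"
)

// resourceExample sketches the renamed Timeouts block: a single block on
// the resource carries the per-operation timeouts.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(10 * time.Minute),
			Update: schema.DefaultTimeout(10 * time.Minute),
			Delete: schema.DefaultTimeout(30 * time.Minute),
		},
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```
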
James Bardin 343b96c9d7 Add MetaReset
Make sure the ArmClient gets a new StopContext for each test
2017-03-09 08:39:02 -05:00
John Engelman 8d35e3dc22 Closes #11054. Apply the set value for finish_upgrade. (#12545) 2017-03-09 01:32:28 +02:00
Seth Vargo d387860c19 Hash custom_data in state storage (#12214)
This also switches to helpers for b64.
2017-03-08 23:57:51 +02:00
clint shryock 3022eb6da0 provider/aws: fix acc test for data source 2017-03-08 15:43:57 -06:00
Paul Stack 10f080f315 provider/aws: Prevent aws_dms_replication_task panic (#12539)
Fixes: #12506

When a replication_task `cdc_start_time` was specified as an int, it was
causing a panic because the conversion to a Unix timestamp was expecting
a string (see the sketch after the test output).

```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAwsDmsReplicationTaskBasic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/08 22:55:29 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAwsDmsReplicationTaskBasic -timeout 120m
=== RUN   TestAccAwsDmsReplicationTaskBasic
--- PASS: TestAccAwsDmsReplicationTaskBasic (1089.77s)
PASS
ok  	github.com/hashicorp/terraform/builtin/providers/aws	1089.802s
```
2017-03-08 23:27:47 +02:00
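
A hedged sketch of the kind of defensive conversion involved (illustrative only; the actual fix may differ):

```go
package example

import (
	"fmt"
	"strconv"
	"time"
)

// cdcStartTime converts a configuration value into a time.Time without
// assuming its dynamic type, avoiding the panic described above.
func cdcStartTime(v interface{}) (time.Time, error) {
	switch t := v.(type) {
	case string:
		seconds, err := strconv.ParseInt(t, 10, 64)
		if err != nil {
			return time.Time{}, fmt.Errorf("invalid cdc_start_time %q: %s", t, err)
		}
		return time.Unix(seconds, 0), nil
	case int:
		return time.Unix(int64(t), 0), nil
	default:
		return time.Time{}, fmt.Errorf("unexpected type %T for cdc_start_time", v)
	}
}
```
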
clint shryock 7f87abae91 provider/aws: increase randomization in beanstalk app version tests 2017-03-08 15:25:41 -06:00
Clint c5b833b999 provider/aws: Default build_timeout to 60, matching docs, update tests (#12531) 2017-03-08 14:14:02 -06:00
Robert Rudduck bf4d6d5b1e provider/azurerm: Add support for managed availability sets. (#12532)
* Add support for managed availability sets.

* Formatting.
2017-03-08 21:30:11 +02:00
Paul Stack b5b53bc56a provider/aws: Error on trying to recreate an existing customer gateway (#12501)
Fixes: #7492

When we use the same IP address, BGP ASN, and VPN type as an existing
aws_customer_gateway, Terraform will take control of that gateway (not
import it!) and try to modify it. This could be very bad.

The AWS documentation warns that only one gateway with the same
parameters can be created, so Terraform now errors if a gateway with the
same parameters is about to be created.

```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCustomerGateway_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 18:40:39 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSCustomerGateway_ -timeout 120m
=== RUN   TestAccAWSCustomerGateway_importBasic
--- PASS: TestAccAWSCustomerGateway_importBasic (31.11s)
=== RUN   TestAccAWSCustomerGateway_basic
--- PASS: TestAccAWSCustomerGateway_basic (68.72s)
=== RUN   TestAccAWSCustomerGateway_similarAlreadyExists
--- PASS: TestAccAWSCustomerGateway_similarAlreadyExists (35.18s)
=== RUN   TestAccAWSCustomerGateway_disappears
--- PASS: TestAccAWSCustomerGateway_disappears (25.13s)
PASS
ok  	github.com/hashicorp/terraform/builtin/providers/aws	160.172s
```
2017-03-08 21:11:59 +02:00
Paul Stack 0b0a76a3d5 provider/aws: Add the IPV6 cidr block to the vpc datasource (#12529)
Fixes: #12526

```
% make testacc TEST=./builtin/providers/aws  TESTARGS='-run=TestAccDataSourceAwsVpc_ipv6Associated'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/08 17:42:13 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccDataSourceAwsVpc_ipv6Associated -timeout 120m
=== RUN   TestAccDataSourceAwsVpc_ipv6Associated
--- PASS: TestAccDataSourceAwsVpc_ipv6Associated (71.33s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/aws	71.366s
```
2017-03-08 21:08:37 +02:00
tpoindessous 9a2e9914de providers/google : google_compute_disk.go : Minor correction : "Deleting disk" message in Delete method (#12521)
* WIP: added a new resource type : google_compute_snapshot

* [WIP]: added a test acceptance for google_compute_snapshot

* Cleanup

* Minor correction : "Deleting disk" message in Delete method

* Error in merge action

* Error in merge action
2017-03-08 17:34:49 +02:00
Clint f6ac200aca provider/aws: Rename 'timeout' to 'build_timeout' for Codebuild (#12503) 2017-03-08 09:29:54 -06:00
Brandon Tosch c2a6625d0f Bug Fix: Terraform crashes during azurerm_container_service provisioning (#12516)
* Corrected referencing on sshKeys map

* changed tests to use East US due to resource availability
2017-03-08 17:25:51 +02:00
Clint 5d894e4ffd Fix up command and some go fmt issues (#12509) 2017-03-07 16:03:45 -06:00
Daniel Portella 88cdae91e6 provider/docker: added support for linux capabilities (#12045)
* added support for linux capabilities

Refs #11623

Added a capabilities block.
Added tests for it.
Added documentation for it.

My PC doesn't support memory swap, so it errors there.

```
$ make testacc TEST=./builtin/providers/docker TESTARGS='-run=TestAccDockerContainer_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/02/17 14:57:08 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/docker -v -run=TestAccDockerContainer_ -timeout 120m
=== RUN   TestAccDockerContainer_basic
--- PASS: TestAccDockerContainer_basic (44.50s)
=== RUN   TestAccDockerContainer_volume
--- PASS: TestAccDockerContainer_volume (40.73s)
=== RUN   TestAccDockerContainer_customized
--- FAIL: TestAccDockerContainer_customized (50.27s)
	testing.go:265: Step 0 error: Check failed: Check 2/2 error: Container has wrong memory swap setting: -1
	Please check that you machine supports memory swap (you can do that by running 'docker info' command).
=== RUN   TestAccDockerContainer_upload
--- PASS: TestAccDockerContainer_upload (38.56s)
FAIL
exit status 1
FAIL	github.com/hashicorp/terraform/builtin/providers/docker	174.070s
Makefile:48: recipe for target 'testacc' failed
make: *** [testacc] Error 1
```

* Documentation changes.

* added maxitems and rerun tests
2017-03-07 18:48:20 +02:00
Paul Stack b57e0bee2a provider/datadog: Update to datadog_monitor still used d.GetOk (#12497)
Fixes: #12494

The Create was changed to use the default and not d.GetOk; the Update
wasn't, which was causing issues when trying to update to a false value
(see the sketch after the test output).
```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 16:20:54 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v  -timeout 120m
=== RUN   TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (4.77s)
=== RUN   TestDatadogUser_import
--- PASS: TestDatadogUser_import (6.23s)
=== RUN   TestProvider
--- PASS: TestProvider (0.00s)
=== RUN   TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN   TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (3.83s)
=== RUN   TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (4.92s)
=== RUN   TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (5.88s)
=== RUN   TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.23s)
=== RUN   TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.73s)
=== RUN   TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.86s)
=== RUN   TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
=== RUN   TestAccDatadogUser_Updated
--- PASS: TestAccDatadogUser_Updated (6.05s)
PASS
ok  	github.com/hashicorp/terraform/builtin/providers/datadog	49.506s
```
2017-03-07 16:36:37 +02:00
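
A small sketch of the gotcha behind this fix (illustrative attribute name): `GetOk` reports `ok == false` for zero values such as `false`, so an update path built on it silently skips updates to `false`:

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

// notifyNoData contrasts the two read styles for a boolean attribute.
func notifyNoData(d *schema.ResourceData) bool {
	// Buggy update path: ok is false both when the attribute is unset and
	// when it is explicitly set to false, so a change to false is skipped.
	if v, ok := d.GetOk("notify_no_data"); ok {
		return v.(bool)
	}

	// Correct update path: read the configured value directly, so false is
	// applied like any other value.
	v, _ := d.Get("notify_no_data").(bool)
	return v
}
```
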
Matt Dainty 3d335e48ff Check instance is running before trying to attach (#12459)
This covers the scenario of an instance created by a spot request. Using
Terraform we only know the spot request is fulfilled, but the instance
can still be pending, which causes the attachment to fail (see the
sketch below).
2017-03-07 16:20:01 +02:00
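
A hedged sketch of waiting for the instance to leave `pending` before attaching, using `helper/resource.StateChangeConf` and the AWS SDK (assumed helper name; the actual change may differ):

```go
package example

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/resource"
)

// waitForInstanceRunning blocks until the instance reported by a fulfilled
// spot request is actually running, so an attachment does not race a
// still-pending instance.
func waitForInstanceRunning(conn *ec2.EC2, instanceID string, timeout time.Duration) error {
	stateConf := &resource.StateChangeConf{
		Pending: []string{"pending"},
		Target:  []string{"running"},
		Timeout: timeout,
		Refresh: func() (interface{}, string, error) {
			resp, err := conn.DescribeInstances(&ec2.DescribeInstancesInput{
				InstanceIds: []*string{aws.String(instanceID)},
			})
			if err != nil {
				return nil, "", err
			}
			if len(resp.Reservations) == 0 || len(resp.Reservations[0].Instances) == 0 {
				return nil, "", fmt.Errorf("instance %s not found", instanceID)
			}
			i := resp.Reservations[0].Instances[0]
			return i, aws.StringValue(i.State.Name), nil
		},
	}
	_, err := stateConf.WaitForState()
	return err
}
```
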
Jack Bruno 6c0caaf1dd Fix aws_dms_replication_task diff for json with whitespace. (#12380) 2017-03-07 16:00:02 +02:00
stack72 1dedf666df
provider/aws: Adding an acceptance test for ForceNew on
ecs_task_definition volumes
2017-03-07 15:53:00 +02:00
Pawel Burchard aa8de2f8cf
provider/aws: (#10587) Changing volumes in ECS task definition should force new revision. 2017-03-07 15:53:00 +02:00
stack72 61c101da29
Merge branch 'Originate-mb-fix-spot-fleet-request' 2017-03-07 15:28:24 +02:00
stack72 80e8418846
provider/aws: Change aws_spot_fleet_request tests to use the correct
hash values in test cases
2017-03-07 15:14:52 +02:00
Clint d2f728e6cd provider/aws: Only send iops when creating io1 devices. Fix docs (#12392) 2017-03-07 14:44:39 +02:00
Dana Hoffman 322044695b provider/google: initial commit for node pool resource (#11802)
provider/google: initial commit for node pool resource
2017-03-06 14:59:24 -08:00
stack72 2d0770c507
Merge branch 'mb-fix-spot-fleet-request' of https://github.com/Originate/terraform into Originate-mb-fix-spot-fleet-request 2017-03-06 16:03:16 +02:00
Máximo Cuadros b58709aa91 provider/ignition: migration from resources to data resources (#11851)
* provider/ignition: migration from resources to data resources

* website: provider/ignition documentation updated to data resources

* provider/ignition: backwards compatibility support for old resources
2017-03-06 14:23:04 +02:00
Brandon Clodius 22f69b1592 aws/provider: Fixes issue for aws_lb_ssl_negotiation_policy of already deleted ELB (#12360)
* Ensures elb exists before negotiation policy check; Fixes #11260

* Adds acceptance test case for missing elb

* Adds back https properties for test elb
2017-03-06 13:48:35 +02:00
Paul Stack bed8940953 provider/aws: Populate the iam_instance_profile uniqueId (#12449)
Fixes: #12430
2017-03-06 13:39:49 +02:00
yanndegat 09b1f4e1be provider/openstack: Add openstack_networking_network_v2 datasource (#12304) 2017-03-06 13:25:08 +02:00
Joe Topjian 3759e36784 provider/cobbler: Profile and System Fixes (#12452)
* vendor: Updating cobblerclient for Cobbler

* provider/cobbler: Fix Profile Repos

This commit fixes a bug where adding repos would result in an error.
This was due to the Cobbler API expecting a space-separated list of
repos rather than an array. The Cobbler Service will split the string
by space when used internally, but will always present the repos as a
string (see the sketch after this commit).

* provider/cobbler: System Interface Management Test

This commit adds a test to verify that the Management
parameter of System Interfaces works.
2017-03-06 13:19:30 +02:00
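
A tiny sketch of the conversion this describes (illustrative helper names): Terraform tracks repos as a list, while the Cobbler API expects a single space-separated string:

```go
package example

import "strings"

// reposToCobbler joins the repo names Terraform tracks as a list into the
// single space-separated string the Cobbler API expects.
func reposToCobbler(repos []string) string {
	return strings.Join(repos, " ")
}

// reposFromCobbler reverses the conversion when reading state back.
func reposFromCobbler(s string) []string {
	return strings.Fields(s)
}
```
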
Pasha Palangpour ce633f2321 provider/ns1: Add notify list resource (#12373)
* Allow for local development with ns1 provider.

* Adds first implementation of ns1 notification list resource.

* NS1 record.use_client_subnet defaults to true; added a test for the field.

* Adds more test cases for monitoring jobs.

* Adds webhook/datafeed notifier types and acctests for notifylists.

* Adds docs for notifylists resource.

* Updates ns1-go rest client via govendor

* Fix typos in record docs
2017-03-05 16:21:06 +02:00
Joe Topjian 120e3af178 provider/openstack: Toggle Creation of Default Security Group Rules (#12119)
This commit modifies the behavior implemented in #9799 by allowing the
user to toggle the creation of the default security group rules.
2017-03-05 16:18:00 +02:00
Radek Simko 7d6e2837e1 mysql: Avoid crash on un-interpolated provider cfg (#12391) 2017-03-05 15:58:15 +02:00
Joe Topjian 4ba145a3ac provider/openstack: Handle cases where volumes are disabled (#12374)
This commit handles the case where the volume API extensions are
disabled.
2017-03-05 15:53:09 +02:00
Andre Silva d50f2aca6f provider/statuscake: use default status code list when updating test (#12375) 2017-03-05 15:44:22 +02:00
David Harris 01f995fed5 provider/aws: Return errors from Elastic Beanstalk (#12425)
In the event that an unexpected state is returned from
`environmentStateRefreshFunc`, errors in the Elastic Beanstalk console
will not be returned to the user.
2017-03-05 15:28:37 +02:00
Joe Topjian 959c197dc3 provider/openstack: rename image data source files (#12439) 2017-03-04 20:24:19 +02:00
Maxime Bury 0af10dec41 Fix spurious user_data diffs 2017-03-03 17:58:18 -08:00
Maxime Bury 93c4730de7 Properly handle 'vpc_security_group_ids', drop phantom 'security_groups' 2017-03-03 17:25:15 -08:00
Maxime Bury 4eba77eaee Default 'ebs_optimized' and 'monitoring' to false 2017-03-03 17:23:15 -08:00
Paddy 6de8e25b16 Merge pull request #12434 from hashicorp/paddy_fix_gcp_storage_region_test
provider/google: add location to storage tests.
2017-03-03 16:58:18 -08:00
Paddy 6531ef57e1 provider/google: log the op name in sql op errors.
To aid in tracking down the error that's causing
TestAccGoogleSqlDatabaseInstance_basic to fail (it's claiming an op
can't be found?), I've added the op name (which is unique) to the error
output for op errors.
2017-03-03 16:45:25 -08:00
Paddy a318cd9d61 provider/google: add location to storage tests.
Add location to storage tests that need it, which fixes the failing
TestAccStorageStorageClass test.
2017-03-03 15:51:36 -08:00