Commit Graph

57 Commits

Author SHA1 Message Date
Pam Selle e39e0e3d04 Remove vendor provisioners and add fmt Make target
Remove the chef, habitat, puppet, and salt-masterless provisioners,
following their deprecation. Update the documentation for these
provisioners to clarify that they have been removed from later versions
of Terraform. Add the fmt Make target back and update the fmtcheck
script for correctness.
2020-11-17 11:22:03 -05:00
Tim Sharpe 615110e13e provisioner: new Puppet provisioner (#18851)
* Basic Puppet provisioner

* (fixup) fix snake_case use in Bolt

* (fixup) Remove unused ValidateFunc

* (fixup) Check bolt result status

* (lint) go fmt

* Requested changes

* Remove PE autodetection

* Apply suggestions from @svanharmelen

Co-Authored-By: rodjek <tim@sharpe.id.au>

* Tag all JSON fields in bolt output

* Defer comm.Disconnect() as suggested

* Make bolt timeout configurable

* Update builtin/provisioners/puppet/resource_provisioner.go

Co-Authored-By: rodjek <tim@sharpe.id.au>

* Make extension_requests and custom_attributes configurable
2019-06-10 15:31:21 -04:00
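As a rough illustration of the Puppet provisioner introduced above, here is a minimal usage sketch. The argument names (`server`, `server_user`, `use_sudo`, `bolt_timeout`, `extension_requests`, `custom_attributes`) are assumptions drawn from the commit bullets, not a verified schema, and the AMI/connection details are placeholders.

```
# Hypothetical usage sketch for the Puppet provisioner described above.
# Argument names are assumptions based on the commit bullets.
resource "aws_instance" "agent" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  connection {
    type = "ssh"
    user = "ubuntu"
    host = "${self.public_ip}"
  }

  provisioner "puppet" {
    server       = "puppet.example.com"   # Puppet server the node registers against
    server_user  = "ubuntu"               # user for running Bolt tasks on the server
    use_sudo     = true
    bolt_timeout = "10m"                  # made configurable in this PR

    # Both made configurable in this PR.
    extension_requests = {
      pp_role = "webserver"
    }
    custom_attributes = {
      challenge_password = "example-password"
    }
  }
}
```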
Martin Atkins a6008b8a48
v0.11.2 2018-01-09 23:13:33 +00:00
James Bardin 5a9e0cf763 create a new InternalProviders test
Remove the legacy InternalProviders global.

The terraform provider is the only one run internally.
2018-01-05 10:59:30 -05:00
Nolan Davidson 653db95df7 Initial implementation of a habitat provisioner
First pass at loading the config data using the TF schema.

Signed-off-by: Nolan Davidson <ndavidson@chef.io>
2017-12-07 16:29:30 -08:00
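A minimal sketch of what the Habitat provisioner taking shape in this commit might look like in use; the argument names (`use_sudo`, `service_type`, the `service` block) are assumptions, not a verified schema.

```
# Hypothetical sketch of the Habitat provisioner described above;
# argument names are assumptions.
resource "aws_instance" "hab_node" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  provisioner "habitat" {
    use_sudo     = true
    service_type = "systemd"

    service {
      name = "core/nginx"
    }
  }
}
```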
Sevag Hanssian 867760ed56 Add salt-masterless provisioner 2017-08-07 10:00:29 -04:00
James Bardin 77a32f3df0 remove "core" distinction
Since there is little left that isn't core, remove the distinction for
now to reduce confusion; a "core" binary will mostly work except for
provisioners.
2017-06-12 13:43:54 -04:00
James Bardin 81ac0ed204 re-generate plugin list 2017-06-12 13:42:07 -04:00
Vladislav Rassokhin df4342bc3d Regenerate plugin list since provisioners were changed in previous commits 2017-05-19 20:54:08 +02:00
Paul Stack 055c18e302 core/provider-split: Split out the Oracle OPC provider to new structure (#14362)
* core/providersplit: Split OPC Provider to separate repo

As we march towards Terraform 0.10.0, we are going to start building the
Terraform providers as separate binaries - this will allow us to
release them continually. Before we get to 0.10.0, we need to be able to
continue building providers in the same manner; therefore, we have
hardcoded the path of the provider in the generate-plugins.go file.

The interim solution will require us to vendor the opc provider and any
child dependencies, but when we get to 0.10.0, we will no longer have to
do this - the core will auto-download the plugin binary. The plugin
package will have its own dependencies vendored as well.

* core/providersplit: Removing the builtin version of OPC provider

* core/providersplit: Vendoring the OPC plugin

* core/providersplit: update internal plugin list

* core/providersplit: remove unused govendor item
2017-05-16 19:53:25 +03:00
yanndegat 21d8125d5c provider/ovh: new provider (#12669)
* vendor: add go-ovh

* provider/ovh: new provider

* provider/ovh: Adding PublicCloud User Resource

* provider/ovh: Adding VRack Attachment Resource

* provider/ovh: Adding PublicCloud Network Resource

* provider/ovh: Adding PublicCloud Subnet Resource

* provider/ovh: Adding PublicCloud Regions Datasource

* provider/ovh: Adding PublicCloud Region Datasource

* provider/ovh: Fix Acctests using project's regions
2017-05-16 17:26:43 +03:00
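A sketch of configuring the OVH provider added above. The credential arguments shown (`endpoint`, `application_key`, `application_secret`, `consumer_key`) follow the usual OVH API conventions and are assumptions here, as is the resource name used for illustration.

```
# Sketch of the new OVH provider; argument and resource names are assumptions.
provider "ovh" {
  endpoint           = "ovh-eu"
  application_key    = "your-app-key"
  application_secret = "your-app-secret"
  consumer_key       = "your-consumer-key"
}

# Illustrative only: one of the PublicCloud resources added in this PR.
resource "ovh_publiccloud_private_network" "net" {
  project_id = "your-project-id"
  name       = "private-net"
  regions    = ["GRA1", "BHS1"]
}
```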
Dmitrii Korotovskii ace0456d58 http provider and http request data source 2017-05-08 17:37:48 -07:00
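A minimal sketch of the http data source added in the commit above: fetch a document over HTTP and use its body elsewhere in the configuration. The `url` argument and `body` attribute match the data source as it later shipped, but treat them as assumptions in the context of this initial commit.

```
# Minimal sketch: fetch a document over HTTP and expose its body.
data "http" "checkip" {
  url = "https://checkip.amazonaws.com/"
}

output "my_ip" {
  value = "${data.http.checkip.body}"
}
```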
stack72 898ac02854
Merge branch 'feature/gitlab_provider' of https://github.com/richardc/terraform into richardc-feature/gitlab_provider 2017-04-27 05:08:12 +12:00
Richard Clamp 631b0b865c provider/gitlab: add gitlab provider and `gitlab_project` resource
Here we add a basic provider with a single resource type.

It's copied heavily from the `github` provider and `github_repository`
resource, as there is some overlap in those types/APIs.

~~~
resource "gitlab_project" "test1" {
  name = "test1"
  visibility_level = "public"
}
~~~

We implement in terms of the
[go-gitlab](https://github.com/xanzy/go-gitlab) library, which provides
a wrapper around the [gitlab api](https://docs.gitlab.com/ee/api/).

We have been a little selective in the properties we surface for the
project resource, as not all properties are very instructive.
Notable is the removal of the `public` bool, as `visibility_level`
takes precedence if both are supplied, which leads to confusing
interactions if they disagree.
2017-04-24 11:38:20 +01:00
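To round out the `gitlab_project` snippet in the commit message above, a sketch of the provider block that would accompany it; the `token` and `base_url` argument names are assumptions about the provider's configuration.

```
# Sketch of the provider configuration pairing with the gitlab_project
# example above; argument names are assumptions.
provider "gitlab" {
  token    = "your-gitlab-token"
  base_url = "https://gitlab.example.com/api/v3/"  # omit when targeting gitlab.com
}
```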
Martin Atkins b1763e262a Restore stringer-generated files back to new version
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old one while merging a long-running feature branch.

This restores us back to the new format again.
2017-04-21 14:49:18 -07:00
Jasmin Gacic 61499cfcf0 Provider Oneandone (#13633)
* Terraform Provider 1&1

* Addressing pull request remarks

* Fixed imports

* Fixing remarks

* Test optimization
2017-04-21 17:19:10 +03:00
Jake Champlin 35388cbc31 Merge pull request #13468 from hashicorp/f-oracle-compute
provider/opc: Add Oracle Compute Provider
2017-04-20 14:52:39 -04:00
Quentin Machu bf8d932d23
provider/local: Implement a new local_file resource
This commit adds the ability to provision files locally.
This is useful for cases where Terraform generates assets
such as TLS certificates or templated documents that need
to be saved locally.

- While output variables can be used to return values to
the user, they are not well suited to large content or
to cases where many of these are generated, nor is it practical for
operators to manually save them on disk.
- While `local-exec` could be used with an `echo`, this
provider works across platforms and does not require any
convoluted escaping.
2017-04-13 14:57:29 -07:00
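A minimal sketch of the resource this commit introduces: write generated content to the local filesystem rather than passing it through outputs. The `content` and `filename` arguments match the resource as it later shipped, but take them as assumptions here.

```
# Minimal sketch: persist generated content (e.g. a rendered template
# or certificate) to disk instead of surfacing it through outputs.
resource "local_file" "example" {
  content  = "generated content, e.g. a rendered template or certificate"
  filename = "${path.module}/generated/example.txt"
}
```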
Jake Champlin edc524df55
provider/opc: Update OPC Provider
Updates the OPC provider to a fully working version.
2017-04-03 18:24:57 -04:00
Radek Simko 4448e45678 Merge pull request #12372 from hashicorp/f-kubernetes
kubernetes: Add provider + namespace resource
2017-03-16 07:18:39 +00:00
Radek Simko f1db0fcf9b
kubernetes: Add provider + namespace resource 2017-03-13 21:19:17 +00:00
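A minimal sketch of the single resource added alongside the provider above; the `metadata`/`name` structure mirrors the Kubernetes API, and the `config_path` provider argument is an assumption about kubeconfig-based authentication.

```
# Sketch: configure the new kubernetes provider and create a namespace.
provider "kubernetes" {
  config_path = "~/.kube/config"   # assumption: kubeconfig-based auth
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "terraform-example"
  }
}
```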
Sean Chittenden 17fb98afa2 Circonus Provider (#12338)
* Begin stubbing out the Circonus provider.

* Remove all references to `reverse:secret_key`.

This value is dynamically set by the service and unused by Terraform.

* Update the `circonus_check` resource.

Still a WIP.

* Add docs for the `circonus_check` resource.

Commit miss, this should have been included in the last commit.

* "Fix" serializing check tags

I still need to figure out how I can make them order agnostic w/o using
a TypeSet.  I'm worried that's what I'm going to have to do.

* Spike a quick circonus_broker data source.

* Convert tags to a Set so the order does not matter.

* Add a `circonus_account` data source.

* Correctly spell account.

Pointed out by: @postwait

* Add the `circonus_contact_group` resource.

* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.

* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.

Change made with consent from Circonus to reduce confusion (@postwait, @maier, and several others).

* Use upstream constants where available.

* Import the latest circonus-gometrics.

* Move to using a Set of collectors vs a list attached to a single attribute.

* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.

* Inject a tag automatically.  Update gometrics.

* Checkpoint `circonus_metric` resource.

* Enable provider-level auto-tagging.  This is disabled by default.

* Rearrange metric.  This is an experimental "style" of a provider.  We'll see.

That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet?  That.  That exact
feeling of semi-confidence while being alone in the wilderness.  Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.

We'll know in another resource or two if this was a horrible mistake or
not.

* Begin moving `resource_circonus_check` over to the new world order/structure:

Much of this is WIP and incomplete, but here is the new supported
structure:

```
variable "used_metric_name" {
  default = "_usage`0`_used"
}

resource "circonus_check" "usage" {
  # collectors = ["${var.collectors}"]
  collector {
    id = "${var.collectors[0]}"
  }

  name       = "${var.check_name}"
  notes      = "${var.notes}"

  json {
    url = "https://${var.target}/account/current"

    http_headers = {
      "Accept"                = "application/json"
      "X-Circonus-App-Name"   = "TerraformCheck"
      "X-Circonus-Auth-Token" = "${var.api_token}"
    }
  }

  stream {
    name = "${circonus_metric.used.name}"
    tags = "${circonus_metric.used.tags}"
    type = "${circonus_metric.used.type}"
  }

  tags = {
    source = "circonus"
  }
}

resource "circonus_metric" "used" {
  name = "${var.used_metric_name}"

  tags = {
    source = "circonus"
  }

  type = "numeric"
}
```

* Document the `circonus_metric` resource.

* Updated `circonus_check` docs.

* If a port was present, automatically set it in the Config.

* Alpha sort the check parameters now that they've been renamed.

* Fix a handful of panics as a result of the schema changing.

* Move back to a `TypeSet` for tags.  After a stint with `TypeMap`, move
back to `TypeSet`.

A set of strings seems to match the API the best.  The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things.  For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality because assigning two
values to the same category is common.  And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back).  I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress.  In this case, simple is good.

While here bring some cleanups to _Metric since that was my initial
testing target.

* Rename `providerConfig` to `_ProviderConfig`

* Checkpoint the `json` check type.

* Fix a few residual issues re: missing descriptions.

* Rename `validateRegexp` to `_ValidateRegexp`

* Use tags as real sets, not just a slice of strings.

* Move the DiffSuppressFunc for tags down to the Elem.

* Fix up unit tests to chase the updated, default hasher function being used.

* Remove `Computed` attribute from `TypeSet` objects.

This fixes a pile of issues re: update that I was having.

* Rename functions.

`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`

* Various small cleanups and comments rolled into a single commit.

* Add a `postgresql` check type for the `circonus_check` resource.

* Rename various validator functions to be _CapitalCase vs capitalCase.

* Err... finish the validator renames.

* Add `GetFloat64()` support.

* Add `icmp_ping` check type support.

* Catch up to the _API*Attr renames.

Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.

* Clarify when the `target` attribute is required for the `postgresql`
check type.

* Correctly pull the metric ID attribute from the right location.

* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")

* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.

* Add support for the `http` check type.

* `s/SSL/TLS/g`

* Add support for `tcp` check types.

* Enumerate the available metrics that are supported for each check type.

* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.

* Add a `circonus_trigger` resource (a.k.a Circonus Ruleset).

* Rename a handful of functions to make it clear in the function name the
direction of flow for information moving through the provider.

TL;DR: Replace `parse` and `read` with "foo to bar"-like names.

* Fix the attribute name used in a validator.  Absent != After.

* Set the minimum `absent` predicate to 70s per testing.

* Fix the regression tests for circonus_trigger now that absent has a 70s min

* Fix up the `tcp` check to require a `host` attribute.

Fix tests.  It's clear I didn't run these before committing/pushing the
`tcp` check last time.

* Fix `circonus_check` for `cloudwatch` checks.

* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.

grep(1)ability of code++

* Slack buttons, though integers, are string encoded.

* Fix updates for `circonus_contact`.

* Fix the out parameters for contact groups.

* Move to using `_CastSchemaToTF()` where appropriate.

* Fix circonus_contact_group.  Updates work as expected now.

* Use `_StateSet()` in place of `d.Set()` everywhere.

* Make a quick pass over the collector datasource to modernize its style

* Quick pass for items identified by `golint`.

* Fix up collectors

* Fix the `json` check type.

Reconcile possible sources of drift.  Update now works as expected.

* Normalize trigger durations to seconds.

* Improve the robustness of the state handling for the `circonus_contact_group` resource.

* I'm torn on this, but sort the contact groups in the notify list.

This does mean that if the first contact group in the list has a higher
lexical sort order the plan won't converge until the offending resource
is tainted and recreated.  But there's also some sorting happening
elsewhere, so.... sort and taint for now and this will need to be
revisited in the future.

* Add support for the `httptrap` check type.

* Remove empty units from the state file.

* Metric clusters can return a 404.  Detect this accordingly in its
respective Exists handler.

* Add a `circonus_graph` resource.

* Fix a handful of bugs in the graph provider.

* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.

* Objects that have been deleted via the UI return a 404. Handle in Exists().

* Teach `circonus_graph`'s Stack set to accept nil values.

* Set `ForceNew: true` for a graph's name.

* Chase various API fixes required to make `circonus_graph` work as expected.

* Fix up the handling of sub-1 zoom resolutions for graphs.

* Add the `check_by_collector` out parameter to the `circonus_check` resource.

* Improve validation of line vs area graphs.  Fix graph_style.

* Fix up the `logarithmic` graph axis option.

* Resolve various trivial `go vet` issues.

* Add a stream_group out parameter.

* Remove incorrectly applied `Optional` attributes from the `circonus_account` resource.

* Remove various `Optional` attributes from the `circonus_collector` data source.

* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.

* Sync up with upstream vendor fixes for circonus_graph.

* Update the checksum value for the http check.

* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.

* Clean up tests to use a generic terraform regression testing account.

* Add support for MySQL in the `circonus_check` resource.

* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.

* Remove stale comment in types.

* Move the calls to `ResourceData`'s `SetId()` calls to be first in the
list so that no resources are lost in the event of a `panic()`.

* Remove `stateSet` from the `circonus_trigger` resource.

* Remove `stateSet` from the `circonus_stream_group` resource.

* Remove `schemaSet` from the `circonus_graph` resource.

* Remove `stateSet` from the `circonus_contact` resource.

* Remove `stateSet` from the `circonus_metric` resource.

* Remove `stateSet` from the `circonus_account` data source.

* Remove `stateSet` from the `circonus_collector` data source.

* Remove stray `stateSet` call from the `circonus_contact` resource.

This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.

* Remove `stateSet` from the `circonus_check` resource.

* Remove the `stateSet` helper function.

All call sites have been converted to return errors vs `panic()`'ing at
runtime.

* Remove a pile of unused functions and type definitions.

* Remove the last of the `attrReader` interface.

* Remove an unused `Sprintf` call.

* Update `circonus-gometrics` and remove unused files.

* Document what `convertToHelperSchema()` does.

Rename `castSchemaToTF` to `convertToHelperSchema`.

Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.

* Move descriptions into their respective source files.

* Remove all instances of `panic()`.

In the case of software bugs, log an error.  Never `panic()` and always
return a value.

* Rename `stream_group` to `metric_cluster`.

* Rename triggers to rule sets

* Rename `stream` to `metric`.

* Chase the `stream` -> `metric` change into the docs.

* Remove some unused test functions.

* Add the now required `color` attribute for graphing a `metric_cluster`.

* Add a missing description to silence a warning.

* Add `id` as a selector for the account data source.

* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.

This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.

* Rename various values to match the Circonus docs.

* s/alarm/alert/g

* Ensure ruleset criteria can not be empty.
2017-03-10 14:19:17 -06:00
Clint 5d894e4ffd Fix up command and some go fmt issues (#12509) 2017-03-07 16:03:45 -06:00
Paul Stack b57e0bee2a provider/datadog: Update to datadog_monitor still used d.GetOk (#12497)
Fixes: #12494

Create was changed to use the default value rather than d.GetOk, but
Update wasn't; this was causing issues when trying to update a value
to false.
```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 16:20:54 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v  -timeout 120m
=== RUN   TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (4.77s)
=== RUN   TestDatadogUser_import
--- PASS: TestDatadogUser_import (6.23s)
=== RUN   TestProvider
--- PASS: TestProvider (0.00s)
=== RUN   TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN   TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (3.83s)
=== RUN   TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (4.92s)
=== RUN   TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (5.88s)
=== RUN   TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.23s)
=== RUN   TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.73s)
=== RUN   TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.86s)
=== RUN   TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
=== RUN   TestAccDatadogUser_Updated
--- PASS: TestAccDatadogUser_Updated (6.05s)
PASS
ok  	github.com/hashicorp/terraform/builtin/providers/datadog	49.506s
```
2017-03-07 16:36:37 +02:00
Liran Polak f37800ae62 New Provider: Spotinst (#5001)
* providers/spotinst: Add support for Spotinst resources

* providers/spotinst: Fix merge conflict - layouts/docs.erb

* docs/providers/spotinst: Fix the resource description field

* providers/spotinst: Fix the acceptance tests

* providers/spotinst: Mark the device_index as a required field

* providers/spotinst: Change the associate_public_ip_address field to TypeBool

* docs/providers/spotinst: Update the description of the adjustment field

* providers/spotinst: Rename IamRole to IamInstanceProfile to make it more compatible with the AWS provider

* docs/providers/spotinst: Rename iam_role to iam_instance_profile

* providers/spotinst: Deprecate the iam_role attribute

* providers/spotinst: Fix a misspelled var (IamRole)

* providers/spotinst: Fix possible null pointer exception related to "iam_instance_profile"

* docs/providers/spotinst: Add "load_balancer_names" missing description

* providers/spotinst: New resource "spotinst_subscription" added

* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_aws_group"

* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_subscription"

* providers/spotinst: Mark spotinst_subscription as deleted in destroy

* providers/spotinst: Add support for custom event format in spotinst_subscription

* providers/spotinst: Disable the destroy step of spotinst_subscription

* providers/spotinst: Add support for update subscriptions

* providers/spotinst: Merge fixed conflict - layouts/docs.erb

* providers/spotinst: Vendor dependencies

* providers/spotinst: Return a detailed error message

* provider/spotinst: Update the plugin list

* providers/spotinst: Vendor dependencies using govendor

* providers/spotinst: New resource "spotinst_healthcheck" added

* providers/spotinst: Update the Spotinst SDK

* providers/spotinst: Comment out unnecessary log.Printf

* providers/spotinst: Fix the acceptance tests

* providers/spotinst: Gofmt fixes

* providers/spotinst: Use multiple functions to expand each block

* providers/spotinst: Allow ondemand_count to be zero

* providers/spotinst: Change security_group_ids from TypeSet to TypeList

* providers/spotinst: Remove unnecessary `ForceNew` fields

* providers/spotinst: Update the Spotinst SDK

* providers/spotinst: Add support for capacity unit

* providers/spotinst: Add support for EBS volume pool

* providers/spotinst: Delete health check

* providers/spotinst: Allow to set multiple availability zones

* providers/spotinst: Gofmt

* providers/spotinst: Omit empty strings from the load_balancer_names field

* providers/spotinst: Update the Spotinst SDK to v1.1.9

* providers/spotinst: Add support for new strategy parameters

* providers/spotinst: Update the Spotinst SDK to v1.2.0

* providers/spotinst: Add support for Kubernetes integration

* providers/spotinst: Fix merge conflict - vendor/vendor.json

* providers/spotinst: Update the Spotinst SDK to v1.2.1

* providers/spotinst: Add support for Application Load Balancers

* providers/spotinst: Do not allow to set ondemand_count to 0

* providers/spotinst: Update the Spotinst SDK to v1.2.2

* providers/spotinst: Add support for scaling policy operators

* providers/spotinst: Add dimensions to spotinst_aws_group tests

* providers/spotinst: Allow both ARN and name for IAM instance profiles

* providers/spotinst: Allow ondemand_count=0

* providers/spotinst: Split out the set funcs into flatten style funcs

* providers/spotinst: Update the Spotinst SDK to v1.2.3

* providers/spotinst: Add support for EBS optimized flag

* providers/spotinst: Update the Spotinst SDK to v2.0.0

* providers/spotinst: Use stringutil.Stringify for debugging

* providers/spotinst: Update the Spotinst SDK to v2.0.1

* providers/spotinst: Key pair is now optional

* providers/spotinst: Make sure we do not nullify signals on strategy update

* providers/spotinst: Hash both Strategy and EBS Block Device

* providers/spotinst: Hash AWS load balancer

* providers/spotinst: Update the Spotinst SDK to v2.0.2

* providers/spotinst: Verify namespace exists before appending policy

* providers/spotinst: Image ID will be in a separate block from now on, so as to allow ignoring changes only on the image ID. This change is backwards compatible.

* providers/spotinst: User data is decoded when returned from the Spotinst API, so that Terraform compares the two states properly and does not update without cause.
2017-02-22 22:57:16 +02:00
clint shryock be6ae20ac1 Merge branch 'pr-8299'
* pr-8299:
  Patch up website docs
  provider/dns: DNS dynamic updates (RFC 2136)
  vendor: Capture new dependency miekg-dns
2017-02-17 17:02:37 -06:00
Kazumichi Yamamoto cd7f69ab11 New provider arukas (#11171)
* Add an Arukas provider

* Add dependencies for the Arukas provider

* Add documents for the Arukas
2017-02-13 19:11:30 +00:00
Roberto Jung Drebes e3934c23c8 provider/dns: DNS dynamic updates (RFC 2136) 2017-02-10 21:38:26 +01:00
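A sketch of what RFC 2136 dynamic updates look like from the configuration side of the provider added above: point the provider at a DNS server that accepts TSIG-signed updates, then manage records declaratively. The `update` block arguments and the `dns_a_record_set` resource name are assumptions about this provider's schema.

```
# Sketch of RFC 2136 dynamic updates; argument and resource names are assumptions.
provider "dns" {
  update {
    server        = "ns1.example.com"
    key_name      = "tsig-key."
    key_algorithm = "hmac-md5"
    key_secret    = "c2VjcmV0LWtleS1tYXRlcmlhbA=="
  }
}

resource "dns_a_record_set" "www" {
  zone      = "example.com."
  name      = "www"
  addresses = ["192.0.2.10", "192.0.2.11"]
  ttl       = 300
}
```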
Mitchell Hashimoto 83cc54bfbe
updated generate output 2017-01-26 15:11:47 -08:00
Paul Stack 987b910828 Ns1 provider (#10782)
* vendor: update gopkg.in/ns1/ns1-go.v2

* provider/ns1: Port the ns1 provider to Terraform core

* docs/ns1: Document the ns1 provider

* ns1: rename remaining nsone -> ns1 (#10805)

* Ns1 provider (#11300)

* provider/ns1: Flesh out support for meta structs.

Following the structure outlined by @pashap.

Using reflection to reduce copy/paste.

Putting metas inside single-item lists.  This is clunky, but I couldn't
figure out how else to have a nested struct.  Maybe the Terraform people
know a better way?

Inside the meta struct, all fields are always written to the state; I
can't figure out how to omit fields that aren't used.  This is not just
verbose, it actually causes issues (because you can't have both "up" and
"up_feed" set).

Also some minor other changes:
- Add "terraform" import support to records and zones.
- Create helper class StringEnum.

* provider/ns1: Make fmt

* provider/ns1: Remove stubbed out RecordRead (used for testing metadata change).

* provider/ns1: Need to get interface that m contains from Ptr Value with Elem()

* provider/ns1: Use empty string to indicate no feed given.

* provider/ns1: Remove old record.regions fields.

* provider/ns1: Removes redundant testAccCheckRecordState

* provider/ns1: Moves account permissions logic to permissions.go

* provider/ns1: Adds tests for team resource.

* provider/ns1: Move remaining permissions logic to permissions.go

* ns1/provider: Adds datasource.config

* provider/ns1: Small clean up of datafeed resource tests

* provider/ns1: removes testAccCheckZoneState in favor of explicit name check

* provider/ns1: More renaming of nsone -> ns1

* provider/ns1: Comment out metadata for the moment.

* Ns1 provider (#11347)

* Fix the removal of empty containers from a flatmap

Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration. There was also an off-by-one error in
the prefix check which could match the incorrect keys.

* provider/ns1: Adds ns1 go client through govendor.

* provider/ns1: Removes unused debug line

* docs/ns1: Adds docs around apikey/datasource/datafeed/team/user/record.

* provider/ns1: Gets go vet green
2017-01-23 21:41:07 +00:00
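The NS1 commit above discusses modelling record metadata as single-item lists; a sketch of the zone/record shape being built, with the provider, resource, and argument names treated as assumptions.

```
# Sketch of the ns1 resources discussed above; names are assumptions.
provider "ns1" {
  apikey = "your-ns1-api-key"
}

resource "ns1_zone" "example" {
  zone = "terraform-example.io"
}

resource "ns1_record" "www" {
  zone   = "${ns1_zone.example.zone}"
  domain = "www.${ns1_zone.example.zone}"
  type   = "A"

  answers {
    answer = "192.0.2.10"
  }
}
```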
Paul Stack 798bf60ef9 command/plugin_list: Adding Alicloud to the plugin list file (#11292) 2017-01-19 17:01:14 +00:00
Jasmin Gacic 7e9c850936 Terraform ProfitBricks Builder (#7943)
* Terraform ProfitBricks Builder

* make fmt

* Merge remote-tracking branch 'upstream/master' into terraform-provider-profitbricks

# Conflicts:
#	command/internal_plugin_list.go

* Addressing PR remarks

* Removed importers
2017-01-18 14:43:09 +00:00
clint shryock 87bb691800 Revert "New provider arukas (#10862)"
This reverts commit 9176bd4861.
This provider includes a dependency that, at the time of writing, requires a
*nix system, and will not build on Windows.
2017-01-11 09:04:32 -06:00
Kazumichi Yamamoto 9176bd4861 New provider arukas (#10862)
* Add an Arukas provider

* Add dependencies for the Arukas provider

* Add documents for the Arukas
2017-01-09 17:14:33 +00:00
Tom Harvey 05d00a93ce New Provider: OpsGenie (#11012)
* Importing the OpsGenie SDK

* Adding the goreq dependency

* Initial commit of the OpsGenie / User provider

* Refactoring to return a single client

* Adding an import test / fixing a copy/paste error

* Adding support for OpsGenie docs

* Scaffolding the user documentation for OpsGenie

* Adding a TODO

* Adding the User data source

* Documentation for OpsGenie

* Adding OpsGenie to the internal plugin list

* Adding support for Teams

* Documentation for OpsGenie Team's

* Validation for Teams

* Removing Description for now

* Optional fields for a User: Locale/Timezone

* Removing an implemented TODO

* Running makefmt

* Downloading about half the internet

Someone witty might simply sign this commit with "npm install"

* Adding validation to the user object

* Fixing the docs

* Adding a test creating multiple users

* Prompting for the API Key if it's not specified

* Added a test for multiple users / requested changes

* Fixing the linting
2017-01-05 19:25:04 +00:00
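A sketch of the user and team resources described in the OpsGenie bullets above; the provider and resource argument names are assumptions, not a verified schema.

```
# Sketch of the OpsGenie resources added above; argument names are assumptions.
provider "opsgenie" {
  api_key = "your-opsgenie-api-key"
}

resource "opsgenie_user" "example" {
  username  = "user@example.com"
  full_name = "Example User"
  role      = "User"
}

resource "opsgenie_team" "oncall" {
  name = "oncall"

  member {
    username = "${opsgenie_user.example.username}"
    role     = "admin"
  }
}
```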
Máximo Cuadros 85f0fba9f9 Ignition provider (#6189)
* providers: ignition, basic config, version and config

* providers: ignition, user and passwd example, general cache implementation

* vendor: Capture new dependency upstream-pkg

* providers: ignition ignition_user

* providers: ignition ignition_disk, ignition_group and ignition_raid

* providers: ignition_file and ignition_filesystem

* providers: ignition_systemd_unit and ignition_networkd_unit

* providers: ignition_config improvements

* vendor: Capture new dependency upstream-pkg

* providers: ignition main

* documentation: ignition provider

* providers: ignition minor changes

* providers: ignition, fix test

* fixing tests and latest versions
2017-01-03 11:29:14 +00:00
Paul Tyng 3dfe5a47f1
provider/newrelic: Add new provider for New Relic 2016-12-15 19:14:59 +00:00
Len Smith 015e96d0dd Initial check in for Icinga2 Provider/Resource (#8306)
* Initial check-in for the PR

* Added an argument to provider to allow control over whether or not TLS Certs will skip verification. Controllable via provider or env variable being set

* Initial check-in to use refactored module

* Check in a very minimal create/delete host test, which works and validates basic host creation and deletion

* Check in with support for creating hosts with variables working

* Checking in work to date

* Remove code that causes travis CI to fail while I debug

* Adjust create to accept multiple values

* Back on track. Working basic tests. go-icinga2-api needs more tests too

* Squashing

* Back on track. Working basic tests. go-icinga2-api needs more tests too

* Check in refactored hostgroup support

* Check in refactored check_command, hosts, and hostgroup with a few tests

* Checking in service code

* Add in dependency for icinga2 provider

* Add documentation. Refactor, fix and extend based on feedback from Hashicorp

* Added checking and validation around invalid URL and unavailable server
2016-12-12 15:28:26 +00:00
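A sketch of the host and hostgroup resources mentioned in the Icinga2 bullets above; the provider and resource argument names are assumptions about this provider's schema.

```
# Sketch of the icinga2 resources referenced above; argument names are assumptions.
provider "icinga2" {
  api_url = "https://icinga.example.com:5665/v1"
}

resource "icinga2_host" "web" {
  hostname      = "web01.example.com"
  address       = "192.0.2.20"
  check_command = "hostalive"

  vars = {
    os = "linux"
  }
}

resource "icinga2_hostgroup" "web" {
  name         = "webservers"
  display_name = "Web Servers"
}
```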
Martin Atkins e772b45970 "external" data source, for integrating with external programs (#8768)
* "external" provider for gluing in external logic

This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.

It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.

* Unit test for the "resourceProvider" utility function

This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.

* Allow a provider to export a resource with the provider's name

If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.

* provider/external: data source

A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.

* website: documentation for the "external" provider
2016-12-05 17:24:57 +00:00
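The gateway protocol described above is simple: Terraform runs the program, passes the `query` map to it as JSON on stdin, and expects a flat JSON object of strings back on stdout, exposed as `result`. A minimal sketch, assuming a hypothetical `lookup.py` script in the module directory:

```
# Sketch: run an external program and consume its JSON result.
# The program receives the query map as JSON on stdin and must print
# a flat JSON object of strings on stdout.
data "external" "example" {
  program = ["python", "${path.module}/lookup.py"]

  query = {
    environment = "production"
  }
}

output "looked_up_value" {
  value = "${data.external.example.result.some_key}"
}
```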
John Engelman 243ecf3b4f [Provider] Rancher (#9173)
* Vendor Rancher Go library.

* Implement Rancher Provider.

Starting implementation taken from
https://github.com/platanus/terraform-provider-rancher

Commits from jidonoso@gmail.com and raphael.pinson@camptocamp.com
2016-12-05 15:29:41 +00:00
Mitchell Hashimoto fd498fbfff Merge pull request #9538 from hashicorp/f-nomad-provider
provider/nomad: Nomad provider for managing jobs
2016-11-09 18:34:55 -08:00
Martin Atkins b2b5831205 "vault" provider registration
To reduce the risk of secret exposure via Terraform state and log output,
we default to creating a relatively-short-lived token (20 minutes) such
that Vault can, where possible, automatically revoke any retrieved
secrets shortly after Terraform has finished running.

This has some implications for usage of this provider that will be spelled
out in more detail in the docs that will be added in a later commit, but
the most significant implication is that a plan created by "terraform plan"
that includes secrets leased from Vault must be *applied* before the
lease period expires to ensure that the issued secrets remain valid.

No resources yet. They will follow in subsequent commits.
2016-10-29 23:16:57 -07:00
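A sketch of the provider block described above. The 20-minute default corresponds to a lease TTL of 1200 seconds; the `max_lease_ttl_seconds` override shown here is an assumption about the eventual argument name.

```
# Sketch of configuring the vault provider described above.
# max_lease_ttl_seconds is an assumed argument name for overriding the
# short-lived (20 minute) child token default mentioned in the commit.
provider "vault" {
  address = "https://vault.example.com:8200"

  # 20 minutes = 1200 seconds; keep this short, since a plan that leases
  # secrets from Vault must be applied before the lease expires.
  max_lease_ttl_seconds = 1200
}
```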
Mitchell Hashimoto bb5f6498e2
provider/nomad 2016-10-24 10:34:06 -07:00
Alexander Hellbom d02067a75e Add PagerDuty provider 2016-10-24 14:19:55 +02:00
stack72 a2970e631c
Merge branch 'cwood/bitbucket-provider' of https://github.com/cwood/terraform into cwood-cwood/bitbucket-provider 2016-09-21 19:35:58 +01:00
Joe Topjian c2469c95f4 provider/rabbitmq: Initial Commit of RabbitMQ Provider
Contains provider configuration, a rabbitmq_vhost resource, and
acceptance test.
2016-09-01 03:19:16 +00:00
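A sketch of the provider configuration and the single `rabbitmq_vhost` resource it ships with, per the commit above; the argument names are assumptions.

```
# Sketch of the RabbitMQ provider and its initial vhost resource;
# argument names are assumptions.
provider "rabbitmq" {
  endpoint = "http://rabbitmq.example.com:15672"
  username = "guest"
  password = "guest"
}

resource "rabbitmq_vhost" "example" {
  name = "example-vhost"
}
```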
James Nugent e181d753fe build: Fix ordering of plugin list 2016-08-09 15:03:24 -04:00
Colin Wood bd9ddff0cc Bitbucket provider for terraform 2016-08-08 09:45:16 -07:00
Brad Sickles 70cadcf31d Implement archive provider and "archive_file" resource. (#7322) 2016-08-08 12:56:44 +12:00
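A sketch of the `archive_file` resource introduced in the commit above (later Terraform versions expose this as a data source instead); the argument names are assumptions.

```
# Sketch of archive_file as introduced here (as a resource);
# argument names are assumptions.
resource "archive_file" "lambda_package" {
  type        = "zip"
  source_file = "${path.module}/lambda/handler.py"
  output_path = "${path.module}/dist/handler.zip"
}
```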
Raphael Randschau 9081cabd6e Add scaleway provider (#7331)
* Add scaleway provider

this PR allows the entire scaleway stack to be managed with terraform

example usage looks like this:

```
provider "scaleway" {
  api_key = "snap"
  organization = "snip"
}

resource "scaleway_ip" "base" {
  server = "${scaleway_server.base.id}"
}

resource "scaleway_server" "base" {
  name = "test"
  # ubuntu 14.04
  image = "aecaed73-51a5-4439-a127-6d8229847145"
  type = "C2S"
}

resource "scaleway_volume" "test" {
  name = "test"
  size_in_gb = 20
  type = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  server = "${scaleway_server.base.id}"
  volume = "${scaleway_volume.test.id}"
}

resource "scaleway_security_group" "base" {
  name = "public"
  description = "public gateway"
}

resource "scaleway_security_group_rule" "http-ingress" {
  security_group = "${scaleway_security_group.base.id}"

  action = "accept"
  direction = "inbound"
  ip_range = "0.0.0.0/0"
  protocol = "TCP"
  port = 80
}

resource "scaleway_security_group_rule" "http-egress" {
  security_group = "${scaleway_security_group.base.id}"

  action = "accept"
  direction = "outbound"
  ip_range = "0.0.0.0/0"
  protocol = "TCP"
  port = 80
}
```

Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.

* Update IP read to handle 404 gracefully

* Read back resource on update

* Ensure IP detachment works as expected

Sadly this is not part of the official scaleway api just yet

* Adjust detachIP helper

based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378

* Cleanup documentation

* Rename api_key to access_key

following @stack72's suggestion, renaming the provider's api_key for more clarity

* Make tests less chatty by using custom logger
2016-07-13 21:03:41 +01:00