Commit Graph

784 Commits

Author SHA1 Message Date
Martin Atkins 6ba6508ec9 command: pass the locked plugin hashes into ContextOpts
By reading our lock file and passing this into the context, we ensure that
only the plugins referenced in the lock file can be used. As of this
commit there is no way to create that lock file, but that will follow soon
as part of "terraform init".

We also provide a way to force a particular set of SHA256s. The main use
for this is to allow us to persist a set of plugins in the plan and
check that the same plugins are used during apply, but it may also be useful
for automated tests.
2017-06-09 14:03:59 -07:00
Martin Atkins 720670fae7 command: helper to manage the provider plugins lock file
This is just a JSON file with the SHA256 digests of the plugin
executables.
2017-06-09 14:03:59 -07:00
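To make the above concrete, here is a minimal sketch of a lock-file helper, assuming a flat JSON object mapping provider names to hex-encoded SHA256 digests; the type names, file layout, and `plugins.json` path are illustrative assumptions, not the actual command package code.

```go
package main

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"os"
)

// pluginSHA256Locks maps provider names to hex-encoded SHA256 digests of
// their plugin executables (hypothetical layout).
type pluginSHA256Locks map[string]string

// readPluginLocks loads the lock file, returning an empty map if the file
// does not exist yet.
func readPluginLocks(path string) (pluginSHA256Locks, error) {
	data, err := ioutil.ReadFile(path)
	if os.IsNotExist(err) {
		return pluginSHA256Locks{}, nil
	}
	if err != nil {
		return nil, err
	}
	locks := pluginSHA256Locks{}
	if err := json.Unmarshal(data, &locks); err != nil {
		return nil, err
	}
	return locks, nil
}

// writePluginLocks persists the digests as pretty-printed JSON.
func writePluginLocks(path string, locks pluginSHA256Locks) error {
	data, err := json.MarshalIndent(locks, "", "  ")
	if err != nil {
		return err
	}
	return ioutil.WriteFile(path, data, 0644)
}

func main() {
	locks, err := readPluginLocks("plugins.json")
	if err != nil {
		log.Fatal(err)
	}
	locks["example"] = "placeholder-sha256-hex-digest" // not a real checksum
	if err := writePluginLocks("plugins.json", locks); err != nil {
		log.Fatal(err)
	}
}
```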
Martin Atkins 7d0a98af46 command: provider resolver to also check SHA256 constraints when set
In addition to looking for matching versions, the caller can also
optionally require a specific executable by its SHA256 digest.
2017-06-09 14:03:59 -07:00
Martin Atkins e3401947a6 plugin/discovery: PluginRequirements can specify SHA256 digests
As well as constraining plugins by version number, we also want to be
able to pin plugins to use specific executables so that we can detect
drift in available plugins between commands.

This commit allows such requirements to be specified, but doesn't yet
specify any such requirements, nor validate them.
2017-06-09 14:03:59 -07:00
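A hedged sketch of the digest check these requirements enable: hash a candidate plugin executable and compare it with the pinned value, treating an empty requirement as "any build is acceptable". The function names and error wording are illustrative, not the discovery package's actual API.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// executableSHA256 hashes a plugin binary on disk.
func executableSHA256(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}

// acceptablePlugin reports whether the binary at path matches the required
// digest; an empty requirement means any discovered build may be used.
func acceptablePlugin(path string, requiredSHA256 []byte) error {
	if len(requiredSHA256) == 0 {
		return nil
	}
	got, err := executableSHA256(path)
	if err != nil {
		return err
	}
	if !bytes.Equal(got, requiredSHA256) {
		return fmt.Errorf("plugin %s does not match the locked checksum", path)
	}
	return nil
}

func main() {
	// With no pinned digest the check is a no-op, mirroring requirements
	// that specify only a version constraint.
	fmt.Println(acceptablePlugin("terraform-provider-example", nil))
}
```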
Martin Atkins 9a398a7793 command: require resource to be in config before import
Previously we encouraged users to import a resource and _then_ write the
configuration block for it. This ordering creates lots of risk, since
for various reasons users can end up subsequently running Terraform
without any configuration in place, which then causes Terraform to want
to destroy the resource that was imported.

Now we invert this and require a minimal configuration block be written
first. This helps ensure that the user ends up with a correlated resource
config and state, protecting against any inconsistency caused by typos.

This addresses #11835.
2017-06-09 14:03:59 -07:00
Martin Atkins 7d8719150c command: validate import resource address early
Previously we deferred validation of the resource address on the import
command until we were in the core guts, which caused the error responses
to be rather unhelpful.

By validating these things early we can give better feedback to the user.
2017-06-09 14:03:59 -07:00
Martin Atkins f6305fcc27 command: remove commented-out, misplaced tests
For some reason there was a block of commented-out tests for the refresh
command in the test file for the import command. Here we remove them to
reduce the noise in this file.
2017-06-09 14:03:59 -07:00
James Bardin 044ad5ef59 rename some Constraints methods per code review 2017-06-09 14:03:59 -07:00
James Bardin 2994c6ac5d add init getPlugin test
Add a mock plugin getter, and test that we can fetch the requested version
of the plugins.
2017-06-09 14:03:59 -07:00
James Bardin 4c32cd432a make getProvider pluggable 2017-06-09 14:03:59 -07:00
James Bardin 66ebff90cd move some more plugin search path logic to command
This leaves less to change when we remove the old search path.
2017-06-09 14:03:59 -07:00
James Bardin 2749946f5c basic plugin getter
Add discovery.GetProviders to fetch plugins from the releases site.

This is an early version, with no tests, that only (probably) fetches
plugins from the default location. The URLs are still subject to change,
and since there are no plugin releases, it doesn't work at all yet.
2017-06-09 14:03:59 -07:00
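Since the commit itself notes the URLs were still subject to change, the following is only a rough sketch of the shape of such a getter: build a download URL for the current OS/architecture and copy the archive into the plugin directory. The host name, URL layout, and function signature are placeholders, not the real discovery.GetProviders.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"runtime"
)

// getProvider downloads one provider build into dst. The URL pattern below
// is a placeholder only.
func getProvider(dst, name, version string) error {
	url := fmt.Sprintf(
		"https://releases.example.com/terraform-provider-%s/%s/terraform-provider-%s_%s_%s_%s.zip",
		name, version, name, version, runtime.GOOS, runtime.GOARCH,
	)

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s fetching %s", resp.Status, url)
	}

	out, err := os.Create(filepath.Join(dst, fmt.Sprintf("terraform-provider-%s_v%s.zip", name, version)))
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	if err := getProvider(os.TempDir(), "example", "0.1.0"); err != nil {
		fmt.Println("download failed:", err)
	}
}
```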
James Bardin 7d2d951f27 Rename VersionSet to Constraints
VersionSet is a wrapper around version.Constraints, so rename it as
such.
2017-06-09 14:03:59 -07:00
James Bardin 718ede0636 have Meta.Backend use a Config rather than loading
Instead of providing a path in BackendOpts, provide a loaded
*config.Config instead. This reduces the number of places where
configuration is loaded.
2017-06-09 14:03:59 -07:00
Martin Atkins 3af0ecdf01 command: "terraform providers" command
This new command prints out the tree of modules annotated with their
associated required providers.

The purpose of this command is to help users answer questions such as
"why is this provider required?", "why is Terraform using an older version
of this provider?", and "what combination of modules is creating an
impossible provider version situation?"

For configurations using many modules this sort of question is likely to
come up a lot once we support versioned providers.

As a bonus use-case, this command also shows explicitly when a provider
configuration is being inherited from a parent module, to help users to
understand where the configuration is coming from for each module when
some child modules provide their own provider configurations.
2017-06-09 14:03:59 -07:00
Martin Atkins 4ab8973520 core: provide config to all import context tests
We're going to use config to determine provider dependencies, so we need
to always provide a config when instantiating a context or we'll end up
loading no providers at all.

We previously had a test for running "terraform import -config=''" to
disable the config entirely, but this test is now removed because it makes
no sense. The actual functionality it's testing still remains for now,
but it will be removed in a subsequent commit when we start requiring that
a resource to be imported must already exist in configuration.
2017-06-09 14:03:59 -07:00
Martin Atkins c835ef8ff3 Update tests for the new ProviderResolver interface
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.

This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
2017-06-09 14:03:59 -07:00
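A sketch of what such a resolver interface might look like: given the provider requirements determined from config, return a factory per provider (or errors for requirements that cannot be met). The method name and the stand-in types here are assumptions for illustration, not the exact terraform package definitions.

```go
// Illustrative sketch only; types are simplified stand-ins.
package terraform

// Constraints stands in for the version constraints attached to each
// provider requirement.
type Constraints interface{}

// ResourceProvider stands in for the real provider interface.
type ResourceProvider interface{}

// ResourceProviderFactory builds a provider instance once a concrete
// plugin version has been selected.
type ResourceProviderFactory func() (ResourceProvider, error)

// ResourceProviderResolver turns a set of provider requirements, determined
// later from configuration, into the factory map the context needs.
type ResourceProviderResolver interface {
	ResolveProviders(reqd map[string]Constraints) (map[string]ResourceProviderFactory, []error)
}
```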
Martin Atkins 7ca592ac06 core: use ResourceProviderResolver to resolve providers
Previously the set of providers was fixed early on in the command package
processing. In order to be version-aware we need to defer this work until
later, so this interface exists so we can hold on to the possibly-many
versions of plugins we have available and then later, once we've finished
determining the provider dependencies, select the appropriate version of
each provider to produce the final set of providers to use.

This commit establishes the use of this new mechanism, and thus populates
the provider factory map with only the providers that result from the
dependency resolution process.

This disables support for internal provider plugins, though the
mechanisms for building and launching these are still here vestigially,
to be cleaned up in a subsequent commit.

This also adds a new awkward quirk to the "terraform import" workflow
where one can't import a resource from a provider that isn't already
mentioned (implicitly or explicitly) in config. We will do some UX work
in subsequent commits to make this behavior better.

This breaks many tests due to the change in interface, but to keep this
particular diff reasonably easy to read the test fixes are split into
a separate commit.
2017-06-09 14:03:59 -07:00
Martin Atkins d6b6dbb5c6 command: correct provider name in the test fixture for push
Currently this doesn't matter much, but we're about to start checking the
availability of providers early on and so we need to use the correct name
for the mock set of providers we use in command tests, which includes
only a provider named "test".

Without this change, the "push" tests will begin failing once we start
verifying this, since there's no "aws" provider available in the test
context.
2017-06-09 14:03:59 -07:00
Martin Atkins 9b4f15c261 plugin: move Client function into plugin, from plugin/discovery
Having this as a method of PluginMeta felt most natural, but unfortunately
that means that discovery must depend on plugin and plugin in turn
depends on core Terraform, thus making the discovery package hard to use
without creating dependency cycles.

To resolve this, we invert the dependency and make the plugin package be
responsible for instantiating clients given a meta, using a top-level
function.
2017-06-09 14:03:59 -07:00
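A hedged sketch of the inversion described above: a top-level constructor in the plugin package that builds a go-plugin client from a discovered plugin's metadata, so the discovery package never has to import plugin. The PluginMeta type, handshake values, and plugin map below are stand-ins, not the real package contents.

```go
// Sketch of a top-level Client constructor living in the plugin package.
package plugin

import (
	"os/exec"

	goplugin "github.com/hashicorp/go-plugin"
)

// PluginMeta is a stand-in for discovery's metadata about a plugin found
// on disk: its name, version, and executable path.
type PluginMeta struct {
	Name    string
	Version string
	Path    string
}

// Handshake and PluginMap are assumed package-level values; the real magic
// cookie and plugin set are defined elsewhere in the plugin package.
var Handshake = goplugin.HandshakeConfig{
	ProtocolVersion:  1,
	MagicCookieKey:   "TF_PLUGIN_MAGIC_COOKIE",
	MagicCookieValue: "placeholder",
}

var PluginMap = map[string]goplugin.Plugin{}

// Client launches the plugin subprocess for the given meta. Keeping this
// here, rather than as a method on the discovery type, avoids the
// discovery -> plugin -> terraform dependency cycle described above.
func Client(m PluginMeta) *goplugin.Client {
	return goplugin.NewClient(&goplugin.ClientConfig{
		Cmd:             exec.Command(m.Path),
		HandshakeConfig: Handshake,
		Plugins:         PluginMap,
		Managed:         true,
	})
}
```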
Martin Atkins 8364383c35 Push plugin discovery down into command package
Previously we did plugin discovery in the main package, but as we move
towards versioned plugins we need more information available in order to
resolve plugins, so we move this responsibility into the command package
itself.

For the moment this is just preserving the existing behavior as long as
there are only internal and unversioned plugins present. This is the
final state for provisioners in 0.10, since we don't want to support
versioned provisioners yet. For providers this is just a checkpoint along
the way, since further work is required to apply version constraints from
configuration and support additional plugin search directories.

The automatic plugin discovery behavior is not desirable for tests because
we want to mock the plugins there, so we add a new backdoor for the tests
to use to skip the plugin discovery and just provide their own mock
implementations. Most of this diff is thus noisy rework of the tests to
use this new mechanism.
2017-06-09 14:03:59 -07:00
Jake Champlin 7894478c8c Merge pull request #14681 from svanharmelen/f-review
Use `helper.schema.Provisioner` in Chef provisioner V2
2017-05-30 14:26:51 -04:00
James Bardin dbe9599820 remove dead code 2017-05-26 15:01:39 -04:00
James Bardin b3795e2d29 remove old state hook code 2017-05-24 16:16:35 -04:00
Mat Schaffer 76d4abd12a Fix typo on state migration input error 2017-05-23 20:50:12 -07:00
Vladislav Rassokhin df4342bc3d Regenerate plugin list since provisioners were changed in previous commits 2017-05-19 20:54:08 +02:00
David Glasser 783908ee25 core: fix bad Sprintf in backend migration message (#14601)
Before this, invoking this codepath would print

    Terraform has successfully migrated from legacy remote state to your
    configured remote state.%!(EXTRA string=s3)
2017-05-19 17:01:44 +03:00
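The `%!(EXTRA string=s3)` suffix is what `fmt.Sprintf` appends when it receives more arguments than the format string has verbs for. A minimal illustration of the bug shape and its fix (not the actual call site or message):

```go
package main

import "fmt"

func main() {
	backendType := "s3"

	// Buggy shape: an argument with no matching verb in the format string.
	fmt.Println(fmt.Sprintf("migrated to your configured remote state.", backendType))
	// Output ends with: remote state.%!(EXTRA string=s3)

	// Fixed shape: the format string actually consumes the argument.
	fmt.Println(fmt.Sprintf("migrated to your configured %q backend.", backendType))
}
```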
Paul Stack 055c18e302 core/provider-split: Split out the Oracle OPC provider to new structure (#14362)
* core/providersplit: Split OPC Provider to separate repo

As we march towards Terraform 0.10.0, we are going to start building the
terraform providers as separate binaries - this will allow us to
continually release them. Before we go to 0.10.0, we need to be able to
continue building providers in the same manner; therefore, we have
hardcoded the path of the provider in the generate-plugins.go file.

The interim solution will require us to vendor the opc provider and any
child dependencies, but when we get to 0.10.0, we will no longer have to
do this - the core will auto download the plugin binary. The plugin
package will have its own dependencies vendored as well.

* core/providersplit: Removing the builtin version of OPC provider

* core/providersplit: Vendoring the OPC plugin

* core/providersplit: update internal plugin list

* core/providersplit: remove unused govendor item
2017-05-16 19:53:25 +03:00
yanndegat 21d8125d5c provider/ovh: new provider (#12669)
* vendor: add go-ovh

* provider/ovh: new provider

* provider/ovh: Adding PublicCloud User Resource

* provider/ovh: Adding VRack Attachment Resource

* provider/ovh: Adding PublicCloud Network Resource

* provider/ovh: Adding PublicCloud Subnet Resource

* provider/ovh: Adding PublicCloud Regions Datasource

* provider/ovh: Adding PublicCloud Region Datasource

* provider/ovh: Fix Acctests using project's regions
2017-05-16 17:26:43 +03:00
Dmitrii Korotovskii ace0456d58 http provider and http request data source 2017-05-08 17:37:48 -07:00
James Bardin 947db2304a it's possible to get a nil diff in PreApply
Looking through the operations in node_resource_apply and
node_resource_destroy, there are multiple nil checks for diffApply, so
we need to continue to assume that the diff can be nil there
until we ensure that it is non-nil in all cases.

So regardless of how we came to get a nil diff in the UiHook PreApply
method, we need to check it.
2017-04-28 21:59:56 -04:00
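The defensive check described above amounts to an early return when the diff is nil. The types and method signature below are simplified stand-ins for the real terraform hook interfaces, just to show the shape of the guard:

```go
// Sketch of a nil-tolerant PreApply hook; types are simplified stand-ins.
package command

type HookAction int

const HookActionContinue HookAction = 0

// InstanceDiff stands in for terraform.InstanceDiff.
type InstanceDiff struct {
	Destroy    bool
	Attributes map[string]string
}

type UiHook struct{}

// PreApply must tolerate a nil diff, since some code paths upstream still
// allow one to reach the hook.
func (h *UiHook) PreApply(id string, d *InstanceDiff) (HookAction, error) {
	if d == nil {
		// Nothing to report; let the apply continue.
		return HookActionContinue, nil
	}
	// ...normal UI output describing the pending change...
	return HookActionContinue, nil
}
```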
Justin LaRose c6ad44de10 update error response when env does not exist (#14009) 2017-04-27 11:22:30 +01:00
stack72 898ac02854
Merge branch 'feature/gitlab_provider' of https://github.com/richardc/terraform into richardc-feature/gitlab_provider 2017-04-27 05:08:12 +12:00
Edward Betts be265479a9 correct spelling mistakes (#13979) 2017-04-27 02:10:04 +12:00
James Bardin 6ef7c83ec5 add data-loss warning to SIGINT handler in apply
Warn the user about data loss after receiving an interrupt.
2017-04-25 11:43:59 -04:00
Richard Clamp 631b0b865c provider/gitlab: add gitlab provider and `gitlab_project` resource
Here we add a basic provider with a single resource type.

It's copied heavily from the `github` provider and `github_repository`
resource, as there is some overlap in those types/apis.

~~~
resource "gitlab_project" "test1" {
  name = "test1"
  visibility_level = "public"
}
~~~

We implement this in terms of the
[go-gitlab](https://github.com/xanzy/go-gitlab) library, which provides
a wrapping of the [gitlab api](https://docs.gitlab.com/ee/api/).

We have been a little selective in the properties we surface for the
project resource, as not all properties are very instructive.
Notable is the removal of the `public` bool as the `visibility_level`
will take precedence if both are supplied, which leads to confusing
interactions if they disagree.
2017-04-24 11:38:20 +01:00
Martin Atkins b1763e262a Restore stringer-generated files back to new version
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.

This restores us back to the new format again.
2017-04-21 14:49:18 -07:00
Jasmin Gacic 61499cfcf0 Provider Oneandone (#13633)
* Terraform Provider 1&1

* Addressing pull request remarks

* Fixed imports

* Fixing remarks

* Test optimization
2017-04-21 17:19:10 +03:00
James Bardin dca89519ec Merge pull request #13825 from hashicorp/jbardin/reconfigure
add init -reconfigure flag
2017-04-21 09:03:12 -04:00
James Bardin 0e0f0b64b9 add init -reconfigure test
Check that we can reconfigure a backend ignoring the saved config, and
without affecting the saved backend.
2017-04-20 18:15:47 -04:00
James Bardin 7aa2ce8341 add -reconfigure option for init
The reconfigure flag will force init to ignore any saved backend state.
This is useful when a user does not want any backend migration to
happen, or if the saved configuration can't be loaded at all for some
reason.
2017-04-20 18:15:46 -04:00
Jake Champlin 35388cbc31 Merge pull request #13468 from hashicorp/f-oracle-compute
provider/opc: Add Oracle Compute Provider
2017-04-20 14:52:39 -04:00
Quentin Machu bf8d932d23
provider/local: Implement a new local_file resource
This commit adds the ability to provision files locally.
This is useful for cases where Terraform generates assets
such as TLS certificates or templated documents that need
to be saved locally.

- While output variables can be used to return values to
the user, it is not especially suitable for large content or
when many of these are generated, nor is it practical for
operators to manually save them on disk.
- While `local-exec` could be used with an `echo`, this
provider works across platforms and does not require any
convoluted escaping.
2017-04-13 14:57:29 -07:00
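The heart of such a resource is small: take the rendered content from config and write it to the requested path. A stdlib-only sketch of that create step, using `filename` and `content` as the attribute names (the surrounding helper/schema wiring is omitted):

```go
package main

import (
	"io/ioutil"
	"log"
)

// createLocalFile is the essence of the resource's create step: write the
// configured content to the configured filename. In the real provider this
// sits inside a helper/schema CreateFunc.
func createLocalFile(filename, content string) error {
	return ioutil.WriteFile(filename, []byte(content), 0644)
}

func main() {
	// Example: persist a generated artifact such as a PEM-encoded certificate.
	if err := createLocalFile("example.pem", "-----BEGIN CERTIFICATE-----\n...\n"); err != nil {
		log.Fatal(err)
	}
}
```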
James Bardin d1b4df42ed missing PersistState in env new 2017-04-12 13:57:22 -04:00
David Joos bbd2e6de35 Fix for minor typo
"destionation" -> "destination"
2017-04-11 13:04:36 +01:00
tombuildsstuff 3a084b061a Merge branch 'master' into f-oracle-merge 2017-04-07 11:15:36 +01:00
Jake Champlin 456d43e200
Merge remote-tracking branch 'origin/master' into f-oracle-compute 2017-04-04 16:14:51 -04:00
James Bardin fb4a365d12 noop migrate copy, add -lock and -input
A couple commits got rebased together here, and it's easier to enumerate
them in a single commit.

Skip copying of states during migration if they are the same state. This
can happen when trying to reconfigure a backend's options, or if the
state was manually transferred. This can fail unexpectedly with locking
enabled.

Honor the `-input` flag for all confirmations (the new test hit some
more). Also unify where we reference the Meta.forceInitCopy and transfer
the value to the existing backendMigrateOpts.force field.
2017-04-04 14:54:48 -04:00
James Bardin aad143b6d1 set stateLock to true when building meta flagSet
Any commands that use `stateLock` should have a flag to set that
value, but set a failsafe to true just in case.
2017-04-04 14:44:58 -04:00
James Bardin d059939f88 Merge pull request #13262 from hashicorp/jbardin/lock-timeouts
lock timeouts
2017-04-04 14:30:20 -04:00
Radek Simko fc72a20c66 command/hook_ui: Increase max length of state IDs (#13317) 2017-04-04 15:41:54 +01:00
Jake Champlin edc524df55
provider/opc: Update OPC Provider
Updates the OPC provider to a fully working version.
2017-04-03 18:24:57 -04:00
James Bardin 3d604851c2 test -lock-timeout from cli 2017-04-03 11:50:19 -04:00
James Bardin 5eca913b14 add cli flags for -lock-timeout
Add the -lock-timeout flag to the appropriate commands.
Add the -lock flag to `init` and `import` which were missing it.
Set both stateLock and stateLockTimeout in Meta.flagsSet, and remove the
extra references for clarity.
2017-04-01 17:09:21 -04:00
James Bardin 305ef43aa6 provide contexts to clistate.Lock calls
Add fields required to create an appropriate context for all calls to
clistate.Lock.

Add missing checks for Meta.stateLock, where we would attempt to lock,
even if locking should be skipped.
2017-04-01 17:09:20 -04:00
James Bardin 3f0dcd1308 Have the clistate Lock use LockWithContext
- Have the ui Lock helper use state.LockWithContext.
- Rename the message package to clistate, since that's how it's imported
  everywhere.
- Use a more idiomatic placement of the Context in the LockWithContext
  args.
2017-04-01 17:09:20 -04:00
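Wiring the lock through LockWithContext boils down to retrying a non-blocking lock attempt until either it succeeds or the context deadline (driven by `-lock-timeout`) expires. A stdlib-only sketch of that pattern; `tryLock` and the retry interval are stand-ins, not the state package's real API:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errLockHeld = errors.New("state locked by another process")

// tryLock stands in for a single, non-blocking lock attempt.
func tryLock() error { return errLockHeld }

// lockWithContext retries until the lock is acquired or ctx expires, which
// is the behavior the -lock-timeout flag ultimately drives.
func lockWithContext(ctx context.Context) error {
	for {
		if err := tryLock(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for state lock: %v", errLockHeld)
		case <-time.After(500 * time.Millisecond):
			// Back off briefly and retry.
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	fmt.Println(lockWithContext(ctx))
}
```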
James Bardin 75458a182d remove extra state.Locker assertions
All states are lockers, so get rid of the extra assertions.
2017-04-01 17:01:45 -04:00
James Bardin 8050eda52d don't delete local state on a local backend
Don't erase local state during backend migration if the new and old
paths are the same. Skipping the confirmation and copy is handled in
another patch, but the local state was always erased by default, even
when it was our new state.
2017-03-31 15:26:23 -04:00
James Bardin 50023e9a60 honor `input=false` in state migration
return an error when confirming a copy if -input=false
2017-03-29 18:11:45 -04:00
James Bardin 7d23e1ef20 add equivalent tests to meta_backend_test 2017-03-29 17:50:55 -04:00
James Bardin c891ab50b7 detect when backend.Hash needs update
It's possible to not change the backend config, but require updating the
stored backend state by moving init options from the config file to the
`-backend-config` flag. If the config is the same, but the hash doesn't
match, update the stored state.
2017-03-29 16:03:51 -04:00
James Bardin ff2d753062 add Rehash to terraform.BackendState
This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config vs that of a backend read
from a state. This will prevent init from reinitializing when using
`-backend-config` options that match the existing state.
2017-03-29 15:53:42 -04:00
Martin Atkins 76dca009e0 Allow escaped interpolation-like sequences in variable defaults
The variable validator assumes that any AST node it gets from an
interpolation walk is an indicator of an interpolation. Unfortunately,
back in f223be15 we changed the interpolation walker to emit a LiteralNode
as a way to signal that the result is a literal but not identical to the
input due to escapes.

The existence of this issue suggests a bit of a design smell in that the
interpolation walker interface at first glance appears to skip over all
literals, but it actually emits them in this one situation. In the long
run we should perhaps think about whether the abstraction is right here,
but this is a shallow, tactical change that fixes #13001.
2017-03-29 09:25:57 -07:00
Martin Atkins 21cd5595e2 Update stringer-generated files to new boilerplate
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560

This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
    go get -u golang.org/x/tools/cmd/stringer
2017-03-29 08:07:06 -07:00
James Bardin f172b4a023 Merge pull request #13105 from hashicorp/jbardin/command-env
disallow env names that aren't url-safe
2017-03-27 18:53:44 -04:00
James Bardin 2cffa25235 Add test to verify that Validation isn't called
The apply won't succeed because we don't have a valid plan, but this
verifies that providing a plan file prevents Validation.
2017-03-27 18:39:18 -04:00
James Bardin 9d118325b3 Reject names that aren't url-safe
Environment names can be used in a number of contexts, and should be
properly escaped for safety. Since most state names are store in path
structures, and often in a URL, use `url.PathEscape` to check for
disallowed characters
2017-03-27 18:00:56 -04:00
James Bardin 8027fe9e08 Don't Validate if we have an execution plan
The plan file should contain all data required to execute the apply
operation. Validation requires interpolation, and the `file()`
interpolation function may fail if the module files are not present.
This is the case currently with how TFE executes plans.
2017-03-27 17:11:50 -04:00
James Bardin 54e536cfe0 add `-force-copy` option to init command
The `-force-copy` option will suppress confirmation for copying state
data.

Modify some tests to use the option, making sure to leave coverage of
the Input code path.
2017-03-22 08:47:26 -04:00
Mitchell Hashimoto d01886a644
command: remove legacy remote state on migration
Fixes #12871

We were forgetting to remove the legacy remote state from the actual
state value when migrating. This only causes an issue when saving a plan
since the plan contains the state itself and causes an error where both
a backend + legacy state exist.

If saved plans aren't used this causes no noticeable issue.

Due to buggy upgrades already existing in the wild, I also added code to
clear the remote section if it exists in a standard unchanged backend.
2017-03-20 10:14:59 -07:00
Mitchell Hashimoto 23505cc2a8 Merge pull request #12818 from hashicorp/b-legacy
command: use backendinit instead of initializing legacy directly
2017-03-17 11:06:01 -07:00
James Bardin 434e78158a Merge pull request #12812 from hashicorp/jbardin/hook-ui-race
fix race in hook ui PreApply test
2017-03-17 14:05:39 -04:00
Mitchell Hashimoto 4f59576231
command: fix awkward wording in message 2017-03-17 10:52:22 -07:00
James Bardin b9931c437d fix race in hook ui PreApply test
Fix a race in the PreApply test, and make the PreApply background task
actually cancellable.
2017-03-17 13:49:05 -04:00
Mitchell Hashimoto 96e38041ab
command: use backendinit instead of initializing legacy directly
Fixes #12806

This should've been part of 2c19aa69d9

This is the same issue, just missed a spot. Tests are hard to cover for
this since we're removing the legacy backends one by one; eventually
they'll all be gone. A good sign is that we don't import backendlegacy at all
anymore in command/.
2017-03-17 10:41:13 -07:00
Mitchell Hashimoto c87f3dfdd5
command/init: add test for -backend-config k/v 2017-03-17 10:22:48 -07:00
Mitchell Hashimoto df8529719c
command/init: backend-config accepts key=value pairs
This augments backend-config to also accept key=value pairs.
This should make Terraform easier to script rather than having to
generate a JSON file.

You must still specify the backend type as a minimal block in
configuration, for example:

```
terraform { backend "consul" {} }
```

This is required because Terraform needs to be able to detect the
_absence_ of that value for unsetting, if that is necessary at some
point.
2017-03-16 23:27:05 -07:00
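Accepting both a file path and `key=value` pairs means the flag value needs a little sniffing before use. A sketch of the pair-parsing half (the merge of parsed keys into the backend block is elided; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseBackendConfigKV splits a -backend-config value of the form
// "key=value" into its parts; the value itself may contain '='.
func parseBackendConfigKV(raw string) (key, value string, err error) {
	parts := strings.SplitN(raw, "=", 2)
	if len(parts) != 2 || parts[0] == "" {
		return "", "", fmt.Errorf("invalid -backend-config %q: expected key=value or a file path", raw)
	}
	return parts[0], parts[1], nil
}

func main() {
	k, v, _ := parseBackendConfigKV("address=demo.consul.io")
	fmt.Println(k, "=>", v)
}
```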
Mitchell Hashimoto e1f4eca93c
command: apply needs to look at the right field for backend state
Plans were properly encoding backend configuration but the apply was
reading it from the wrong field. :( This meant that every apply from a
plan was applying it locally with backends.

This needs to get released ASAP.
2017-03-16 15:44:15 -07:00
Mitchell Hashimoto 25312c8985
command/push: update copy for remote state error 2017-03-16 14:41:37 -07:00
James Bardin 9bfa361e21 Merge pull request #12778 from hashicorp/jbardin/GH-12741
change to default state after backend migration
2017-03-16 16:07:25 -04:00
Mitchell Hashimoto 6921457601 Merge pull request #12777 from hashicorp/b-refresh-empty
backend/local: allow refresh on empty/non-existent state
2017-03-16 13:04:27 -07:00
James Bardin ea095eda87 change to default state after backend migration
When migrating from a multi-state backend to a single-state backend, we
have to ensure that our locally configured environment is changed back
to "default", or we won't be able to access the new backend.
2017-03-16 15:55:32 -04:00
Mitchell Hashimoto 2be1f55cbb
backend/local: allow refresh on empty/non-existent state
This allows a refresh on a non-existent or empty state file. We changed
this in 0.9.0 to error, which seemed reasonable, but it turns out this
complicates automation that runs refresh, since it now needs to
determine whether the state file is empty before running.

It's easier to just revert this to a warning with exit code zero.

The reason this changed is that in 0.8.x and earlier, the output
was simply empty with exit code zero, which seemed odd.
2017-03-16 12:11:31 -07:00
Mitchell Hashimoto 81639480fb
command: recompute config hash with ConfigFile set
Fixes #12749

If we merge in an extra partial config we need to recompute the hash to
compare with the old value to detect that change.

This hash must NOT be stored, only used as a temporary value. We want
to keep the original hash in the state so that we don't detect a change
from the config (since the config will always be partial).
2017-03-16 11:47:59 -07:00
Mitchell Hashimoto 87201ec854
command/push: test for push with new backends 2017-03-16 10:52:58 -07:00
Mitchell Hashimoto 8b208a597d
command/push: don't allow pushing with local backend 2017-03-16 10:47:48 -07:00
Radek Simko 4448e45678 Merge pull request #12372 from hashicorp/f-kubernetes
kubernetes: Add provider + namespace resource
2017-03-16 07:18:39 +00:00
Mitchell Hashimoto f7964194eb
command: fix odd formatting that snuck in 2017-03-13 16:41:33 -07:00
Mitchell Hashimoto d475fc29a8
command: test that terraform meta information is passed through 2017-03-13 16:31:35 -07:00
Radek Simko f1db0fcf9b
kubernetes: Add provider + namespace resource 2017-03-13 21:19:17 +00:00
Radek Simko 4d6242dfe0 command: Add tests for UiHook (#12447) 2017-03-13 20:09:25 +00:00
Sean Chittenden 17fb98afa2 Circonus Provider (#12338)
* Begin stubbing out the Circonus provider.

* Remove all references to `reverse:secret_key`.

This value is dynamically set by the service and unused by Terraform.

* Update the `circonus_check` resource.

Still a WIP.

* Add docs for the `circonus_check` resource.

Commit miss, this should have been included in the last commit.

* "Fix" serializing check tags

I still need to figure out how I can make them order agnostic w/o using
a TypeSet.  I'm worried that's what I'm going to have to do.

* Spike a quick circonus_broker data source.

* Convert tags to a Set so the order does not matter.

* Add a `circonus_account` data source.

* Correctly spell account.

Pointed out by: @postwait

* Add the `circonus_contact_group` resource.

* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.

* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.

Change made with consent by Circonus to reduce confusion (@postwait, @maier, and several others).

* Use upstream constants where available.

* Import the latest circonus-gometrics.

* Move to using a Set of collectors vs a list attached to a single attribute.

* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.

* Inject a tag automatically.  Update gometrics.

* Checkpoint `circonus_metric` resource.

* Enable provider-level auto-tagging.  This is disabled by default.

* Rearrange metric.  This is an experimental "style" of a provider.  We'll see.

That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet?  That.  That exact
feeling of semi-confidence while being alone in the wilderness.  Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.

We'll know in another resource or two if this was a horrible mistake or
not.

* Begin moving `resource_circonus_check` over to the new world order/structure:

Much of this is WIP and incomplete, but here is the new supported
structure:

```
variable "used_metric_name" {
  default = "_usage`0`_used"
}

resource "circonus_check" "usage" {
  # collectors = ["${var.collectors}"]
  collector {
    id = "${var.collectors[0]}"
  }

  name       = "${var.check_name}"
  notes      = "${var.notes}"

  json {
    url = "https://${var.target}/account/current"

    http_headers = {
      "Accept"                = "application/json"
      "X-Circonus-App-Name"   = "TerraformCheck"
      "X-Circonus-Auth-Token" = "${var.api_token}"
    }
  }

  stream {
    name = "${circonus_metric.used.name}"
    tags = "${circonus_metric.used.tags}"
    type = "${circonus_metric.used.type}"
  }

  tags = {
    source = "circonus"
  }
}

resource "circonus_metric" "used" {
  name = "${var.used_metric_name}"

  tags = {
    source = "circonus"
  }

  type = "numeric"
}
```

* Document the `circonus_metric` resource.

* Updated `circonus_check` docs.

* If a port was present, automatically set it in the Config.

* Alpha sort the check parameters now that they've been renamed.

* Fix a handful of panics as a result of the schema changing.

* Move back to a `TypeSet` for tags.  After a stint with `TypeMap`, move
back to `TypeSet`.

A set of strings seems to match the API the best.  The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things.  For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality because assigning two
values to the same category is common.  And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back).  I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress.  In this case, simple is good.

While here bring some cleanups to _Metric since that was my initial
testing target.

* Rename `providerConfig` to `_ProviderConfig`

* Checkpoint the `json` check type.

* Fix a few residual issues re: missing descriptions.

* Rename `validateRegexp` to `_ValidateRegexp`

* Use tags as real sets, not just a slice of strings.

* Move the DiffSuppressFunc for tags down to the Elem.

* Fix up unit tests to chase the updated, default hasher function being used.

* Remove `Computed` attribute from `TypeSet` objects.

This fixes a pile of issues re: update that I was having.

* Rename functions.

`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`

* Various small cleanups and comments rolled into a single commit.

* Add a `postgresql` check type for the `circonus_check` resource.

* Rename various validator functions to be _CapitalCase vs capitalCase.

* Err... finish the validator renames.

* Add `GetFloat64()` support.

* Add `icmp_ping` check type support.

* Catch up to the _API*Attr renames.

Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.

* Clarify when the `target` attribute is required for the `postgresql`
check type.

* Correctly pull the metric ID attribute from the right location.

* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")

* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.

* Add support for the `http` check type.

* `s/SSL/TLS/g`

* Add support for `tcp` check types.

* Enumerate the available metrics that are supported for each check type.

* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.

* Add a `circonus_trigger` resource (a.k.a Circonus Ruleset).

* Rename a handful of functions to make it clear in the function name the
direction of flow for information moving through the provider.

TL;DR: Replace `parse` and `read` with "foo to bar"-like names.

* Fix the attribute name used in a validator.  Absent != After.

* Set the minimum `absent` predicate to 70s per testing.

* Fix the regression tests for circonus_trigger now that absent has a 70s min

* Fix up the `tcp` check to require a `host` attribute.

Fix tests.  It's clear I didn't run these before committing/pushing the
`tcp` check last time.

* Fix `circonus_check` for `cloudwatch` checks.

* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.

grep(1)ability of code++

* Slack buttons as an integer are string encoded.

* Fix updates for `circonus_contact`.

* Fix the out parameters for contact groups.

* Move to using `_CastSchemaToTF()` where appropriate.

* Fix circonus_contact_group.  Updates work as expected now.

* Use `_StateSet()` in place of `d.Set()` everywhere.

* Make a quick pass over the collector datasource to modernize its style

* Quick pass for items identified by `golint`.

* Fix up collectors

* Fix the `json` check type.

Reconcile possible sources of drift.  Update now works as expected.

* Normalize trigger durations to seconds.

* Improve the robustness of the state handling for the `circonus_contact_group` resource.

* I'm torn on this, but sort the contact groups in the notify list.

This does mean that if the first contact group in the list has a higher
lexical sort order the plan won't converge until the offending resource
is tainted and recreated.  But there's also some sorting happening
elsewhere, so.... sort and taint for now and this will need to be
revisited in the future.

* Add support for the `httptrap` check type.

* Remove empty units from the state file.

* Metric clusters can return a 404.  Detect this accordingly in its
respective Exists handler.

* Add a `circonus_graph` resource.

* Fix a handful of bugs in the graph provider.

* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.

* Objects that have been deleted via the UI return a 404. Handle in Exists().

* Teach `circonus_graph`'s Stack set to accept nil values.

* Set `ForceNew: true` for a graph's name.

* Chase various API fixes required to make `circonus_graph` work as expected.

* Fix up the handling of sub-1 zoom resolutions for graphs.

* Add the `check_by_collector` out parameter to the `circonus_check` resource.

* Improve validation of line vs area graphs.  Fix graph_style.

* Fix up the `logarithmic` graph axis option.

* Resolve various trivial `go vet` issues.

* Add a stream_group out parameter.

* Remove incorrectly applied `Optional` attributes to the `circonus_account` resource.

* Remove various `Optional` attributes from the `circonus_collector` data source.

* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.

* Sync up with upstream vendor fixes for circonus_graph.

* Update the checksum value for the http check.

* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.

* Clean up tests to use a generic terraform regression testing account.

* Add support for MySQL to the `circonus_check` resource.

* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.

* Remove stale comment in types.

* Move `ResourceData`'s `SetId()` calls to be first in the
list so that no resources are lost in the event of a `panic()`.

* Remove `stateSet` from the `circonus_trigger` resource.

* Remove `stateSet` from the `circonus_stream_group` resource.

* Remove `schemaSet` from the `circonus_graph` resource.

* Remove `stateSet` from the `circonus_contact` resource.

* Remove `stateSet` from the `circonus_metric` resource.

* Remove `stateSet` from the `circonus_account` data source.

* Remove `stateSet` from the `circonus_collector` data source.

* Remove stray `stateSet` call from the `circonus_contact` resource.

This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.

* Remove `stateSet` from the `circonus_check` resource.

* Remove the `stateSet` helper function.

All call sites have been converted to return errors vs `panic()`'ing at
runtime.

* Remove a pile of unused functions and type definitions.

* Remove the last of the `attrReader` interface.

* Remove an unused `Sprintf` call.

* Update `circonus-gometrics` and remove unused files.

* Document what `convertToHelperSchema()` does.

Rename `castSchemaToTF` to `convertToHelperSchema`.

Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.

* Move descriptions into their respective source files.

* Remove all instances of `panic()`.

In the case of software bugs, log an error.  Never `panic()` and always
return a value.

* Rename `stream_group` to `metric_cluster`.

* Rename triggers to rule sets

* Rename `stream` to `metric`.

* Chase the `stream` -> `metric` change into the docs.

* Remove some unused test functions.

* Add the now required `color` attribute for graphing a `metric_cluster`.

* Add a missing description to silence a warning.

* Add `id` as a selector for the account data source.

* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.

This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.

* Rename various values to match the Circonus docs.

* s/alarm/alert/g

* Ensure ruleset criteria can not be empty.
2017-03-10 14:19:17 -06:00
James Bardin 200d5787ca Merge pull request #12433 from hashicorp/jbardin/extra-args
missing args assignment after parsing flags
2017-03-09 11:03:56 -05:00
Clint 5d894e4ffd Fix up command and some go fmt issues (#12509) 2017-03-07 16:03:45 -06:00
Paul Stack b57e0bee2a provider/datadog: Update to datadog_monitor still used d.GetOk (#12497)
Fixes: #12494

The Create was changed to use the default and not d.GetOk; the update
wasn't, which was causing issues when trying to update to a false value

```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 16:20:54 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v  -timeout 120m
=== RUN   TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (4.77s)
=== RUN   TestDatadogUser_import
--- PASS: TestDatadogUser_import (6.23s)
=== RUN   TestProvider
--- PASS: TestProvider (0.00s)
=== RUN   TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN   TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (3.83s)
=== RUN   TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (4.92s)
=== RUN   TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (5.88s)
=== RUN   TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.23s)
=== RUN   TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.73s)
=== RUN   TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.86s)
=== RUN   TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
=== RUN   TestAccDatadogUser_Updated
--- PASS: TestAccDatadogUser_Updated (6.05s)
PASS
ok  	github.com/hashicorp/terraform/builtin/providers/datadog	49.506s
```
2017-03-07 16:36:37 +02:00
James Bardin e58a02405e missing args assignment after parsing flags
`env list` was missing the args re-assignment after parsing the flags.
This is only a problem if the variables are automatically populated
as arguments from a tfvars file.
2017-03-03 18:19:56 -05:00
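The missing piece is the standard pattern in these commands: after parsing, the remaining positional arguments must be re-read from the flag set. A sketch using the stdlib flag package (the real commands use Meta's flag set; the flag names here are illustrative):

```go
package main

import (
	"flag"
	"fmt"
)

func run(args []string) error {
	var noColor bool
	cmdFlags := flag.NewFlagSet("env list", flag.ContinueOnError)
	cmdFlags.BoolVar(&noColor, "no-color", false, "disable color output")
	if err := cmdFlags.Parse(args); err != nil {
		return err
	}

	// The fix: without this reassignment, already-parsed flag values stay
	// in args and get misinterpreted as positional arguments.
	args = cmdFlags.Args()

	fmt.Printf("positional args: %q\n", args)
	return nil
}

func main() {
	run([]string{"-no-color", "extra"})
}
```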
Mitchell Hashimoto 03493f7d46
command: validate backend config
The validation itself was added a couple weeks ago but I forgot to
actually call it. :sad:
2017-03-02 14:07:49 -08:00
Mitchell Hashimoto d1b930d5b5
command: remove unused test 2017-03-02 11:21:48 -08:00
Mitchell Hashimoto 0224a12a00
command: fix go vet 2017-03-02 11:19:54 -08:00
Mitchell Hashimoto 5086f9f568
command: remove log 2017-03-02 11:15:38 -08:00