The `-reconfigure` flag forces init to ignore any saved backend state.
This is useful when a user doesn't want any backend migration to
happen, or when the saved configuration can't be loaded at all for some
reason.
This commit adds the ability to provision files locally.
This is useful for cases where Terraform generates assets
such as TLS certificates or templated documents that need
to be saved locally.
- While output variables can be used to return values to
the user, they are not well suited to large content or to
cases where many values are generated, nor is it practical
for operators to manually save them to disk.
- While `local-exec` could be used with an `echo`, this
provider works across platforms and does not require any
convoluted escaping.
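A minimal sketch of the create/delete steps for such a resource, using the helper/schema API; the resource and attribute names (`local_file`, `content`, `filename`) and the body are illustrative, not the exact implementation:

```
package local

import (
	"crypto/sha1"
	"encoding/hex"
	"io/ioutil"
	"os"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceLocalFile() *schema.Resource {
	return &schema.Resource{
		Create: resourceLocalFileCreate,
		Read:   schema.Noop,
		Delete: resourceLocalFileDelete,

		Schema: map[string]*schema.Schema{
			// Both attributes force a new resource on change (sketch).
			"content":  {Type: schema.TypeString, Required: true, ForceNew: true},
			"filename": {Type: schema.TypeString, Required: true, ForceNew: true},
		},
	}
}

func resourceLocalFileCreate(d *schema.ResourceData, _ interface{}) error {
	content := d.Get("content").(string)
	filename := d.Get("filename").(string)

	// Write the rendered content to the local path.
	if err := ioutil.WriteFile(filename, []byte(content), 0644); err != nil {
		return err
	}

	// Use a content hash as the resource ID.
	sum := sha1.Sum([]byte(content))
	d.SetId(hex.EncodeToString(sum[:]))
	return nil
}

func resourceLocalFileDelete(d *schema.ResourceData, _ interface{}) error {
	return os.Remove(d.Get("filename").(string))
}
```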
A couple of commits got rebased together here; it's easier to
enumerate them in a single commit.
Skip copying of states during migration if they are the same state. This
can happen when trying to reconfigure a backend's options, or if the
state was manually transferred. This can fail unexpectedly with locking
enabled.
Honor the `-input` flag for all confirmations (the new test hit some
more). Also unify where we reference the Meta.forceInitCopy and transfer
the value to the existing backendMigrateOpts.force field.
Add the -lock-timeout flag to the appropriate commands.
Add the -lock flag to `init` and `import` which were missing it.
Set both stateLock and stateLockTimeout in Meta.flagsSet, and remove the
extra references for clarity.
Add fields required to create an appropriate context for all calls to
clistate.Lock.
Add missing checks for Meta.stateLock, where we would attempt to lock,
even if locking should be skipped.
- Have the ui Lock helper use state.LockWithContext.
- Rename the message package to clistate, since that's how it's imported
everywhere.
- Use a more idiomatic placement of the Context in the LockWithContext
args.
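The resulting helper has roughly this shape (a self-contained sketch, not the exact `state` package code; the retry/backoff details are illustrative):

```
package lockexample

import (
	"context"
	"time"
)

// Locker mirrors the relevant slice of the state.Locker interface.
type Locker interface {
	Lock(info *LockInfo) (string, error)
}

// LockInfo is trimmed to what this sketch needs.
type LockInfo struct {
	Operation string
	Info      string
}

// LockWithContext retries a failed lock until the context is done,
// with the Context placed first in the args as described above.
func LockWithContext(ctx context.Context, s Locker, info *LockInfo) (string, error) {
	delay := 500 * time.Millisecond
	for {
		id, err := s.Lock(info)
		if err == nil {
			return id, nil
		}
		select {
		case <-ctx.Done():
			// Give up and surface the last lock error.
			return "", err
		case <-time.After(delay):
			// Back off a little before retrying, capped.
			if delay < 8*time.Second {
				delay *= 2
			}
		}
	}
}
```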
Don't erase local state during backend migration if the new and old
paths are the same. Skipping the confirmation and copy are handled in
another patch, but the local state was always erased by default, even
when it was our new state.
It's possible to not change the backend config, but require updating the
stored backend state by moving init options from the config file to the
`-backend-config` flag. If the config is the same, but the hash doesn't
match, update the stored state.
This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config file against that of a
backend read from a state. This prevents init from reinitializing when
the `-backend-config` options match the existing state.
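The idea reduces to deriving a deterministic hash from the backend type plus its raw config, so the two sources can be compared; this is an illustrative sketch, not the actual method:

```
package backendstate

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Rehash returns a deterministic hash of a backend's type and raw
// config so a backend loaded from state can be compared with one
// parsed from configuration.
func Rehash(typ string, config map[string]interface{}) uint64 {
	keys := make([]string, 0, len(config))
	for k := range config {
		keys = append(keys, k)
	}
	sort.Strings(keys) // hash in a stable key order

	h := fnv.New64a()
	fmt.Fprintf(h, "%s;", typ)
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%v;", k, config[k])
	}
	return h.Sum64()
}
```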
The variable validator assumes that any AST node it gets from an
interpolation walk is an indicator of an interpolation. Unfortunately,
back in f223be15 we changed the interpolation walker to emit a LiteralNode
as a way to signal that the result is a literal but not identical to the
input due to escapes.
The existence of this issue suggests a bit of a design smell in that the
interpolation walker interface at first glance appears to skip over all
literals, but it actually emits them in this one situation. In the long
run we should perhaps think about whether the abstraction is right here,
but this is a shallow, tactical change that fixes#13001.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
Environment names can be used in a number of contexts and should be
properly escaped for safety. Since most state names are stored in path
structures, and often in a URL, use `url.PathEscape` to check for
disallowed characters.
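The check itself is tiny: a name is valid only if path-escaping it is a no-op. A minimal sketch:

```
package main

import (
	"fmt"
	"net/url"
)

// validEnvName reports whether the name survives URL path escaping
// unchanged, i.e. contains no characters needing %XX escapes.
func validEnvName(name string) bool {
	return name == url.PathEscape(name)
}

func main() {
	fmt.Println(validEnvName("staging"))  // true
	fmt.Println(validEnvName("my env/1")) // false: space and slash
}
```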
The plan file should contain all data required to execute the apply
operation. Validation requires interpolation, and the `file()`
interpolation function may fail if the module files are not present.
This is the case currently with how TFE executes plans.
The `-force-copy` option will suppress confirmation for copying state
data.
Modify some tests to use the option, making sure to leave coverage of
the Input code path.
Fixes #12871
We were forgetting to remove the legacy remote state from the actual
state value when migrating. This only causes an issue when saving a
plan, since the plan contains the state itself, and produces an error
when both a backend and legacy state exist.
If saved plans aren't used, this causes no noticeable issue.
Due to buggy upgrades already existing in the wild, I also added code
to clear the remote section if it exists in a standard unchanged
backend.
Fixes #12806
This should've been part of 2c19aa69d9
This is the same issue, just missed a spot. Tests are hard to write
for this since we're removing the legacy backends one by one;
eventually they'll all be gone. A good sign is that we don't import
backendlegacy at all anymore in command/.
This augments backend-config to also accept key=value pairs.
This should make Terraform easier to script rather than having to
generate a JSON file.
You must still declare the backend type in the configuration as a
minimum, for example:
```
terraform { backend "consul" {} }
```
This is required because Terraform needs to be able to detect the
_absence_ of that value in order to unset it, if that is necessary at
some point.
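The pair handling boils down to splitting each `-backend-config` argument on the first `=`; a sketch (helper name is illustrative):

```
package main

import (
	"fmt"
	"strings"
)

// parseBackendConfigPair splits one -backend-config argument of the
// form "key=value" into its parts.
func parseBackendConfigPair(raw string) (key, value string, err error) {
	idx := strings.Index(raw, "=")
	if idx == -1 {
		return "", "", fmt.Errorf("expected key=value, got %q", raw)
	}
	return raw[:idx], raw[idx+1:], nil
}

func main() {
	k, v, _ := parseBackendConfigPair("address=demo.consul.io")
	fmt.Println(k, v) // address demo.consul.io
}
```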
Plans were properly encoding backend configuration, but the apply was
reading it from the wrong field. :( This meant that every apply from a
plan file was applied locally, even when a backend was configured.
This needs to get released ASAP.
When migrating from a multi-state backend to a single-state backend, we
have to ensure that our locally configured environment is changed back
to "default", or we won't be able to access the new backend.
This allows a refresh on a non-existent or empty state file. We changed
this in 0.9.0 to error, which seemed reasonable, but it turns out this
complicates automation that runs refresh, since it now needs to
determine whether the state file is empty before running.
It's easier to just revert this to a warning with exit code zero.
The reason this changed is that in 0.8.x and earlier, the output
would simply be empty with exit code zero, which seemed odd.
Fixes #12749
If we merge in an extra partial config, we need to recompute the hash
to compare with the old value in order to detect the change.
This hash must NOT be stored; it is only used as a temporary value. We
want to keep the original hash in the state so that we don't detect a
change from the config alone (since the file config will always be
partial).
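In outline, building on the `Rehash` sketch above (names still illustrative): hash the merged config only to detect the change, and never persist that temporary value:

```
// detectChange reports whether the merged (partial file config plus
// extra -backend-config values) differs from what the state recorded.
// The merged hash is a temporary: only the original config hash is
// ever stored, since the file config will always be partial.
func detectChange(storedHash uint64, typ string, merged map[string]interface{}) bool {
	return Rehash(typ, merged) != storedHash
}
```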
* Begin stubbing out the Circonus provider.
* Remove all references to `reverse:secret_key`.
This value is dynamically set by the service and unused by Terraform.
* Update the `circonus_check` resource.
Still a WIP.
* Add docs for the `circonus_check` resource.
Commit miss, this should have been included in the last commit.
* "Fix" serializing check tags
I still need to figure out how I can make them order agnostic w/o using
a TypeSet. I'm worried that's what I'm going to have to do.
* Spike a quick circonus_broker data source.
* Convert tags to a Set so the order does not matter.
* Add a `circonus_account` data source.
* Correctly spell account.
Pointed out by: @postwait
* Add the `circonus_contact_group` resource.
* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.
* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.
Change made with the consent of Circonus to reduce confusion (@postwait, @maier, and several others).
* Use upstream constants where available.
* Import the latest circonus-gometrics.
* Move to using a Set of collectors vs a list attached to a single attribute.
* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.
* Inject a tag automatically. Update gometrics.
* Checkpoint `circonus_metric` resource.
* Enable provider-level auto-tagging. This is disabled by default.
* Rearrange metric. This is an experimental "style" of a provider. We'll see.
That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet? That. That exact
feeling of semi-confidence while being alone in the wilderness. Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.
We'll know in another resource or two if this was a horrible mistake or
not.
* Begin moving `resource_circonus_check` over to the new world order/structure:
Much of this is WIP and incomplete, but here is the new supported
structure:
```
variable "used_metric_name" {
  default = "_usage`0`_used"
}

resource "circonus_check" "usage" {
  # collectors = ["${var.collectors}"]
  collector {
    id = "${var.collectors[0]}"
  }

  name  = "${var.check_name}"
  notes = "${var.notes}"

  json {
    url = "https://${var.target}/account/current"

    http_headers = {
      "Accept"                = "application/json"
      "X-Circonus-App-Name"   = "TerraformCheck"
      "X-Circonus-Auth-Token" = "${var.api_token}"
    }
  }

  stream {
    name = "${circonus_metric.used.name}"
    tags = "${circonus_metric.used.tags}"
    type = "${circonus_metric.used.type}"
  }

  tags = {
    source = "circonus"
  }
}

resource "circonus_metric" "used" {
  name = "${var.used_metric_name}"

  tags = {
    source = "circonus"
  }

  type = "numeric"
}
```
* Document the `circonus_metric` resource.
* Updated `circonus_check` docs.
* If a port was present, automatically set it in the Config.
* Alpha sort the check parameters now that they've been renamed.
* Fix a handful of panics as a result of the schema changing.
* Move back to a `TypeSet` for tags. After a stint with `TypeMap`, move
back to `TypeSet`.
A set of strings seems to match the API the best. The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things. For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality, because assigning two
values to the same category is common. And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back). I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress. In this case, simple is good. (A schema sketch follows
this commit list.)
While here, bring some cleanups to _Metric since that was my initial
testing target.
* Rename `providerConfig` to `_ProviderConfig`
* Checkpoint the `json` check type.
* Fix a few residual issues re: missing descriptions.
* Rename `validateRegexp` to `_ValidateRegexp`
* Use tags as real sets, not just a slice of strings.
* Move the DiffSuppressFunc for tags down to the Elem.
* Fix up unit tests to chase the updated, default hasher function being used.
* Remove `Computed` attribute from `TypeSet` objects.
This fixes a pile of issues re: update that I was having.
* Rename functions.
`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`
* Various small cleanups and comments rolled into a single commit.
* Add a `postgresql` check type for the `circonus_check` resource.
* Rename various validator functions to be _CapitalCase vs capitalCase.
* Err... finish the validator renames.
* Add `GetFloat64()` support.
* Add `icmp_ping` check type support.
* Catch up to the _API*Attr renames.
Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.
* Clarify when the `target` attribute is required for the `postgresql`
check type.
* Correctly pull the metric ID attribute from the right location.
* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")
* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.
* Add support for the `http` check type.
* `s/SSL/TLS/g`
* Add support for `tcp` check types.
* Enumerate the available metrics that are supported for each check type.
* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.
* Add a `circonus_trigger` resource (a.k.a. a Circonus Ruleset).
* Rename a handful of functions so the function name makes clear the
direction in which information flows through the provider.
TL;DR: Replace `parse` and `read` with "foo to bar"-style names.
* Fix the attribute name used in a validator. Absent != After.
* Set the minimum `absent` predicate to 70s per testing.
* Fix the regression tests for circonus_trigger now that absent has a 70s min
* Fix up the `tcp` check to require a `host` attribute.
Fix tests. It's clear I didn't run these before committing/pushing the
`tcp` check last time.
* Fix `circonus_check` for `cloudwatch` checks.
* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.
grep(1)ability of code++
* Slack buttons as an integer are string encoded.
* Fix updates for `circonus_contact`.
* Fix the out parameters for contact groups.
* Move to using `_CastSchemaToTF()` where appropriate.
* Fix circonus_contact_group. Updates work as expected now.
* Use `_StateSet()` in place of `d.Set()` everywhere.
* Make a quick pass over the collector datasource to modernize its style
* Quick pass for items identified by `golint`.
* Fix up collectors
* Fix the `json` check type.
Reconcile possible sources of drift. Update now works as expected.
* Normalize trigger durations to seconds.
* Improve the robustness of the state handling for the `circonus_contact_group` resource.
* I'm torn on this, but sort the contact groups in the notify list.
This does mean that if the first contact group in the list has a higher
lexical sort order, the plan won't converge until the offending resource
is tainted and recreated. But there's also some sorting happening
elsewhere, so... sort and taint for now; this will need to be revisited
in the future.
* Add support for the `httptrap` check type.
* Remove empty units from the state file.
* Metric clusters can return a 404. Detect this accordingly in its
respective Exists handler.
* Add a `circonus_graph` resource.
* Fix a handful of bugs in the graph provider.
* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.
* Objects that have been deleted via the UI return a 404. Handle in Exists().
* Teach `circonus_graph`'s Stack set to accept nil values.
* Set `ForceNew: true` for a graph's name.
* Chase various API fixes required to make `circonus_graph` work as expected.
* Fix up the handling of sub-1 zoom resolutions for graphs.
* Add the `check_by_collector` out parameter to the `circonus_check` resource.
* Improve validation of line vs area graphs. Fix graph_style.
* Fix up the `logarithmic` graph axis option.
* Resolve various trivial `go vet` issues.
* Add a stream_group out parameter.
* Remove incorrectly applied `Optional` attributes to the `circonus_account` resource.
* Remove various `Optional` attributes from the `circonus_collector` data source.
* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.
* Sync up with upstream vendor fixes for circonus_graph.
* Update the checksum value for the http check.
* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.
* Clean up tests to use a generic terraform regression testing account.
* Add support for MySQL in the `circonus_check` resource.
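For reference, the tag schema shape those commits converge on looks roughly like this (attribute details illustrative): a `TypeSet` of `category:value` strings with the `DiffSuppressFunc` attached to the `Elem`:

```
package circonus

import (
	"strings"

	"github.com/hashicorp/terraform/helper/schema"
)

// tagsSchema sketches the final shape: a set of "category:value"
// strings, normalized per element rather than per attribute.
func tagsSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeSet,
		Optional: true,
		Elem: &schema.Schema{
			Type: schema.TypeString,
			// On the Elem, so each tag is compared individually.
			DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
				return strings.EqualFold(old, new)
			},
		},
	}
}
```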
* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.
* Remove stale comment in types.
* Move the `ResourceData` `SetId()` calls to be first in the
list so that no resources are lost in the event of a `panic()`.
* Remove `stateSet` from the `circonus_trigger` resource.
* Remove `stateSet` from the `circonus_stream_group` resource.
* Remove `schemaSet` from the `circonus_graph` resource.
* Remove `stateSet` from the `circonus_contact` resource.
* Remove `stateSet` from the `circonus_metric` resource.
* Remove `stateSet` from the `circonus_account` data source.
* Remove `stateSet` from the `circonus_collector` data source.
* Remove stray `stateSet` call from the `circonus_contact` resource.
This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.
* Remove `stateSet` from the `circonus_check` resource.
* Remove the `stateSet` helper function.
All call sites have been converted to return errors vs `panic()`'ing at
runtime.
* Remove a pile of unused functions and type definitions.
* Remove the last of the `attrReader` interface.
* Remove an unused `Sprintf` call.
* Update `circonus-gometrics` and remove unused files.
* Document what `convertToHelperSchema()` does.
Rename `castSchemaToTF` to `convertToHelperSchema`.
Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.
* Move descriptions into their respective source files.
* Remove all instances of `panic()`.
In the case of software bugs, log an error. Never `panic()` and always
return a value.
* Rename `stream_group` to `metric_cluster`.
* Rename triggers to rule sets.
* Rename `stream` to `metric`.
* Chase the `stream` -> `metric` change into the docs.
* Remove some unused test functions.
* Add the now required `color` attribute for graphing a `metric_cluster`.
* Add a missing description to silence a warning.
* Add `id` as a selector for the account data source.
* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.
This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.
* Rename various values to match the Circonus docs.
* s/alarm/alert/g
* Ensure ruleset criteria cannot be empty.
Fixes: #12494
The Create was changed to use the default value rather than `d.GetOk`;
the Update was not, which caused issues when trying to update a field
to a false value. (A sketch follows the test log below.)
```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 16:20:54 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v -timeout 120m
=== RUN TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (4.77s)
=== RUN TestDatadogUser_import
--- PASS: TestDatadogUser_import (6.23s)
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (3.83s)
=== RUN TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (4.92s)
=== RUN TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (5.88s)
=== RUN TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.23s)
=== RUN TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.73s)
=== RUN TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.86s)
=== RUN TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
=== RUN TestAccDatadogUser_Updated
--- PASS: TestAccDatadogUser_Updated (6.05s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/datadog 49.506s
```
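The underlying gotcha, sketched (field name illustrative): `GetOk` reports ok=false for a zero value, so a bool update to `false` looked unset, while `Get` reads the value (or its schema default) unconditionally:

```
package datadog

import "github.com/hashicorp/terraform/helper/schema"

type monitor struct{ NotifyNoData bool }

// Buggy update path: ok is false when the bool is false, so an
// update back to false was silently skipped.
func updateBuggy(d *schema.ResourceData, m *monitor) {
	if v, ok := d.GetOk("notify_no_data"); ok {
		m.NotifyNoData = v.(bool)
	}
}

// Fixed path: read the value unconditionally, as Create already did.
func updateFixed(d *schema.ResourceData, m *monitor) {
	m.NotifyNoData = d.Get("notify_no_data").(bool)
}
```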
`env list` was missing the args re-assignment after parsing the flags.
This is only a problem when variables are automatically populated as
arguments from a tfvars file.
Add Env and SetEnv methods to command.Meta to retrieve the current
environment name inside any command.
Make sure all calls to Backend.State contain an environment name, and
make the package compile against the update backend package.
Destroying a Terraform state can't always produce an empty state, as
outputs and the root module may remain. Use HasResources to warn about
deleting an environment that still has resources.
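The guard amounts to something like this sketch (interface trimmed; flag name illustrative):

```
package command

import "fmt"

// stateWithResources is the slice of terraform.State this sketch needs.
type stateWithResources interface {
	HasResources() bool
}

// checkEnvEmpty refuses to delete an environment whose state still
// tracks resources, since destroy can leave outputs and a root module
// behind without the state being truly empty.
func checkEnvEmpty(name string, s stateWithResources, force bool) error {
	if s != nil && s.HasResources() && !force {
		return fmt.Errorf("environment %q is not empty; destroy its resources first or use -force", name)
	}
	return nil
}
```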
In order to operate in parity with other commands, the env command
should take a path argument to locate the configuration.
This however introduces the possibility of a name conflict between a
path and a subcommand, or of printing an incorrect current environment
for the bare `env` command. In favor of simplicity, this removes the
current env output and only prints usage when no subcommand is given.
I made this interface way back with the original backend work and I
guess I forgot to hook it up! This is becoming an issue as I'm working
on our 2nd enhanced backend that requires this information and I
realized it was hardcoded before.
This properly uses the CLIInit interface, allowing any backend to gain
access to this data.
Module resources were being sorted lexically by name by the state
filter. If there are 10 or more resources, lexical order won't match
index order, and resources would end up with different indexes in their
new location. Sort the FilterResults numerically by index when the
names match.
Also clean up the module String output for visual inspection by sorting
resource name parts numerically when they are integer values.
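The comparison in question, sketched: when two names differ only in a trailing integer index, order them numerically so `web.9` sorts before `web.10`:

```
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// lessNumeric orders names with matching prefixes by their trailing
// numeric index; otherwise it falls back to lexical order.
func lessNumeric(a, b string) bool {
	pa, ia := splitIndex(a)
	pb, ib := splitIndex(b)
	if pa == pb && ia >= 0 && ib >= 0 {
		return ia < ib
	}
	return a < b
}

// splitIndex separates a trailing ".N"; the index is -1 if absent.
func splitIndex(s string) (string, int) {
	for i := len(s) - 1; i >= 0; i-- {
		if s[i] == '.' {
			if n, err := strconv.Atoi(s[i+1:]); err == nil {
				return s[:i], n
			}
			break
		}
	}
	return s, -1
}

func main() {
	names := []string{"aws_instance.web.10", "aws_instance.web.2", "aws_instance.web.1"}
	sort.Slice(names, func(i, j int) bool { return lessNumeric(names[i], names[j]) })
	fmt.Println(names) // [...web.1 ...web.2 ...web.10]
}
```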
* providers/spotinst: Add support for Spotinst resources
* providers/spotinst: Fix merge conflict - layouts/docs.erb
* docs/providers/spotinst: Fix the resource description field
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Mark the device_index as a required field
* providers/spotinst: Change the associate_public_ip_address field to TypeBool
* docs/providers/spotinst: Update the description of the adjustment field
* providers/spotinst: Rename IamRole to IamInstanceProfile to make it more compatible with the AWS provider
* docs/providers/spotinst: Rename iam_role to iam_instance_profile
* providers/spotinst: Deprecate the iam_role attribute
* providers/spotinst: Fix a misspelled var (IamRole)
* providers/spotinst: Fix possible null pointer exception related to "iam_instance_profile"
* docs/providers/spotinst: Add "load_balancer_names" missing description
* providers/spotinst: New resource "spotinst_subscription" added
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_aws_group"
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_subscription"
* providers/spotinst: Mark spotinst_subscription as deleted in destroy
* providers/spotinst: Add support for custom event format in spotinst_subscription
* providers/spotinst: Disable the destroy step of spotinst_subscription
* providers/spotinst: Add support for update subscriptions
* providers/spotinst: Merge fixed conflict - layouts/docs.erb
* providers/spotinst: Vendor dependencies
* providers/spotinst: Return a detailed error message
* provider/spotinst: Update the plugin list
* providers/spotinst: Vendor dependencies using govendor
* providers/spotinst: New resource "spotinst_healthcheck" added
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Comment out unnecessary log.Printf
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Gofmt fixes
* providers/spotinst: Use multiple functions to expand each block
* providers/spotinst: Allow ondemand_count to be zero
* providers/spotinst: Change security_group_ids from TypeSet to TypeList
* providers/spotinst: Remove unnecessary `ForceNew` fields
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Add support for capacity unit
* providers/spotinst: Add support for EBS volume pool
* providers/spotinst: Delete health check
* providers/spotinst: Allow setting multiple availability zones
* providers/spotinst: Gofmt
* providers/spotinst: Omit empty strings from the load_balancer_names field
* providers/spotinst: Update the Spotinst SDK to v1.1.9
* providers/spotinst: Add support for new strategy parameters
* providers/spotinst: Update the Spotinst SDK to v1.2.0
* providers/spotinst: Add support for Kubernetes integration
* providers/spotinst: Fix merge conflict - vendor/vendor.json
* providers/spotinst: Update the Spotinst SDK to v1.2.1
* providers/spotinst: Add support for Application Load Balancers
* providers/spotinst: Do not allow setting ondemand_count to 0
* providers/spotinst: Update the Spotinst SDK to v1.2.2
* providers/spotinst: Add support for scaling policy operators
* providers/spotinst: Add dimensions to spotinst_aws_group tests
* providers/spotinst: Allow both ARN and name for IAM instance profiles
* providers/spotinst: Allow ondemand_count=0
* providers/spotinst: Split out the set funcs into flatten style funcs
* providers/spotinst: Update the Spotinst SDK to v1.2.3
* providers/spotinst: Add support for EBS optimized flag
* providers/spotinst: Update the Spotinst SDK to v2.0.0
* providers/spotinst: Use stringutil.Stringify for debugging
* providers/spotinst: Update the Spotinst SDK to v2.0.1
* providers/spotinst: Key pair is now optional
* providers/spotinst: Make sure we do not nullify signals on strategy update
* providers/spotinst: Hash both Strategy and EBS Block Device
* providers/spotinst: Hash AWS load balancer
* providers/spotinst: Update the Spotinst SDK to v2.0.2
* providers/spotinst: Verify namespace exists before appending policy
* providers/spotinst: The image ID now lives in a separate block, so changes can be ignored on the image ID alone. This change is backwards compatible.
* providers/spotinst: User data is now decoded when returned from the Spotinst API, so Terraform compares the two states properly and does not update without cause.
Fixes #12154
The `-backup` flag for the `state *` CLI commands previously had some
really bizarre behavior: it would change the _destination_ state and
actually not create any additional backup at all (the original state
was unchanged and the normal timestamped backups were still written).
Really weird.
This PR makes the -backup flag work as you'd expect with one caveat:
we'll _still_ create the timestamped backup file. The timestamped backup
file helps make sure that you always get a backup history when using
these commands. We don't want to make it easy for you to overwrite a
state with the `-backup` flag.
We need to initialize the backend even if the config has no backend set.
This allows `init` to work when unsetting a previously set backend.
Without this, there was no way to unset a backend.
Give LockInfo a Marshal method for easy serialization, and a String
method for more readable output.
Have the state.Locker implementations use LockError when possible to
return both the LockInfo and an error.
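The shape described, sketched with trimmed fields (the real types carry more metadata):

```
package state

import (
	"encoding/json"
	"fmt"
	"time"
)

// LockInfo is trimmed to what this sketch needs.
type LockInfo struct {
	ID        string
	Operation string
	Who       string
	Created   time.Time
}

// Marshal serializes the lock metadata for storage with the state.
func (l *LockInfo) Marshal() ([]byte, error) {
	return json.Marshal(l)
}

// String renders the lock metadata for readable CLI output.
func (l *LockInfo) String() string {
	return fmt.Sprintf("Lock Info:\n  ID:        %s\n  Operation: %s\n  Who:       %s\n  Created:   %s\n",
		l.ID, l.Operation, l.Who, l.Created)
}

// LockError pairs the underlying error with the holder's LockInfo so
// callers can report who currently owns the lock.
type LockError struct {
	Info *LockInfo
	Err  error
}

func (e *LockError) Error() string {
	if e.Info != nil {
		return fmt.Sprintf("%s\n%s", e.Err, e.Info)
	}
	return e.Err.Error()
}
```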
During backend initialization, especially during a migration, there is a
chance that an existing state could be overwritten.
Attempt to get a lock when writing the new state. It would be nice to
always hold a lock when reading the states, but the recursive structure
of the Meta.Backend config functions makes that quite complex.
Remove the lock command for now to avoid confusion about the behavior
of locks. Rename unlock to force-unlock to make it more apparent what
it does.
Add a success message, choosing red because unlocking can be a
dangerous operation.
Add confirmation akin to `destroy`, and a `-force` option for
automation and testing.
The new test pattern is to chdir into a temp location for the test,
but this prevents us from locating the testdata directory in the
source. Add a source path to testLockState so we can find the
statelocker.go source.
Previously, when running a plan with no existing state, the plan would
be written out and then backed up on the next WriteState by another
BackupState instance. Since we now maintain a single State instance
throughout an operation, the backup happens before any state exists, so
no backup file is created.
This is OK, as the backup state the tests were checking for came from
the plan file, which already exists separately from the state.
Terraform can't tell the difference between an empty output and an
undefined output. This is often confusing for folks using interpolation.
As much as it would be great to fix upstream, changing this error
message to be a bit more helpful is a good stop-gap to avoid
frustration.