This e2etest runs an init, plan, apply, destroy sequence against a test
configuration using the real template and null providers downloaded from
the official repository.
This test _does_ trample a bit on the scope of some already-existing
tests, but this is mainly just to check our assumptions about how
Terraform behaves to ensure that we can reach our main conclusion here:
that the main Terraform workflow commands interact correctly with each
other in real use and we can complete the full workflow.
We already have good tests for the business logic around provider
installation, but the existing tests all stub out the main repository
server. This test completes that coverage by verifying that the installer
is able to run against the real repository and install an official release
of the template provider.
This basic test is here primarily because it's one of the few that can
run without reaching out to external services, and so it means our usual
test runs will catch situations where the main executable build is
somehow broken.
The version command itself is not very interesting to test, but it's
convenient in that its behavior is very predictable and self-contained.
Previously we had no automated testing of whether we can produce a
Terraform executable that actually works. Our various functional tests
have good coverage of specific Terraform features and whole operations,
but we lacked end-to-end testing of actual usage of the generated binary,
without any stubbing.
This package is intended as a vehicle for such end-to-end testing. When
run normally under "go test" it will produce a build of the main Terraform
binary and make it available for tests to execute. The harness exposes
a flag for whether tests are allowed to reach out to external network
services, controlled with our standard TF_ACC environment variable, so
that basic local tests can be safely run as part of "make test" while
more elaborate tests can be run easily when desired.
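For illustration, the gating looks roughly like this in a test (a minimal sketch; the helper name canAccessNetwork and the test body are placeholders, not necessarily the real harness code):

```
package e2etest

import (
	"os"
	"testing"
)

// canAccessNetwork reports whether tests may reach external services,
// gated by the standard TF_ACC environment variable.
func canAccessNetwork() bool {
	return os.Getenv("TF_ACC") != ""
}

func TestInitProvidersFromRegistry(t *testing.T) {
	if !canAccessNetwork() {
		t.Skip("network access not allowed; set TF_ACC=1 to enable")
	}
	// ... run the built terraform binary against the real releases server ...
}
```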
It also provides a separate mode of operation where the included script
make-archive.sh can be used to produce a self-contained test archive that
can be copied to another system to run the tests there. This is intended
to allow testing of cross-compiled binaries, by shipping them over to
the target OS and architecture to run without requiring a full Go compiler
installation on the target system.
The goal here is not to test again functionality that's already
well-covered by our existing tests, but rather to test chains of normal
operations against the built binary that are not otherwise tested
together.
The improved err scanner loop in meta causes these to race. There's no
need to write back to the same commands struct, so just use a new
instance in each iteration.
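The pattern in question, reduced to a generic sketch (the struct and field names are illustrative, not the actual commands struct):

```
package main

import (
	"fmt"
	"sync"
)

type commandState struct{ name string }

func main() {
	var wg sync.WaitGroup
	for _, name := range []string{"plan", "apply", "destroy"} {
		cs := &commandState{name: name} // fresh instance per iteration
		wg.Add(1)
		go func() {
			defer wg.Done()
			cs.name += " (ran)" // writes touch only this iteration's instance
			fmt.Println(cs.name)
		}()
	}
	wg.Wait()
}
```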
Meta.process was relying on the system readdir to order the arguments,
but readdir doesn't guarantee any ordering. Read the directory contents
as a whole and sort them in place before adding the tfvars files.
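A sketch of the fix under the simplest reading of the description (the file suffix and variable names are illustrative):

```
package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	dir, err := os.Open(".")
	if err != nil {
		panic(err)
	}
	defer dir.Close()

	// Readdirnames returns entries in platform-dependent directory
	// order; sort in place for a deterministic result.
	names, err := dir.Readdirnames(-1)
	if err != nil {
		panic(err)
	}
	sort.Strings(names)

	for _, name := range names {
		if strings.HasSuffix(name, ".tfvars") {
			fmt.Println("would add:", name)
		}
	}
}
```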
Previously the APIs for state persistence and management had some problematic cases where we depended on hidden mutations of the state structure as side-effects of otherwise-innocent-looking operations, which was a frequent cause of accidental regressions due to faulty assumptions.
This new model attempts to isolate certain state mutations to just within the state managers, and makes the state managers work on separated snapshots of the state rather than on the "live" object to reduce the risk of race conditions.
Due to how the state filter machinery works, passing no arguments is valid
and matches _all_ resources.
It is very unlikely that someone wants to remove everything from state, so
this ends up being a very dangerous default for the "terraform state rm"
command, and surprising for someone who perhaps runs it looking for the
usage information.
So we'll be pragmatic here and reject the no-arguments case for this
command, accepting that it makes the unlikely case of intentionally
deleting all resources harder in order to make it less likely that it
will happen _unintentionally_.
If someone does really want to remove all resources from the state, they
can provide an explicit empty string argument, but this isn't documented
because it's a weird case that doesn't seem worth mentioning.
This fixes #15283.
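A minimal sketch of the guard described above (the function and message wording are hypothetical):

```
package main

import (
	"errors"
	"fmt"
)

// validateRmArgs rejects the zero-argument form, which would otherwise
// match every resource; an explicit empty string still matches all.
func validateRmArgs(args []string) error {
	if len(args) == 0 {
		return errors.New("at least one resource address is required")
	}
	return nil
}

func main() {
	fmt.Println(validateRmArgs(nil))          // error: prompts for usage
	fmt.Println(validateRmArgs([]string{""})) // nil: deliberate match-all
}
```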
The state returned from the testState helper shouldn't rely on any
mutations caused by WriteState. The Init function (which is analogous to
NewState) should set any required fields.
This command serves as an alternative to the human-oriented list of workspaces for scripting use-cases where it's useful to know the _current_ workspace name.
Previously we relied on a constellation of coincidences for everything to
work out correctly with state serials. In particular, callers needed to
be very careful about mutating states (or not) because many different bits
of code shared pointers to the same objects.
Here we move to a model where all of the state managers always use
distinct instances of state, copied when WriteState is called. This means
that they are truly a snapshot of the state as it was at that call, even
if the caller goes on mutating the state that was passed.
We also adjust the handling of serials so that the state managers ignore
any serials in incoming states and instead just treat each Persist as
the next version after what was most recently Refreshed.
(An exception exists for when nothing has been refreshed, e.g. because
we are writing a state to a location for the first time. In that case
we _do_ trust the caller, since the given state is either a new state
or it's a copy of something we're migrating from elsewhere with its
state and lineage intact.)
The intent here is to allow the rest of Terraform to not worry about
serials and state identity, and instead just treat the state as a mutable
structure. We'll just snapshot it occasionally, when WriteState is called,
and deal with serials _only_ at persist time.
This is intended as a more robust version of #15423, which was a quick
hotfix to an issue that resulted from our previous sloppy handling
of state serials but arguably makes the problem worse by depending on
an additional coincidental behavior of the local backend's apply
implementation.
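A compressed sketch of the model described above (all types and fields here are simplified stand-ins for the real state manager API):

```
package statemgr

// State is a stand-in for the real state structure.
type State struct {
	Serial int64
	// ... lineage, modules, outputs ...
}

func (s *State) DeepCopy() *State {
	if s == nil {
		return nil
	}
	c := *s
	return &c
}

type Manager struct {
	readState  *State // what was most recently Refreshed
	writeState *State // snapshot captured by WriteState
}

// WriteState captures a snapshot; later mutations by the caller do not
// affect it.
func (m *Manager) WriteState(s *State) {
	m.writeState = s.DeepCopy()
}

// Persist ignores the snapshot's incoming serial and writes it as the
// next version after whatever was most recently refreshed. If nothing
// has been refreshed (a first write), the caller's serial is trusted.
func (m *Manager) Persist() {
	if m.readState != nil {
		m.writeState.Serial = m.readState.Serial + 1
	}
	// ... write m.writeState to the backing store ...
}
```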
Skips checksum validation if the `TF_SKIP_PROVIDER_VERIFY` environment variable is set. Undocumented variable, as the primary goal is to significantly improve the local provider development workflow.
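A sketch of what such a guard might look like (the function name and surrounding code are assumptions, not the actual implementation):

```
package command

import "os"

// verifyChecksum skips validation outright when TF_SKIP_PROVIDER_VERIFY
// is set, easing the edit-compile-test loop for local provider builds.
func verifyChecksum(pluginPath string, wantSHA256 []byte) error {
	if os.Getenv("TF_SKIP_PROVIDER_VERIFY") != "" {
		return nil // checksum validation deliberately skipped
	}
	// ... hash pluginPath with sha256 and compare against wantSHA256 ...
	return nil
}
```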
We need to release the lock just before deleting the state, in case the backend
can't remove the resource while holding the lock. This is currently true for
Windows local files.
TODO: While there is little safety in locking while deleting the state, it
might be nice to be able to coordinate processes around state deletion, e.g. in
a CI environment. Adding Delete() as a required method of States would allow
the removal of the resource to be delegated from the Backend to the State
itself.
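The ordering being described, as a hedged sketch (the interface is a stand-in; Delete on the state manager is the hypothetical method the TODO proposes):

```
package command

// stateDeleter stands in for whatever can unlock and remove a state.
type stateDeleter interface {
	Unlock() error
	Delete() error
}

// deleteState releases the lock just before deleting, since some
// backends (notably local files on Windows) cannot remove a resource
// they still hold locked.
func deleteState(s stateDeleter) error {
	if err := s.Unlock(); err != nil {
		return err
	}
	return s.Delete()
}
```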
A common reason to want to use `terraform plan` is to have a chance to
review and confirm a plan before running it. If in fact that is the
only reason you are running plan, this new `terraform apply -auto-approve=false`
flag provides an easier alternative to
```
P=$(mktemp -t plan)
terraform refresh
terraform plan -refresh=false -out=$P
terraform apply $P
rm $P
```
The flag defaults to true for now, but in a future version of Terraform it will
default to false.
The import command wasn't loading the full plugin path for discovery.
Run a basic plugin init sequence, and verify we can find a plugin (even
though the plugin is invalid and will fail).
The import command wasn't loading the plugin path at all, relying on the
local directory for binaries.
Load the plugin dir into Meta, and pass in ForceLocal for consistency.
The Backend returned was going to be a Local anyway, so the added check
wasn't ensuring anything.
The "confirm" method was directly checking the meta struct's input field,
but that only represents the -input command line flag, and doesn't
respect the TF_INPUT environment variable.
By calling the Input method instead, we check both.
This fixes #15338.
When using a `state` command, if the `-state` flag is provided we do not
want to modify the Backend state. In this case we should always create a
local state instance.
The backup flag was also being ignored, and some tests that were relying
on that have been fixed.
If we provide a -state flag to a state command, we do not want terraform
to modify the backend state. This test fails since the state specified
in the backend doesn't exist.
Remove "checksum" from the error, and only indicate that the plugin has
changed.
Always show the requested version constraint, even if it's "any", along
with the versions of plugins that were found.
This change makes various minor adjustments to the rendering of plans
in the output of "terraform plan":
- Resources are identified using the standard resource address syntax,
rather than exposing the legacy internal representation used in the
module diff resource keys. This fixes #8713.
- Subjectively, having square brackets in the addresses made it look more
visually "off" when the same name but with different indices were
shown together with differing-length "symbols", so the symbols are now
all padded and right-aligned to three characters for consistent layout
across all operations.
- The -/+ action is now more visually distinct, using several different
colors to help communicate what it will do and including a more obvious
"(new resource required)" marker to help draw attention to this not
being just an update diff. This fixes #15350.
- The resources are now sorted in a manner that sorts index [10] after
index [9], rather than after index [1] as we did before. This makes it
easier to scan the list and avoids the common confusion where it seems
that there are only 10 items when in fact there are 11-20 items with
all the tens hiding further up in the list.
Previously init would crash if given these options:
-backend=false -get-plugins=true
This is because the state is used as a source of provider dependency
information, and we need to instantiate the backend to get the state.
To avoid the crash, we now use the following adjusted behavior:
- if -backend=true, we behave as before
- if -backend=false, we instead try to instantiate the backend the same
way any other command would, without modifying its configuration
- if we're able to instantiate the backend, we use it to fetch state
for dependency resolution purposes
- if the backend is not instantiable then we assume it's not yet
configured and proceed with a nil state, which may cause us to see an
incomplete picture of the dependencies but still allows the install
to succeed. Subsequently running "terraform plan" will not work until
the backend is (re-)initialized, so the incomplete picture of required
plugins is safe.
This takes care of a few dangling cases where we were still stringifying
empty version constraints, which creates confusing error messages because
an empty constraint stringifies as the empty string.
For the "no suitable versions available" message, we fall back on the
"provider not found" message if no versions were found even though it's
unconstrained. This should only happen in an edge case where the
provider's index page exists on the releases server but no versions are
yet present.
For the message about plugin protocol versions, this again is an edge
case since with no constraints this should happen only if we release
an incompatible Terraform version but don't release a new version of the
plugin that's compatible. In this case we just show the constraint as
"(any version)" to make sure we always show _something_.
Previously we only did this when _upgrading_, but that's unnecessarily
specific and confusing since e.g. plugins can get upgraded implicitly by
constraint changes, which would not then trigger the purge process.
Instead, we'll assume that the user is able to easily re-download plugins
that were purged here, or if they need more specific guarantees they will
manually manage a plugin directory and disable the auto-install behavior
using `-plugin-dir`.
Now we are able to recognize and handle a few special error situations
from plugin installation with more verbose error messages that give the
user better feedback on how to proceed.
The -plugin-dir option lets the user specify custom search paths for
plugins. This overrides all other plugin search paths, and prevents the
auto-installation of plugins.
We also make sure that the availability of plugins is always checked
during init, even if -get-plugins=false or -plugin-dir is set.
init should always write internal data to the current directory, even
when a path is provided. The inherited behavior no longer applies to the
new use of init.
Now that init can take a directory for configuration, the old behavior
of writing the .terraform data directory into the target path no longer
makes sense. Don't change the dataDir field during init, and write to
the default location.
Clean up all references to Meta.dataDir, and only use the getter method
in case we choose to dynamically override this at some point.
Now when -upgrade is provided to "terraform init" (and plugin installation
isn't disabled) it will:
- ignore the contents of the auto-install plugin directory when deciding
what is "available", thus causing anything there to be reinstalled,
possibly at a newer version.
- if installation completes successfully, purge from the auto-install
plugin directory any plugin-looking files that aren't in the set of
chosen plugins.
As before, plugins outside of the auto-install directory are able to
take precedence over the auto-install ones, and these will never be
upgraded nor purged.
The thinking here is that the auto-install directory is an implementation
detail directly managed by Terraform, and so it's Terraform's
responsibility to automatically keep it clean as plugins are upgraded.
We don't yet have the -plugin-dir option implemented, but once it is it
should circumvent all of this behavior and just expect providers to be
already available in the given directory, meaning that nothing will be
auto-installed, -upgraded or -purged.
Previously we had a "getProvider" function type used to implement plugin
fetching. Here we replace that with an interface type, initially with
just a "Get" function.
For now this just simplifies the interface by allowing the target
directory and protocol version to be members of the struct rather than
passed as arguments.
A later change will extend this interface to also include a method to
purge unused plugins, so that upgrading frequently doesn't leave behind
a trail of unused executable files.
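The shape of the interface, sketched from the description above (the type names here are stand-ins, not the real discovery package API):

```
package discovery

// Stand-in types for illustration only.
type Constraints struct{ Raw string }
type PluginMeta struct{ Name, Version, Path string }

// Installer replaces the old getProvider function type. The target
// directory and protocol version become fields of the concrete
// implementation rather than arguments, and a Purge method is planned
// as a later extension.
type Installer interface {
	Get(name string, req Constraints) (PluginMeta, error)
}
```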
As of this commit this just upgrades modules, but this option will also
later upgrade plugins and indeed anything else that's being downloaded and
installed as part of the init.
Since there is little left that isn't core, remove the distinction for
now to reduce confusion; a "core" binary will mostly work except for
provisioners.
We're shifting terminology from "environment" to "workspace". This takes
care of some of the main internal API surface that was using the old
terminology, though is not intended to be entirely comprehensive and is
mainly just to minimize the amount of confusion for maintainers as we
continue moving towards eliminating the old terminology.
Previously we just silently ignored warnings from validating the backend
config, but now that we have a deprecated argument it's important to print
these out so users can respond to the deprecation warning.
Feedback after 0.9 was that the term "environment" was confusing due to
it colliding with several other concepts, such as OS environment
variables, a non-aligned Terraform Enterprise concept, and differing ideas
of "environment" within various organizations.
This new term "workspace" is intended to ease some of that confusion. This
term is not used anywhere else in Terraform today, and we expect it to not
be used in a manner that would be confusing within user organizations.
This begins a deprecation cycle for the "terraform env" family of commands,
instead moving to an equivalent set of "terraform workspace" commands.
There are some remaining references to the old "environment" concept in
the code, which will be cleaned up in a separate change. This change is
instead focused on text visible in the UI and wording within code comments
for the benefit of human maintainers of the code.
This allows you to run multiple concurrent terraform operations against
different environments from the same source directory.
Fixes #14447.
Also removes some dead code which appears to do the same thing as the function I
modified.
When init was modified in 0.9 to initialize a terraform working
directory, the legacy behavior was kept to copy or fetch module sources.
This left the init command without the ability that the plan and apply
commands have to target a specific directory for the operation.
This commit removes the legacy behavior altogether, and allows init to
target a directory for initialization, bringing it into parity with plan
and apply. If one wants to copy a module to the target or current
directory, that will have to be done manually before calling init. We
can later reintroduce fetching modules with init without breaking this
new behavior, by adding the source as an optional second argument.
The unit tests testing the copying of sources with init have been
removed, as well as some out of date (and commented out) init tests
regarding remote states.
ConstrainVersions was documented as returning nil, but it was instead
returning an empty set. Use the Count() method to check for nil or
empty. Add test to verify failed constraints will show up as missing.
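The essence of the fix is that the emptiness check must tolerate both nil and empty sets; in Go, a length-based method does this for free (a sketch with a stand-in type):

```
package discovery

// metaSet stands in for the real plugin-meta set type.
type metaSet []string

// Count works on a nil receiver too (the length of a nil slice is
// zero), so callers can write:
//
//	if available.ConstrainVersions(required).Count() == 0 { ... }
//
// without caring whether they received nil or an empty set.
func (s metaSet) Count() int { return len(s) }
```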
"environment" is a very overloaded term, so here we prefer to use the
term "working directory" to talk about a local directory where operations
are executed on a given Terraform configuration.
Each provider plugin will take at least a few seconds to download, so
providing feedback about each one should make users feel less like
Terraform has hung.
Ideally we'd show ongoing progress during the download, but that's not
possible without re-working go-getter, so we'll accept this as an interim
solution for now.
This was added with the idea of using it to override the SHA256 hashes
to match those hypothetically stored in a plan, but we already have a
mechanism elsewhere for populating context fields from plan fields, so
this is not actually necessary.
When running "terraform init" with providers that are unconstrained, we
will now produce information to help the user update configuration to
constrain for the particular providers that were chosen, to prevent
inadvertently drifting onto a newer major release that might contain
breaking changes.
A ~> constraint (for example, `~> 1.0`) is used here because pinning to a single specific version
is expected to create dependency hell when using child modules. By using
this constraint mode, which allows minor version upgrades, we avoid the
need for users to constantly adjust version constraints across many
modules, but make major version upgrades still be opt-in.
Any constraint at all in the configuration will prevent the display of
these suggestions, so users are free to use stronger or weaker constraints
if desired, ignoring the recommendation.
Once we've installed the necessary plugins, we'll do one more walk of
the available plugins and record the SHA256 hashes of all of the plugins
we select in the provider lock file.
The file we write here gets read when we're building ContextOpts to
initialize the main terraform context, so any command that works with
the context will then fail if any of the provider binaries change.
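Hashing one plugin binary for the lock file might look like this (a self-contained sketch; the real code wires this into the installer and the lock-file writer):

```
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// pluginDigest computes the SHA256 digest of a plugin executable; the
// lock file maps provider names to digests like this one.
func pluginDigest(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: digest <plugin-path>")
		os.Exit(1)
	}
	digest, err := pluginDigest(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%x\n", digest)
}
```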
By reading our lock file and passing this into the context, we ensure that
only the plugins referenced in the lock file can be used. As of this
commit there is no way to create that lock file, but that will follow soon
as part of "terraform init".
We also provide a way to force a particular set of SHA256s. The main use
for this is to allow us to persist a set of plugins in the plan and
check the same plugins are used during apply, but it may also be useful
for automated tests.
As well as constraining plugins by version number, we also want to be
able to pin plugins to use specific executables so that we can detect
drift in available plugins between commands.
This commit allows such requirements to be specified, but doesn't yet
specify any such requirements, nor validate them.
Previously we encouraged users to import a resource and _then_ write the
configuration block for it. This ordering creates lots of risk, since
for various reasons users can end up subsequently running Terraform
without any configuration in place, which then causes Terraform to want
to destroy the resource that was imported.
Now we invert this and require a minimal configuration block be written
first. This helps ensure that the user ends up with a correlated resource
config and state, protecting against any inconsistency caused by typos.
This addresses #11835.
Previously we deferred validation of the resource address on the import
command until we were in the core guts, which caused the error responses
to be rather unhelpful.
By validating these things early we can give better feedback to the user.
For some reason there was a block of commented-out tests for the refresh
command in the test file for the import command. Here we remove them to
reduce the noise in this file.
Add discovery.GetProviders to fetch plugins from the releases site.
This is an early version, with no tests, that only (probably) fetches
plugins from the default location. The URLs are still subject to change,
and since there are no plugin releases, it doesn't work at all yet.
Instead of providing a path in BackendOpts, provide a loaded
*config.Config. This reduces the number of places where
configuration is loaded.
This new command prints out the tree of modules annotated with their
associated required providers.
The purpose of this command is to help users answer questions such as
"why is this provider required?", "why is Terraform using an older version
of this provider?", and "what combination of modules is creating an
impossible provider version situation?"
For configurations using many modules this sort of question is likely to
come up a lot once we support versioned providers.
As a bonus use-case, this command also shows explicitly when a provider
configuration is being inherited from a parent module, to help users to
understand where the configuration is coming from for each module when
some child modules provide their own provider configurations.
We're going to use config to determine provider dependencies, so we need
to always provide a config when instantiating a context or we'll end up
loading no providers at all.
We previously had a test for running "terraform import -config=''" to
disable the config entirely, but this test is now removed because it makes
no sense. The actual functionality it's testing still remains for now,
but it will be removed in a subsequent commit when we start requiring that
a resource to be imported must already exist in configuration.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously the set of providers was fixed early on in the command package
processing. In order to be version-aware we need to defer this work until
later, so this interface exists so we can hold on to the possibly-many
versions of plugins we have available and then later, once we've finished
determining the provider dependencies, select the appropriate version of
each provider to produce the final set of providers to use.
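The resolver's shape, sketched from the description (the requirement and factory types are simplified stand-ins):

```
package terraform

// Simplified stand-ins for illustration.
type PluginRequirements map[string]string // provider name -> version constraint
type ResourceProviderFactory func() (interface{}, error)

// ResourceProviderResolver resolves the full set of provider
// requirements, determined later from config and state, choosing a
// concrete plugin version for each and producing the factory map.
type ResourceProviderResolver interface {
	ResolveProviders(reqd PluginRequirements) (map[string]ResourceProviderFactory, []error)
}
```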
This commit establishes the use of this new mechanism, and thus populates
the provider factory map with only the providers that result from the
dependency resolution process.
This disables support for internal provider plugins, though the
mechanisms for building and launching these are still here vestigially,
to be cleaned up in a subsequent commit.
This also adds a new awkward quirk to the "terraform import" workflow
where one can't import a resource from a provider that isn't already
mentioned (implicitly or explicitly) in config. We will do some UX work
in subsequent commits to make this behavior better.
This breaks many tests due to the change in interface, but to keep this
particular diff reasonably easy to read the test fixes are split into
a separate commit.
Currently this doesn't matter much, but we're about to start checking the
availability of providers early on and so we need to use the correct name
for the mock set of providers we use in command tests, which includes
only a provider named "test".
Without this change, the "push" tests will begin failing once we start
verifying this, since there's no "aws" provider available in the test
context.
Having this as a method of PluginMeta felt most natural, but unfortunately
that means that discovery must depend on plugin and plugin in turn
depends on core Terraform, thus making the discovery package hard to use
without creating dependency cycles.
To resolve this, we invert the dependency and make the plugin package be
responsible for instantiating clients given a meta, using a top-level
function.
Previously we did plugin discovery in the main package, but as we move
towards versioned plugins we need more information available in order to
resolve plugins, so we move this responsibility into the command package
itself.
For the moment this is just preserving the existing behavior as long as
there are only internal and unversioned plugins present. This is the
final state for provisioners in 0.10, since we don't want to support
versioned provisioners yet. For providers this is just a checkpoint along
the way, since further work is required to apply version constraints from
configuration and support additional plugin search directories.
The automatic plugin discovery behavior is not desirable for tests because
we want to mock the plugins there, so we add a new backdoor for the tests
to use to skip the plugin discovery and just provide their own mock
implementations. Most of this diff is thus noisy rework of the tests to
use this new mechanism.
Before this, invoking this codepath would print
Terraform has successfully migrated from legacy remote state to your
configured remote state.%!(EXTRA string=s3)
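That trailing noise is Go's fmt package reporting an argument with no matching verb, which is the classic cause of output like the above:

```
package main

import "fmt"

func main() {
	// An extra argument with no verb is reported inline:
	fmt.Println(fmt.Sprintf("migrated to your configured remote state.", "s3"))
	// migrated to your configured remote state.%!(EXTRA string=s3)

	// Fixed by giving the backend name a verb:
	fmt.Println(fmt.Sprintf("migrated to your configured %q backend.", "s3"))
}
```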
* core/providersplit: Split OPC Provider to separate repo
As we march towards Terraform 0.10.0, we are going to start building the
terraform providers as separate binaries - this will allow us to
continually release them. Before we go to 0.10.0, we need to be able to
continue building providers in the same manner, therefore, we have
hardcoded the path of the provider in the generate-plugins.go file
The interim solution will require us to vendor the opc provider and any
child dependencies, but when we get to 0.10.0, we will no longer have to
do this - the core will auto download the plugin binary. The plugin
package will have its own dependencies vendored as well.
* core/providersplit: Removing the builtin version of OPC provider
* core/providersplit: Vendoring the OPC plugin
* core/providersplit: update internal plugin list
* core/providersplit: remove unused govendor item
Looking through the operations in node_resource_apply and
node_resource_destroy, there are multiple nil checks for diffApply, so
we need to continue to assume that the diff can be nil through there
until we ensure that it is non-nil in all cases.
So regardless of how we came to get a nil diff in the UiHook PreApply
method, we need to check it.
Here we add a basic provider with a single resource type.
It's copied heavily from the `github` provider and `github_repository`
resource, as there is some overlap in those types/apis.
~~~
resource "gitlab_project" "test1" {
name = "test1"
visibility_level = "public"
}
~~~
We implement in terms of the
[go-gitlab](https://github.com/xanzy/go-gitlab) library, which provides
a wrapper around the [gitlab api](https://docs.gitlab.com/ee/api/).
We have been a little selective in the properties we surface for the
project resource, as not all properties are very instructive.
Notable is the removal of the `public` bool, as the `visibility_level`
takes precedence if both are supplied, which leads to confusing
interactions if they disagree.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
The reconfigure flag will force init to ignore any saved backend state.
This is useful when a user does not want any backend migration to
happen, or if the saved configuration can't be loaded at all for some
reason.
This commit adds the ability to provision files locally.
This is useful for cases where Terraform generates assets
such as TLS certificates or templated documents that need
to be saved locally.
- While output variables can be used to return values to
the user, they are not very suitable for large content or
when many of these are generated, nor is it practical for
operators to manually save them on disk.
- While `local-exec` could be used with an `echo`, this
provider works across platforms and does not require any
convoluted escaping.
A couple commits got rebased together here, and it's easier to enumerate
them in a single commit.
Skip copying of states during migration if they are the same state. This
can happen when trying to reconfigure a backend's options, or if the
state was manually transferred. This can fail unexpectedly with locking
enabled.
Honor the `-input` flag for all confirmations (the new test hit some
more). Also unify where we reference the Meta.forceInitCopy and transfer
the value to the existing backendMigrateOpts.force field.
Add the -lock-timeout flag to the appropriate commands.
Add the -lock flag to `init` and `import` which were missing it.
Set both stateLock and stateLockTimeout in Meta.flagsSet, and remove the
extra references for clarity.
Add fields required to create an appropriate context for all calls to
clistate.Lock.
Add missing checks for Meta.stateLock, where we would attempt to lock,
even if locking should be skipped.
- Have the ui Lock helper use state.LockWithContext.
- Rename the message package to clistate, since that's how it's imported
everywhere.
- Use a more idiomatic placement of the Context in the LockWithContext
args.
Don't erase local state during backend migration if the new and old
paths are the same. Skipping the confirmation and copy are handled in
another patch, but the local state was always erased by default, even
when it was our new state.
It's possible to not change the backend config, but require updating the
stored backend state by moving init options from the config file to the
`-backend-config` flag. If the config is the same, but the hash doesn't
match, update the stored state.
This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config vs that of a backend read
from a state. This will prevent init from reinitializing when using
`-backend-config` options that match the existing state.
The variable validator assumes that any AST node it gets from an
interpolation walk is an indicator of an interpolation. Unfortunately,
back in f223be15 we changed the interpolation walker to emit a LiteralNode
as a way to signal that the result is a literal but not identical to the
input due to escapes.
The existence of this issue suggests a bit of a design smell in that the
interpolation walker interface at first glance appears to skip over all
literals, but it actually emits them in this one situation. In the long
run we should perhaps think about whether the abstraction is right here,
but this is a shallow, tactical change that fixes #13001.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
Environment names can be used in a number of contexts, and should be
properly escaped for safety. Since most state names are stored in path
structures, and often in a URL, use `url.PathEscape` to check for
disallowed characters.
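The check reduces to comparing a name with its escaped form (a sketch; the function name is illustrative):

```
package main

import (
	"fmt"
	"net/url"
)

// validName reports whether an environment name survives path-escaping
// unchanged, i.e. contains no characters disallowed in a path or URL.
func validName(name string) bool {
	return name == url.PathEscape(name)
}

func main() {
	fmt.Println(validName("staging"))  // true
	fmt.Println(validName("my env/x")) // false: space and slash
}
```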
The plan file should contain all data required to execute the apply
operation. Validation requires interpolation, and the `file()`
interpolation function may fail if the module files are not present.
This is the case currently with how TFE executes plans.
The `-force-copy` option will suppress confirmation for copying state
data.
Modify some tests to use the option, making sure to leave coverage of
the Input code path.
Fixes #12871
We were forgetting to remove the legacy remote state from the actual
state value when migrating. This only causes an issue when saving a plan
since the plan contains the state itself and causes an error where both
a backend + legacy state exist.
If saved plans aren't used this causes no noticeable issue.
Due to buggy upgrades already existing in the wild, I also added code to
clear the remote section if it exists in a standard unchanged backend.
Fixes #12806
This should've been part of 2c19aa69d9
This is the same issue, just missed a spot. Tests are hard to cover for
this since we're removing the legacy backends one by one; eventually
they'll all be gone. A good sign is that we don't import backendlegacy at
all anymore in command/
This augments backend-config to also accept key=value pairs.
This should make Terraform easier to script rather than having to
generate a JSON file.
You must still declare the backend type as a minimal block in the
configuration, for example:
```
terraform { backend "consul" {} }
```
This is required because Terraform needs to be able to detect the
_absence_ of that value for unsetting, if that is necessary at some
point.
Plans were properly encoding backend configuration but the apply was
reading it from the wrong field. :( This meant that every apply from a
plan was applying it locally with backends.
This needs to get released ASAP.
When migrating from a multi-state backend to a single-state backend, we
have to ensure that our locally configured environment is changed back
to "default", or we won't be able to access the new backend.
This allows a refresh on a non-existent or empty state file. We changed
this in 0.9.0 to error which seemed reasonable but it turns out this
complicates automation that runs refresh since it now needed to
determine if the state file was empty before running.
It's easier to just revert this into a warning with exit code zero.
The reason this changed is because in 0.8.x and earlier, the output
would be simply empty with exit code zero which seemed odd.
Fixes #12749
If we merge in an extra partial config we need to recompute the hash to
compare with the old value to detect that change.
This hash needs to NOT be stored and just used as a temporary. We want
to keep the original hash in the state so that we don't detect a change
from the config (since the config will always be partial).
* Begin stubbing out the Circonus provider.
* Remove all references to `reverse:secret_key`.
This value is dynamically set by the service and unused by Terraform.
* Update the `circonus_check` resource.
Still a WIP.
* Add docs for the `circonus_check` resource.
Commit miss, this should have been included in the last commit.
* "Fix" serializing check tags
I still need to figure out how I can make them order agnostic w/o using
a TypeSet. I'm worried that's what I'm going to have to do.
* Spike a quick circonus_broker data source.
* Convert tags to a Set so the order does not matter.
* Add a `circonus_account` data source.
* Correctly spell account.
Pointed out by: @postwait
* Add the `circonus_contact_group` resource.
* Push descriptions into their own file in order to reduce the busyness of the schema when reviewing code.
* Rename `circonus_broker` and `broker` to `circonus_collector` and `collector`, respectively.
Change made with consent by Circonus to reduce confusion (@postwait, @maier, and several others).
* Use upstream constants where available.
* Import the latest circonus-gometrics.
* Move to using a Set of collectors vs a list attached to a single attribute.
* Rename "cid" to "id" in the circonus_account data source and elsewhere
where possible.
* Inject a tag automatically. Update gometrics.
* Checkpoint `circonus_metric` resource.
* Enable provider-level auto-tagging. This is disabled by default.
* Rearrange metric. This is an experimental "style" of a provider. We'll see.
That moment. When you think you've gone off the rails on a mad scientist
experiment but like the outcome and think you may be onto something but
haven't proven it to yourself or anyone else yet? That. That exact
feeling of semi-confidence while being alone in the wilderness. Please
let this not be the Terraform provider equivalent of DJB's C style of
coding.
We'll know in another resource or two if this was a horrible mistake or
not.
* Begin moving `resource_circonus_check` over to the new world order/structure:
Much of this is WIP and incomplete, but here is the new supported
structure:
```
variable "used_metric_name" {
  default = "_usage`0`_used"
}

resource "circonus_check" "usage" {
  # collectors = ["${var.collectors}"]
  collector {
    id = "${var.collectors[0]}"
  }

  name  = "${var.check_name}"
  notes = "${var.notes}"

  json {
    url = "https://${var.target}/account/current"

    http_headers = {
      "Accept"                = "application/json"
      "X-Circonus-App-Name"   = "TerraformCheck"
      "X-Circonus-Auth-Token" = "${var.api_token}"
    }
  }

  stream {
    name = "${circonus_metric.used.name}"
    tags = "${circonus_metric.used.tags}"
    type = "${circonus_metric.used.type}"
  }

  tags = {
    source = "circonus"
  }
}

resource "circonus_metric" "used" {
  name = "${var.used_metric_name}"

  tags = {
    source = "circonus"
  }

  type = "numeric"
}
```
* Document the `circonus_metric` resource.
* Updated `circonus_check` docs.
* If a port was present, automatically set it in the Config.
* Alpha sort the check parameters now that they've been renamed.
* Fix a handful of panics as a result of the schema changing.
* Move back to a `TypeSet` for tags. After a stint with `TypeMap`, move
back to `TypeSet`.
A set of strings seems to match the API the best. The `map` type was
convenient because it reduced the amount of boilerplate, but you lose
out on other things. For instance, tags come in the form of
`category:value`, so naturally it seems like you could use a map, but
you can't without severe loss of functionality because assigning two
values to the same category is common. And you can't normalize map
input or suppress the output correctly (this was eventually what broke
the camel's back). I tried an experiment of normalizing the input to be
`category:value` as the key in the map and a value of `""`, but... see
diff suppress. In this case, simple is good.
While here bring some cleanups to _Metric since that was my initial
testing target.
* Rename `providerConfig` to `_ProviderConfig`
* Checkpoint the `json` check type.
* Fix a few residual issues re: missing descriptions.
* Rename `validateRegexp` to `_ValidateRegexp`
* Use tags as real sets, not just a slice of strings.
* Move the DiffSuppressFunc for tags down to the Elem.
* Fix up unit tests to chase the updated, default hasher function being used.
* Remove `Computed` attribute from `TypeSet` objects.
This fixes a pile of issues re: update that I was having.
* Rename functions.
`GetStringOk` -> `GetStringOK`
`GetSetAsListOk` -> `GetSetAsListOK`
`GetIntOk` -> `GetIntOK`
* Various small cleanups and comments rolled into a single commit.
* Add a `postgresql` check type for the `circonus_check` resource.
* Rename various validator functions to be _CapitalCase vs capitalCase.
* Err... finish the validator renames.
* Add `GetFloat64()` support.
* Add `icmp_ping` check type support.
* Catch up to the _API*Attr renames.
Deliberately left out of the previous commit in order to create a clean
example of what is required to add a new check type to the
`circonus_check` resource.
* Clarify when the `target` attribute is required for the `postgresql`
check type.
* Correctly pull the metric ID attribute from the right location.
* Add a circonus_stream_group resource (a.k.a. a Circonus "metric cluster")
* Add support for the [`caql`](https://login.circonus.com/user/docs/caql_reference) check type.
* Add support for the `http` check type.
* `s/SSL/TLS/g`
* Add support for `tcp` check types.
* Enumerate the available metrics that are supported for each check type.
* Add [`cloudwatch`](https://login.circonus.com/user/docs/Data/CheckTypes/CloudWatch) check type support.
* Add a `circonus_trigger` resource (a.k.a Circonus Ruleset).
* Rename a handful of functions to make it clear in the function name the
direction of flow for information moving through the provider.
TL;DR: Replace `parse` and `read` with "foo to bar"-like names.
* Fix the attribute name used in a validator. Absent != After.
* Set the minimum `absent` predicate to 70s per testing.
* Fix the regression tests for circonus_trigger now that absent has a 70s min
* Fix up the `tcp` check to require a `host` attribute.
Fix tests. It's clear I didn't run these before committing/pushing the
`tcp` check last time.
* Fix `circonus_check` for `cloudwatch` checks.
* Rename `parsePerCheckTypeConfig()` to `_CheckConfigToAPI` to be
consistent with other function names.
grep(1)ability of code++
* Slack buttons as an integer are string encoded.
* Fix updates for `circonus_contact`.
* Fix the out parameters for contact groups.
* Move to using `_CastSchemaToTF()` where appropriate.
* Fix circonus_contact_group. Updates work as expected now.
* Use `_StateSet()` in place of `d.Set()` everywhere.
* Make a quick pass over the collector datasource to modernize its style
* Quick pass for items identified by `golint`.
* Fix up collectors
* Fix the `json` check type.
Reconcile possible sources of drift. Update now works as expected.
* Normalize trigger durations to seconds.
* Improve the robustness of the state handling for the `circonus_contact_group` resource.
* I'm torn on this, but sort the contact groups in the notify list.
This does mean that if the first contact group in the list has a higher
lexical sort order the plan won't converge until the offending resource
is tainted and recreated. But there's also some sorting happening
elsewhere, so.... sort and taint for now and this will need to be
revisited in the future.
* Add support for the `httptrap` check type.
* Remove empty units from the state file.
* Metric clusters can return a 404. Detect this accordingly in its
respective Exists handler.
* Add a `circonus_graph` resource.
* Fix a handful of bugs in the graph provider.
* Re-enable the necessary `ConflictsWith` definitions and normalize attribute names.
* Objects that have been deleted via the UI return a 404. Handle in Exists().
* Teach `circonus_graph`'s Stack set to accept nil values.
* Set `ForceNew: true` for a graph's name.
* Chase various API fixes required to make `circonus_graph` work as expected.
* Fix up the handling of sub-1 zoom resolutions for graphs.
* Add the `check_by_collector` out parameter to the `circonus_check` resource.
* Improve validation of line vs area graphs. Fix graph_style.
* Fix up the `logarithmic` graph axis option.
* Resolve various trivial `go vet` issues.
* Add a stream_group out parameter.
* Remove incorrectly applied `Optional` attributes to the `circonus_account` resource.
* Remove various `Optional` attributes from the `circonus_collector` data source.
* Centralize the common need to suppress leading and trailing whitespace into `suppressWhitespace`.
* Sync up with upstream vendor fixes for circonus_graph.
* Update the checksum value for the http check.
* Chase `circonus_graph`'s underlying `line_style` API object change from `string` to `*string`.
* Clean up tests to use a generic terraform regression testing account.
* Add support for the MySQL to the `circonus_check` resource.
* Rename all identifiers that began with a `_` and replace with a corresponding lowercase glyph.
* Remove stale comment in types.
* Move the `ResourceData` `SetId()` calls to be first in the
list so that no resources are lost in the event of a `panic()`.
* Remove `stateSet` from the `circonus_trigger` resource.
* Remove `stateSet` from the `circonus_stream_group` resource.
* Remove `schemaSet` from the `circonus_graph` resource.
* Remove `stateSet` from the `circonus_contact` resource.
* Remove `stateSet` from the `circonus_metric` resource.
* Remove `stateSet` from the `circonus_account` data source.
* Remove `stateSet` from the `circonus_collector` data source.
* Remove stray `stateSet` call from the `circonus_contact` resource.
This is an odd artifact to find... I'm completely unsure as to why it
was there to begin with but am mostly certain it's a bug and needs to be
removed.
* Remove `stateSet` from the `circonus_check` resource.
* Remove the `stateSet` helper function.
All call sites have been converted to return errors vs `panic()`'ing at
runtime.
* Remove a pile of unused functions and type definitions.
* Remove the last of the `attrReader` interface.
* Remove an unused `Sprintf` call.
* Update `circonus-gometrics` and remove unused files.
* Document what `convertToHelperSchema()` does.
Rename `castSchemaToTF` to `convertToHelperSchema`.
Change the function parameter ordering so the `map` of attribute
descriptions comes first: this is much easier to maintain when creating
schema inline.
* Move descriptions into their respective source files.
* Remove all instances of `panic()`.
In the case of software bugs, log an error. Never `panic()` and always
return a value.
* Rename `stream_group` to `metric_cluster`.
* Rename triggers to rule sets
* Rename `stream` to `metric`.
* Chase the `stream` -> `metric` change into the docs.
* Remove some unused test functions.
* Add the now required `color` attribute for graphing a `metric_cluster`.
* Add a missing description to silence a warning.
* Add `id` as a selector for the account data source.
* Futureproof testing: Randomize all asset names to prevent any possible resource conflicts.
This isn't a necessary change for our current build and regression
testing, but *just in case* we have a radical change to our testing
framework in the future, make all resource names fully random.
* Rename various values to match the Circonus docs.
* s/alarm/alert/g
* Ensure ruleset criteria can not be empty.
Fixes: #12494
The Create was changed to use the default and not d.GetOk; the update
wasn't, which caused issues when trying to update to a false value.
```
% make testacc TEST=./builtin/providers/datadog
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/03/07 16:20:54 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/datadog -v -timeout 120m
=== RUN TestDatadogMonitor_import
--- PASS: TestDatadogMonitor_import (4.77s)
=== RUN TestDatadogUser_import
--- PASS: TestDatadogUser_import (6.23s)
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestProvider_impl
--- PASS: TestProvider_impl (0.00s)
=== RUN TestAccDatadogMonitor_Basic
--- PASS: TestAccDatadogMonitor_Basic (3.83s)
=== RUN TestAccDatadogMonitor_BasicNoTreshold
--- PASS: TestAccDatadogMonitor_BasicNoTreshold (4.92s)
=== RUN TestAccDatadogMonitor_Updated
--- PASS: TestAccDatadogMonitor_Updated (5.88s)
=== RUN TestAccDatadogMonitor_TrimWhitespace
--- PASS: TestAccDatadogMonitor_TrimWhitespace (3.23s)
=== RUN TestAccDatadogMonitor_Basic_float_int
--- PASS: TestAccDatadogMonitor_Basic_float_int (5.73s)
=== RUN TestAccDatadogTimeboard_update
--- PASS: TestAccDatadogTimeboard_update (8.86s)
=== RUN TestValidateAggregatorMethod
--- PASS: TestValidateAggregatorMethod (0.00s)
=== RUN TestAccDatadogUser_Updated
--- PASS: TestAccDatadogUser_Updated (6.05s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/datadog 49.506s
```
`env list` was missing the args re-assignment after parsing the flags.
This is only a problem if the variables are automatically populated
as arguments from a tfvars file.
Add Env and SetEnv methods to command.Meta to retrieve the current
environment name inside any command.
Make sure all calls to Backend.State contain an environment name, and
make the package compile against the updated backend package.
Destroying a Terraform state doesn't always produce an empty state, as
outputs and the root module may remain. Use HasResources to warn about
deleting an environment with resources.
In order to operate in parity with other commands, the env command
should take a path argument to locate the configuration.
This however introduces the issue of a possible name conflict between a
path and subcommand, or printing an incorrect current environment for
the bare `env` command. In favor of simplicity this removes the current
env output and only prints usage when no subcommand is provided.
I made this interface way back with the original backend work and I
guess I forgot to hook it up! This is becoming an issue as I'm working
on our 2nd enhanced backend that requires this information and I
realized it was hardcoded before.
This properly uses the CLIInit interface, allowing any backend to gain
access to this data.
Module resources were being sorted lexically by name by the state filter.
If there are 10 or more resources, the order won't match the index
order, and resources will have different indexes in their new location.
Sort the FilterResults by index numerically when the names match (a
sketch of the comparison follows below).
Clean up the module String output for visual inspection by sorting
resource name parts numerically when they are integer values.
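A minimal sketch of that comparison, using a hypothetical lessByIndex helper
(not the actual implementation), which falls back to numeric ordering on the
final name part when everything before it matches:
```
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// lessByIndex compares two dotted resource keys. When everything before
// the final part matches, compare the final parts numerically so that
// "2" sorts before "10"; otherwise fall back to plain lexical ordering.
func lessByIndex(a, b string) bool {
	ai := strings.LastIndex(a, ".")
	bi := strings.LastIndex(b, ".")
	if ai > 0 && bi > 0 && a[:ai] == b[:bi] {
		an, aerr := strconv.Atoi(a[ai+1:])
		bn, berr := strconv.Atoi(b[bi+1:])
		if aerr == nil && berr == nil {
			return an < bn
		}
	}
	return a < b
}

func main() {
	keys := []string{"aws_instance.foo.10", "aws_instance.foo.2", "aws_instance.bar"}
	sort.Slice(keys, func(i, j int) bool { return lessByIndex(keys[i], keys[j]) })
	fmt.Println(keys) // [aws_instance.bar aws_instance.foo.2 aws_instance.foo.10]
}
```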
* providers/spotinst: Add support for Spotinst resources
* providers/spotinst: Fix merge conflict - layouts/docs.erb
* docs/providers/spotinst: Fix the resource description field
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Mark the device_index as a required field
* providers/spotinst: Change the associate_public_ip_address field to TypeBool
* docs/providers/spotinst: Update the description of the adjustment field
* providers/spotinst: Rename IamRole to IamInstanceProfile to make it more compatible with the AWS provider
* docs/providers/spotinst: Rename iam_role to iam_instance_profile
* providers/spotinst: Deprecate the iam_role attribute
* providers/spotinst: Fix a misspelled var (IamRole)
* providers/spotinst: Fix possible null pointer exception related to "iam_instance_profile"
* docs/providers/spotinst: Add "load_balancer_names" missing description
* providers/spotinst: New resource "spotinst_subscription" added
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_aws_group"
* providers/spotinst: Eliminate a possible null pointer exception in "spotinst_subscription"
* providers/spotinst: Mark spotinst_subscription as deleted in destroy
* providers/spotinst: Add support for custom event format in spotinst_subscription
* providers/spotinst: Disable the destroy step of spotinst_subscription
* providers/spotinst: Add support for update subscriptions
* providers/spotinst: Merge fixed conflict - layouts/docs.erb
* providers/spotinst: Vendor dependencies
* providers/spotinst: Return a detailed error message
* provider/spotinst: Update the plugin list
* providers/spotinst: Vendor dependencies using govendor
* providers/spotinst: New resource "spotinst_healthcheck" added
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Comment out unnecessary log.Printf
* providers/spotinst: Fix the acceptance tests
* providers/spotinst: Gofmt fixes
* providers/spotinst: Use multiple functions to expand each block
* providers/spotinst: Allow ondemand_count to be zero
* providers/spotinst: Change security_group_ids from TypeSet to TypeList
* providers/spotinst: Remove unnecessary `ForceNew` fields
* providers/spotinst: Update the Spotinst SDK
* providers/spotinst: Add support for capacity unit
* providers/spotinst: Add support for EBS volume pool
* providers/spotinst: Delete health check
* providers/spotinst: Allow to set multiple availability zones
* providers/spotinst: Gofmt
* providers/spotinst: Omit empty strings from the load_balancer_names field
* providers/spotinst: Update the Spotinst SDK to v1.1.9
* providers/spotinst: Add support for new strategy parameters
* providers/spotinst: Update the Spotinst SDK to v1.2.0
* providers/spotinst: Add support for Kubernetes integration
* providers/spotinst: Fix merge conflict - vendor/vendor.json
* providers/spotinst: Update the Spotinst SDK to v1.2.1
* providers/spotinst: Add support for Application Load Balancers
* providers/spotinst: Do not allow to set ondemand_count to 0
* providers/spotinst: Update the Spotinst SDK to v1.2.2
* providers/spotinst: Add support for scaling policy operators
* providers/spotinst: Add dimensions to spotinst_aws_group tests
* providers/spotinst: Allow both ARN and name for IAM instance profiles
* providers/spotinst: Allow ondemand_count=0
* providers/spotinst: Split out the set funcs into flatten style funcs
* providers/spotinst: Update the Spotinst SDK to v1.2.3
* providers/spotinst: Add support for EBS optimized flag
* providers/spotinst: Update the Spotinst SDK to v2.0.0
* providers/spotinst: Use stringutil.Stringify for debugging
* providers/spotinst: Update the Spotinst SDK to v2.0.1
* providers/spotinst: Key pair is now optional
* providers/spotinst: Make sure we do not nullify signals on strategy update
* providers/spotinst: Hash both Strategy and EBS Block Device
* providers/spotinst: Hash AWS load balancer
* providers/spotinst: Update the Spotinst SDK to v2.0.2
* providers/spotinst: Verify namespace exists before appending policy
* providers/spotinst: Image ID will be in a separate block from now on, so as to allow ignoring changes only on the image ID. This change is backwards compatible.
* providers/spotinst: User data is decoded when returned from the Spotinst API, so that TF compares the two states properly, and does not update without cause.
Fixes #12154
The "-backup" flag for the "state *" CLI commands had some REALLY
bizarre behavior: it would change the _destination_ state and actually
not create any additional backup at all (the original state was
unchanged and the normal timestamped backups were still written).
Really weird.
This PR makes the -backup flag work as you'd expect with one caveat:
we'll _still_ create the timestamped backup file. The timestamped backup
file helps make sure that you always get a backup history when using
these commands. We don't want to make it easy for you to overwrite a
state with the `-backup` flag.
We need to initialize the backend even if the config has no backend set.
This allows `init` to work when unsetting a previously set backend.
Without this, there was no way to unset a backend.
Give LockInfo a Marshal method for easy serialization, and a String
method for more readable output.
Have the state.Locker implementations use LockError when possible to
return LockInfo and an error.
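A rough sketch of the shapes described above, with assumed field names (this
is illustrative, not the actual structs):
```
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// LockInfo carries metadata about who holds a state lock. The field
// names here are assumed for illustration.
type LockInfo struct {
	ID        string
	Operation string
	Who       string
	Created   time.Time
}

// Marshal serializes the lock metadata for storage alongside the state.
func (l *LockInfo) Marshal() ([]byte, error) {
	return json.MarshalIndent(l, "", "  ")
}

// String renders the lock metadata for readable error output.
func (l *LockInfo) String() string {
	return fmt.Sprintf("Lock Info:\n  ID:        %s\n  Operation: %s\n  Who:       %s\n  Created:   %s",
		l.ID, l.Operation, l.Who, l.Created)
}

// LockError pairs an underlying error with the lock info, so callers
// can report who holds the lock.
type LockError struct {
	Info *LockInfo
	Err  error
}

func (e *LockError) Error() string {
	return fmt.Sprintf("%s\n%s", e.Err, e.Info)
}

func main() {
	info := &LockInfo{ID: "abc123", Operation: "apply", Who: "user@host", Created: time.Now()}
	fmt.Println(info)
}
```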
During backend initialization, especially during a migration, there is a
chance that an existing state could be overwritten.
Attempt to get a lock when writing the new state. It would be nice to
always have a lock when reading the states, but the recursive structure
of the Meta.Backend config functions makes that quite complex.
Remove the lock command for now to avoid confusion about the behavior of
locks. Rename lock to force-unlock to make it more apparent what it does.
Add a success message, and chose red because it can be a dangerous
operation.
Add confirmation akin to `destroy`, and a `-force` option for
automation and testing.
The new test pattern is to chdir into a temp location for the test, but
this prevents us from locating the testdata directory in the source. Add
a source path to testLockState so we can find the statelocker.go source.
Previously when running a plan with no existing state, the plan would be
written out and then backed up on the next WriteState by another
BackupState instance. Since we now maintain a single State instance
throughout an operation, the backup happens before any state exists, so
no backup file is created.
This is OK, as the backup state the tests were checking for is from the
plan file, which already exists separate from the state.
Terraform can't tell the difference between an empty output and an
undefined output. This is often confusing for folks using interpolation.
As much as it would be great to fix upstream, changing this error
message to be a bit more helpful is a good stop-gap to avoid
frustration.
The old behavior in this situation was to simply delete the file. Since
we now have a lock on this file we don't want to close or delete it, so
instead truncate the file at offset 0.
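A minimal sketch of that truncation, with a hypothetical clearStateFile
helper; the key point is that the same file handle stays open:
```
package main

import (
	"log"
	"os"
)

// clearStateFile empties a locked state file in place. We avoid closing
// or deleting the file because the lock is held on this same handle.
func clearStateFile(f *os.File) error {
	if err := f.Truncate(0); err != nil { // drop the existing contents
		return err
	}
	_, err := f.Seek(0, 0) // rewind so the next write starts at offset 0
	return err
}

func main() {
	f, err := os.OpenFile("terraform.tfstate", os.O_RDWR|os.O_CREATE, 0644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := clearStateFile(f); err != nil {
		log.Fatal(err)
	}
}
```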
Fix a number of related tests.
Having the state files always created for locking breaks a lot of tests.
Most can be fixed by simply checking for state within a file, but a few
still might be writing state when they shouldn't.
* vendor: update gopkg.in/ns1/ns1-go.v2
* provider/ns1: Port the ns1 provider to Terraform core
* docs/ns1: Document the ns1 provider
* ns1: rename remaining nsone -> ns1 (#10805)
* Ns1 provider (#11300)
* provider/ns1: Flesh out support for meta structs.
Following the structure outlined by @pashap.
Using reflection to reduce copy/paste.
Putting metas inside single-item lists. This is clunky, but I couldn't
figure out how else to have a nested struct. Maybe the Terraform people
know a better way?
Inside the meta struct, all fields are always written to the state; I
can't figure out how to omit fields that aren't used. This is not just
verbose, it actually causes issues because you can't have both "up" and
"up_feed" set.
Also some minor other changes:
- Add "terraform" import support to records and zones.
- Create helper class StringEnum.
* provider/ns1: Make fmt
* provider/ns1: Remove stubbed out RecordRead (used for testing metadata change).
* provider/ns1: Need to get interface that m contains from Ptr Value with Elem()
* provider/ns1: Use empty string to indicate no feed given.
* provider/ns1: Remove old record.regions fields.
* provider/ns1: Removes redundant testAccCheckRecordState
* provider/ns1: Moves account permissions logic to permissions.go
* provider/ns1: Adds tests for team resource.
* provider/ns1: Move remaining permissions logic to permissions.go
* ns1/provider: Adds datasource.config
* provider/ns1: Small clean up of datafeed resource tests
* provider/ns1: removes testAccCheckZoneState in favor of explicit name check
* provider/ns1: More renaming of nsone -> ns1
* provider/ns1: Comment out metadata for the moment.
* Ns1 provider (#11347)
* Fix the removal of empty containers from a flatmap
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration. There was also an off-by-one error in
the prefix check which could match the incorrect keys.
* provider/ns1: Adds ns1 go client through govendor.
* provider/ns1: Removes unused debug line
* docs/ns1: Adds docs around apikey/datasource/datafeed/team/user/record.
* provider/ns1: Gets go vet green
* Importing the OpsGenie SDK
* Adding the goreq dependency
* Initial commit of the OpsGenie / User provider
* Refactoring to return a single client
* Adding an import test / fixing a copy/paste error
* Adding support for OpsGenie docs
* Scaffolding the user documentation for OpsGenie
* Adding a TODO
* Adding the User data source
* Documentation for OpsGenie
* Adding OpsGenie to the internal plugin list
* Adding support for Teams
* Documentation for OpsGenie Team's
* Validation for Teams
* Removing Description for now
* Optional fields for a User: Locale/Timezone
* Removing an implemented TODO
* Running makefmt
* Downloading about half the internet
Someone witty might simply sign this commit with "npm install"
* Adding validation to the user object
* Fixing the docs
* Adding a test creating multiple users
* Prompting for the API Key if it's not specified
* Added a test for multiple users / requested changes
* Fixing the linting
* Initial checkin for PR request
* Added an argument to provider to allow control over whether or not TLS Certs will skip verification. Controllable via provider or env variable being set
* Initial check-in to use refactored module
* Checkin of very MVP for creating/deleting host test which works and validates basic host creation and deletion
* Check in with support for creating hosts with variables working
* Checking in work to date
* Remove code that causes travis CI to fail while I debug
* Adjust create to accept multivale
* Back on track. Working basic tests. go-icinga2-api needs more tests too
* Squashing
* Check in refactored hostgroup support
* Check in refactored check_command, hosts, and hostgroup with a few tests
* Checking in service code
* Add in dependency for icinga2 provider
* Add documentation. Refactor, fix and extend based on feedback from Hashicorp
* Added checking and validation around invalid URL and unavailable server
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support. (A sketch of such a child program follows
this list.)
* website: documentation for the "external" provider
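To make the gateway protocol concrete, here is a minimal sketch of a
compliant child program, assuming (per the data source's design) that
Terraform sends the query as a JSON object on stdin and expects a flat JSON
object of string values on stdout; the "greeting"/"name" keys are purely
illustrative:
```
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Terraform passes the data source's "query" as a JSON object on stdin.
	var query map[string]string
	if err := json.NewDecoder(os.Stdin).Decode(&query); err != nil {
		fmt.Fprintln(os.Stderr, "failed to decode query:", err)
		os.Exit(1)
	}

	// Compute a result; the protocol expects a flat object of string values.
	result := map[string]string{
		"greeting": "Hello, " + query["name"],
	}

	// Write the result object to stdout for Terraform to read back.
	if err := json.NewEncoder(os.Stdout).Encode(result); err != nil {
		os.Exit(1)
	}
}
```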
Fixes #10412
The context wasn't properly adding variable values to the Interpolator
instance which made it so that the `console` command couldn't access
variables set via tfvars and the CLI.
This also adds better test coverage in command itself for this.
Fixes #10337
The `reset_bold` escape code (21) causes the text on Windows command
prompts to just become invisible. `reset` does the same job for us in
this scenario so do that.
Add `terraform debug json2dot` to convert debug log graphs to dot
format. This is not meant to be in place of more advanced debug
visualization, but may continue to be a useful way to work with the
debug output.
The dot format generation was done with a mix of code from the terraform
package and the dot package. Unify the dot generation code, and move it
into the dag package.
Use an intermediate structure to allow a dag.Graph to marshal itself
directly. This structure will be able to marshal directly to JSON, or
be translated to dot format. This way we can record more information
about the graph in the debug logs, and provide a way to translate those
logged structures to dot, which is convenient for viewing the graphs.
This turns the new graphs on by default and puts the old graphs behind a
flag `-Xlegacy-graph`. This effectively inverts the current 0.7.x
behavior with the new graphs.
We've incubated most of these for a few weeks now. We've found issues
and we've fixed them, and we've been using these graphs internally for
a while without any major issue. It's time to default them on and make
them part of a beta.
Fixes #7774
This modifies the `import` command to load configuration files from the
pwd. This also augments the configuration loading section for the CLI to
have a new option (default false, same as old behavior) to
allow directories with no Terraform configurations.
For import, we allow directories with no Terraform configurations so
this option is set to true.
Implement debugInfo and the DebugGraph
DebugInfo will be a global variable through which graph debug
information can be written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and noop with a nil receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and sync'ed individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.
The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer with Printf and Write methods to add to this buffer.
The buffer will be written to an accompanying file in the debug archive
along with the graph.
This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.
The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
Fixes #7975
This changes the InputMode for the CLI to always be:
InputModeProvider | InputModeVar | InputModeVarUnset
Which means:
* Ask for provider variables
* Ask for user variables _that are not already set_
The change is the latter point. Before, we'd only ask for variables if
zero were given. This forces the user to either have no variables set
via the CLI, env vars, or tfvars, or to have ALL variables set, with
nothing in between. As reported in #7975, this isn't expected behavior.
The new change makes it so that unset variables are always asked for.
Users can retain the previous behavior by setting `-input=false`. This
would ensure that variables set by external sources cover all cases.
To reduce the risk of secret exposure via Terraform state and log output,
we default to creating a relatively-short-lived token (20 minutes) such
that Vault can, where possible, automatically revoke any retrieved
secrets shortly after Terraform has finished running.
This has some implications for usage of this provider that will be spelled
out in more detail in the docs that will be added in a later commit, but
the most significant implication is that a plan created by "terraform plan"
that includes secrets leased from Vault must be *applied* before the
lease period expires to ensure that the issued secrets remain valid.
No resources yet. They will follow in subsequent commits.
Fixes #5409
I didn't expect this to be such a rabbit hole!
Based on git history, it appears that for "historical reasons"(tm),
setting up the various `state.State` structures for a plan used
_completely different logic_ than a normal `terraform apply`. This meant
that it was skipping things like disabling backups with `-backup="-"`.
This PR unifies loading from a plan to the normal state setup mechanism.
A few tests that were failing prior to this PR were added, no existing
tests were changed.
When passing a bool type to a variable such as `-var foo=true`, the CLI
would parse this as a `bool` type which Terraform core cannot handle.
It would then error with an invalid type error.
This changes the handling to convert the bool to its literally string
value given on the command-line.
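A minimal sketch of that conversion, using a hypothetical normalize helper:
```
package main

import (
	"fmt"
	"strconv"
)

// normalize converts a value that was parsed as a bool back into the
// literal string the user typed, since core only handles strings here.
func normalize(v interface{}) interface{} {
	if b, ok := v.(bool); ok {
		return strconv.FormatBool(b)
	}
	return v
}

func main() {
	fmt.Println(normalize(true))    // the string "true"
	fmt.Println(normalize("hello")) // unchanged
}
```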
This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing
our experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
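A rough sketch of the pattern, with illustrative names rather than the real
package API:
```
package experiment

import (
	"flag"
	"os"
	"strings"
)

// ID is an opaque handle for a single experiment.
type ID struct {
	name    string
	enabled bool
}

var all []*ID

// newID registers an experiment. Registration happens via package-level
// variable initialization, before flags are parsed.
func newID(name string) *ID {
	id := &ID{name: name}
	all = append(all, id)
	return id
}

// The experiments themselves: each one is a global that call sites must
// reference directly, so deleting an experiment produces compile errors
// at every remaining reference.
var (
	XShadow      = newID("shadow")
	XLegacyGraph = newID("legacy-graph")
)

// Enabled reports whether an experiment is on, honoring either the
// bound CLI flag or a TF_X_<NAME>-style environment variable.
func Enabled(id *ID) bool {
	env := "TF_X_" + strings.ToUpper(strings.Replace(id.name, "-", "_", -1))
	if os.Getenv(env) != "" {
		return true
	}
	return id.enabled
}

// Flag binds every registered experiment onto a FlagSet as -X<name>.
func Flag(fs *flag.FlagSet) {
	for _, id := range all {
		fs.BoolVar(&id.enabled, "X"+id.name, false, "experiment: "+id.name)
	}
}
```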
Since it is still very much possible for this to cause problems, this
can be used to disable the shadow graph. We'll purposely not document
this since the goal is to remove this flag as we become more confident
with it.
Wait for our expected number of goroutines to be staged and ready, then
allow Run() to complete. Use a high-water-mark counter to ensure we
never exceeded the max expected concurrent goroutines at any point
during the run.
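A minimal sketch of the high-water-mark technique, using an assumed
concurrency limit enforced by a semaphore channel:
```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const max = 10
	var current, high int64
	sem := make(chan struct{}, max) // the concurrency limit under test
	var wg sync.WaitGroup

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}
			n := atomic.AddInt64(&current, 1)
			// Record a new high-water mark if we just set one.
			for {
				h := atomic.LoadInt64(&high)
				if n <= h || atomic.CompareAndSwapInt64(&high, h, n) {
					break
				}
			}
			atomic.AddInt64(&current, -1)
			<-sem
		}()
	}
	wg.Wait()
	fmt.Printf("high-water mark: %d (limit %d)\n", high, max)
}
```
The CompareAndSwap loop is what makes the maximum safe to record from many
goroutines at once.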
When we parse a map from HCL, it's decoded into a list of maps because
HCL allows declaring a key multiple times to implicitly add it to a
list. Since there's no terraform variable configuration that supports
this structure, we can always flatten a []map[string]interface{} value
to a map[string]interface{} and remove any type errors trying to apply
that value.
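A minimal sketch of that flattening step, with a hypothetical
flattenMapSlice helper:
```
package main

import "fmt"

// flattenMapSlice merges the list-of-maps form that HCL produces back
// into a single map; later occurrences of a key win.
func flattenMapSlice(v interface{}) interface{} {
	if s, ok := v.([]map[string]interface{}); ok {
		merged := map[string]interface{}{}
		for _, m := range s {
			for k, val := range m {
				merged[k] = val
			}
		}
		return merged
	}
	return v
}

func main() {
	decoded := []map[string]interface{}{
		{"region": "us-east-1"},
		{"zone": "a"},
	}
	fmt.Println(flattenMapSlice(decoded)) // map[region:us-east-1 zone:a]
}
```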
Don't try to parse a variable as HCL if the value can be parsed as a
single number. HCL will always attempt to convert the value to a number,
even if we later find the configured variable's type to be a string.
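A minimal sketch of that guard, with a hypothetical looksLikeNumber helper:
```
package main

import (
	"fmt"
	"strconv"
)

// looksLikeNumber reports whether the raw value is a plain number, in
// which case we skip HCL parsing so a string-typed variable keeps its
// literal form.
func looksLikeNumber(raw string) bool {
	if _, err := strconv.ParseInt(raw, 10, 64); err == nil {
		return true
	}
	_, err := strconv.ParseFloat(raw, 64)
	return err == nil
}

func main() {
	fmt.Println(looksLikeNumber("12345"))  // true: don't hand this to HCL
	fmt.Println(looksLikeNumber("[1, 2]")) // false: parse as HCL
}
```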
This commit adds the `state rm` command for removing an address from
state. It is the result of a rebase from pull-request #5953 which was
lost at some point during the Terraform 0.7 feature branch merges.
Fix checksum issue with remote state
If we read a state file with "null" objects in a module and they become
initialized to an empty map the state file may be written out with empty
objects rather than "null", changing the checksum. If we can detect
this, increment the serial number to prevent a conflict in atlas.
Our fakeAtlas test server now needs to decode the state directly rather
than using the ReadState function, so as to be able to read the state
unaltered.
The terraform.State data structures have initialization spread out
throughout the package. More thoroughly initialize State during
ReadState, and add a call to init() during WriteState as another
normalization safeguard.
Expose State.init through an exported Init() method, so that a new State
can be completely realized outside of the terraform package.
Additionally, the internal init now completely walks all internal state
structures ensuring that all maps and slices are initialized. While it
was mentioned before that the `init()` methods are problematic with too
many call sites, expanding this out better exposes the entry points that
will need to be refactored later for improved concurrency handling.
The State structures had a mix of `omitempty` fields. Remove omitempty
for all maps and slices as part of this normalization process. Make
Lineage mandatory, which is now explicitly set in some tests.
Some Atlas usage patterns expect to be able to override a variable set
in Atlas, even if it's not seen in the local context. This allows
overwriting a variable that is returned from atlas, and sends it back.
Also use a unique sentinel value in the context where we have variables
from Atlas. This way Atlas variables aren't combined with the local
variables, and we don't do something like inadvertently change the
type, double encode/escape, etc.
If we have a number value in our config variables, format it as a
string, and send it with the HCL=true flag just in case.
Also use %g for float encoding, as the output is generally a little
friendlier.
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
The behaviour whereby outputs for a particular nested module can be
output was broken by the changes for lists and maps. This commit
restores the previous behaviour by passing the module path into the
outputsAsString function.
We also add a new test of this since the code path for individual output
vs all outputs for a module has diverged.
The handling of remote variables was completely disabled for push.
We still need to fetch variables from atlas for push, because if the
variable is only set remotely the Input walk will still prompt the user
for a value. We add the missing remote variables to the context
to disable input.
We now only handle remote variables as atlas.TFVar and explicitly pass
around that type rather than an `interface{}`.
Shorten the test fixture slightly to make the output a little more
readable on failures.
The strings we have in the variables may contain escaped double-quotes,
which have already been parsed and had the `\`s removed. We need to
re-escape these, but only if we are in the outer string and not inside an
interpolation.
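A rough sketch of that re-escaping, tracking interpolation depth with a
hypothetical reescapeQuotes helper:
```
package main

import (
	"fmt"
	"strings"
)

// reescapeQuotes puts the backslash back on double-quotes in the outer
// string, while leaving quotes inside ${...} interpolations untouched.
func reescapeQuotes(s string) string {
	var b strings.Builder
	depth := 0 // current interpolation nesting level
	for i := 0; i < len(s); i++ {
		switch {
		case strings.HasPrefix(s[i:], "${"):
			depth++
			b.WriteString("${")
			i++ // consume the '{' as well
		case s[i] == '}' && depth > 0:
			depth--
			b.WriteByte('}')
		case s[i] == '"' && depth == 0:
			b.WriteString(`\"`) // outer string: re-escape
		default:
			b.WriteByte(s[i])
		}
	}
	return b.String()
}

func main() {
	fmt.Println(reescapeQuotes(`say "hi" to ${lookup(var.m, "key")}`))
	// say \"hi\" to ${lookup(var.m, "key")}
}
```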
Add tf_vars to the data structures sent in terraform push.
This takes any value of type []interface{} or map[string]interface{} and
marshals it as a string representation of the equivalent HCL. This
prevents ambiguity in atlas between a string that looks like a json
structure, and an actual json structure.
For the time being we will need a way to serialize data as HCL, so the
command package has an internal encodeHCL function to do so. We can
remove this if we get a complete package for marshaling HCL.
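A much-simplified sketch of what such an encoder could do (this is
illustrative, not the real encodeHCL):
```
package main

import (
	"fmt"
	"sort"
	"strings"
)

// encodeHCL renders lists and maps as HCL-ish text so the value is
// unambiguous when shipped to Atlas as a string.
func encodeHCL(v interface{}) string {
	switch t := v.(type) {
	case []interface{}:
		parts := make([]string, len(t))
		for i, e := range t {
			parts[i] = encodeHCL(e)
		}
		return "[" + strings.Join(parts, ", ") + "]"
	case map[string]interface{}:
		keys := make([]string, 0, len(t))
		for k := range t {
			keys = append(keys, k)
		}
		sort.Strings(keys) // deterministic output
		parts := make([]string, len(keys))
		for i, k := range keys {
			parts[i] = fmt.Sprintf("%q = %s", k, encodeHCL(t[k]))
		}
		return "{" + strings.Join(parts, ", ") + "}"
	default:
		return fmt.Sprintf("%q", fmt.Sprint(v))
	}
}

func main() {
	fmt.Println(encodeHCL([]interface{}{"a", "b"}))          // ["a", "b"]
	fmt.Println(encodeHCL(map[string]interface{}{"k": "v"})) // {"k" = "v"}
}
```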
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.
Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:
TF_VAR_test='["Hello", "World"]'
will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying
TF_VAR_test_map='{"Hello" = "World", "Foo" = "bar"}'
will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.
The same logic is applied to `-var` flags, and the file parsed by
`-var-file` ("autoVariables").
Note that care must be taken to not run into shell expansion for `-var`
flags and environment variables.
We also merge map keys where appropriate. The override syntax has
changed (to be noted in CHANGELOG as a breaking change), so several
tests needed their syntax updating from the old `amis.us-east-1 =
"newValue"` style to `amis = "{ "us-east-1" = "newValue"}"` style as
defined in TF-002.
In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
This is the first step in allowing overrides of map and list variables.
We convert Context.variables to map[string]interface{} from
map[string]string and fix up all the call sites.
* Add scaleway provider
this PR allows the entire scaleway stack to be managed with terraform
example usage looks like this:
```
provider "scaleway" {
api_key = "snap"
organization = "snip"
}
resource "scaleway_ip" "base" {
server = "${scaleway_server.base.id}"
}
resource "scaleway_server" "base" {
name = "test"
# ubuntu 14.04
image = "aecaed73-51a5-4439-a127-6d8229847145"
type = "C2S"
}
resource "scaleway_volume" "test" {
name = "test"
size_in_gb = 20
type = "l_ssd"
}
resource "scaleway_volume_attachment" "test" {
server = "${scaleway_server.base.id}"
volume = "${scaleway_volume.test.id}"
}
resource "scaleway_security_group" "base" {
name = "public"
description = "public gateway"
}
resource "scaleway_security_group_rule" "http-ingress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "inbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
resource "scaleway_security_group_rule" "http-egress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "outbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official scaleway api just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
following @stack72's suggestion, rename the provider's api_key for more clarity
* Make tests less chatty by using custom logger
This commit removes the ability to index into complex output types using
`terraform output a_list 1` (for example), and adds a `-json` flag to
the `terraform output` command, such that the output can be piped
through a post-processor such as jq or json. This removes the need to
allow arbitrary traversal of nested structures.
It also adds tests of human readable ("normal") output with nested lists
and maps, and of the new JSON output.
When working from an existing plan, we weren't setting the PathOut field
for a LocalState. This required adding an outPath argument to the
StateFromPlan function to avoid having to introspect the returned
state.State interface to find the appropriate field.
To test we run a plan first and provide the new plan to apply with
`-state-out` set.
Data sources are given a special plan output when the diff would show it
creating a new resource, to indicate that there is only a logical data
resource being read. Data sources nested in modules weren't formatted in
the plan output in the same way.
This checks for the "data." part of the path in nested modules, and adds
acceptance tests for the plan output.
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
This commit fixes a crash in `terraform show` where there is no primary
resource, but there is a tainted resource, because of the changes made
to tainted resource handling in 0.7.
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
This means it's shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying this resource is tainted so
just forget about it and make sure it gets deleted in the background,
to saying I want that resource to be recreated (taking into account the
existing resource and its place in the graph).
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
This provider will have logical resources that allow Terraform to "manage"
randomness as a resource, producing random numbers on create and then
retaining the outcome in the state so that it will remain consistent
until something explicitly triggers generating new values.
Managing randomness in this way allows configurations to do things like
random distributions and ids without causing "perma-diffs".
Internally a data source read is represented as a creation diff for the
resource, but in the UI we'll show it as a distinct icon and color so that
the user can more easily understand that these operations won't affect
any real infrastructure.
Unfortunately by the time we get to formatting the plan in the UI we
only have the resource names to work with, and can't get at the original
resource mode. Thus we're forced to infer the resource mode by exploiting
knowledge of the naming scheme.
New resources logically don't have "old values" for their attributes, so
showing them as updates from the empty string is misleading and confusing.
Instead, we'll skip showing the old value in a creation diff.
Data resources don't have ids when they refresh, so we'll skip showing the
"(ID: ...)" indicator for these. Showing it with no id makes it look
like something is broken.
Since the data resource lifecycle contains no steps to deal with tainted
instances, we must make sure that they never get created.
Doing this out in the command layer is not the best, but this is currently
the only layer that has enough information to make this decision and so
this simple solution was preferred over a more disruptive refactoring,
under the assumption that this taint functionality eventually gets
reworked in terms of StateFilter anyway.
- Fix sensitive outputs for lists and maps
- Fix test prelude which was missed during conflict resolution
- Fix `terraform output` to match old behaviour and not have outputs
header and colouring
- Bump timeout on TestAtlasClient_UnresolvableConflict
This introduces the terraform state list command to list the resources
within a state. This is the first of many state management commands to
come into 0.7.
This is the first command of many to come that is considered a
"plumbing" command within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a bunch of groundwork to support
plumbing commands.
The main changes:
- Main command output is changed to split "common" and "uncommon"
commands.
- mitchellh/cli is updated to support nested subcommands, since
terraform state list is a nested subcommand.
- terraform.StateFilter is introduced as a way in core to filter/search
the state files. This is very basic currently but I expect to make it
more advanced as time goes on.
- terraform state list command is introduced to list resources in a
state. This can take a series of arguments to filter this down.
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could access an indexed map variable using the
syntax "${var.variablename.key}".
This is no longer possible - instead HIL native syntax -
"${var.variablename["key"]}" must be used. This was previously
documented, (though not heavily used) so it must be noted as a backward
compatibility issue for Terraform 0.7.
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version is increased
from 1 to 2 (though the "public-facing" state version is actually one
higher, as the first state file format was binary).
Tests are added to ensure that V2 (1) state is upgraded to V3 (2) state,
though no separate read path is required since the V2 JSON will
unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. This is tricky, but it should be tricky, to
reflect the horrible disaster that can happen by enabling it.
We didn't have to bump the state format version since the absence of
the field means it was written by version "0.0.0", which will always be
older. In effect though, this change will always apply to version 2 of
the state since it appears in 0.7, which bumped the version for other
purposes.
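A rough sketch of that guard, assuming the vendored hashicorp/go-version
package for the comparison:
```
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// checkStateVersion errors if the state was written by a newer
// Terraform. An absent terraform_version field is treated as "0.0.0",
// which is always older than any real release.
func checkStateVersion(stateTFVersion, running string) error {
	if stateTFVersion == "" {
		stateTFVersion = "0.0.0"
	}
	sv, err := version.NewVersion(stateTFVersion)
	if err != nil {
		return err
	}
	rv, err := version.NewVersion(running)
	if err != nil {
		return err
	}
	if sv.GreaterThan(rv) {
		return fmt.Errorf(
			"state was written by Terraform %s; please use that version or newer", sv)
	}
	return nil
}

func main() {
	fmt.Println(checkStateVersion("0.8.0", "0.7.0")) // errors: state is from the future
}
```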
* core: Add support for marking outputs as sensitive
This commit allows an output to be marked "sensitive", in which case the
value is redacted in the post-refresh and post-apply list of outputs.
For example, the configuration:
```
variable "input" {
default = "Hello world"
}
output "notsensitive" {
value = "${var.input}"
}
output "sensitive" {
sensitive = true
value = "${var.input}"
}
```
Would result in the output:
```
terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
notsensitive = Hello world
sensitive = <sensitive>
```
The `terraform output` command continues to display the value as before.
Limitations: Note that sensitivity is not tracked internally, so if the
output is interpolated in another module into a resource, the value will
be displayed. The value is still present in the state.
* command/fmt: Document -diff doesn't disable -write
As noted in hashicorp/terraform#6343, this description misleadingly
suggested that the `-diff` option disables the `-write` option.
This isn't the case and because of the default options (described in
c753390) the behaviour of `terraform fmt -diff` is actually the same as
`terraform fmt -write -list -diff`.
Replace the "instead of rewriting" description to clarify that.
Documentation in hcl/fmtcmd is corrected in hashicorp/hcl#117 but it's not
really necessary to bump the dependency version.
* command/fmt: Show flag defaults in help text
These were documented on the website but not in the `-help` text. This
should help to clarify that you need to pass `-list=false -write=false
-diff` if you only want to see diffs.
Accordingly I've replaced the word "disabled" with "always false" in the
STDIN special cases so that it matches the terminology used in the defaults
and better indicates that it is overridden.
NB: The 3x duplicated defaults and documentation make me feel uneasy
once again. I'm not sure how to solve that, though.
These options don't make sense when passing STDIN. `-write` will raise an
error because there is no file to write to. `-list` will always say
`<standard input>`. So disable whenever using STDIN, making the command
much simpler:
cat main.tf | terraform fmt -
So that you can do automatic formatting from an editor. You probably want to
disable the `-write` and `-list` options so that you just get the
re-formatted content, e.g.
cat main.tf | terraform fmt -write=false -list=false -
I've added a non-exported field called `input` so that we can override this
for the tests. If not specified, like in `commands.go`, then it will default
to `os.Stdin` which works on the command line.
The most common usage will be enabling the `-write` and `-list`
options so that files are updated in place and a list of any modified files
is printed. This matches the default behaviour of `go fmt` (not `gofmt`). So
enable these options by default.
This does mean that you will have to explicitly disable these if you want to
generate valid patches, e.g. `terraform fmt -diff -write=false -list=false`
This uses the `fmtcmd` package which has recently been merged into HCL. Per
the usage text, this rewrites Terraform config files to their canonical
formatting and style.
Some notes about the implementation for this initial commit:
- all of the fmtcmd options are exposed as CLI flags
- it operates on all files that have a `.tf` suffix
- it currently only operates on the working directory and doesn't accept a
directory argument, but I'll extend this in subsequent commits
- output is proxied through `cli.UiWriter` so that we write in the same way
as other commands and we can capture the output during tests
- the test uses a very simple fixture just to ensure that it is working
correctly end-to-end; the fmtcmd package has more exhaustive tests
- we have to write the fixture to a file in a temporary directory because it
will be modified and for this reason it was easier to define the fixture
contents as a raw string
This means that terraform commands like `plan`, `apply`, `show`, and
`graph` will expand all modules by default.
While modules-as-black-boxes is still very true in the conceptual design
of modules, feedback on this behavior has consistently suggested that
users would prefer to see more verbose output by default.
The `-module-depth` flag and env var are retained to allow output to be
optionally limited / summarized by these commands.
When destroying infrastructure with `--target`, print out which
infrastructure will be destroyed instead of saying `Terraform will
delete all your managed infrastructure`.
```
terraform destroy --target aws_instance.test2 --target aws_instance.test1
Do you really want to destroy?
Terraform will delete the following infrastructure:
aws_instance.test2
aws_instance.test1
There is no undo. Only 'yes' will be accepted to confirm
```
Omitting `--target` arguments will use the default input description.
```
$ terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.
```
Previously the plan summary output would consider -/+ diffs as changes
even though they actually destroy and create instances. This was
misleading and inconsistent with the accounting that gets done for the
similar summary written out after "apply".
Instead we now count the -/+ diffs as both adds and removes, which should
mean that the counts output in the plan summary should match those in
the apply summary, as long as no errors occur during apply.
This fixes #3163.
Sometimes in all the output from ```terraform plan```, it is difficult
to see the ```(forces new resource)``` message.
This patch adds a little bit of color.
By prefixing them with `cmd /c` it will work with both `winrm` and
`ssh` connection types.
This PR also reverts some bad stringer changes made in PR #2673
Other than the fact that "The the" doesn't really make any sense anywhere
that it's used in Terraform, they're a post-punk band from the UK.
Fixes "The The" so that they can get back to playing songs.
When you specify `-verbose` you'll get the whole graph of operations,
which gives a better idea of the operations terraform performs and in
what order.
The DOT graph is now generated with a small internal library instead of
simple string building. This allows us to ensure the graph generation is
as consistent as possible, among other benefits.
We set `newrank = true` in the graph, which I've found does just as good
a job organizing things visually as manually attempting to rank the nodes
based on depth.
This also fixes `-module-depth`, which was broken post-AST refactor.
Modules are now expanded into subgraphs with labels and borders. We
have yet to regain the plan graphing functionality, so I removed that
from the docs for now.
Finally, if `-draw-cycles` is added, extra colored edges will be drawn
to indicate the path of any cycles detected in the graph.
A notable implementation change included here is that
{Reverse,}DepthFirstWalk has been made deterministic. (Before it was
dependent on `map` ordering.) This turned out to be unnecessary to gain
determinism in the final DOT-level implementation, but it seemed
a desirable enough property that I left it in.
Most CBD-related cycles include destroy nodes, and destroy nodes were
all being pruned from the graph before starting the Validate walk.
In practice this meant that we had scenarios that would error out with
graph cycles on Apply that _seemed_ fine during Plan.
This introduces a Verbose option to the GraphBuilder that tells it to
generate a "worst-case" graph. Validate sets this to true so that cycle
errors will always trigger at this step if they're going to happen.
(This Verbose option will be exposed as a CLI flag to `terraform graph`
in a second incoming PR.)
refs #1651
VCS detection was on by default, and blows up when the tests are run in
a copy of the Terraform source that is not a git repository, like - say
- during a Homebrew formula install, just to pick a random example. :)
If the cached state file contains a remote type field with upper case
characters, eg 'Consul', it was no longer possible to find the 'consul'
remote plugin.
Add `-target=resource` flag to core operations, allowing users to
target specific resources in their infrastructure. When `-target` is
used, the operation will only apply to that resource and its
dependencies.
The calculated dependencies are different depending on whether we're
running a normal operation or a `terraform destroy`.
Generally, "dependencies" refers to ancestors: resources falling
_before_ the target in the graph, because their changes are required to
accurately act on the target.
For destroys, "dependencies" are descendants: those resources which fall
_after_ the target. These resources depend on our target, which is going
to be destroyed, so they should also be destroyed.
URLs are `/`-based. Windows path Separator is `\`.
Convert `\` in test fixture path to `/` with filepath.ToSlash
such that proper URLs are constructed even on Windows.
Fixes a test failure on Windows.