Once you start reading from stdin, that is a blocking call that will
never finish. So when a context is canceled, causing the input method to
return, the read remains blocked in its goroutine.
There isn't a real fix for this (i.e. it's not possible to unblock the
read), so the only solution is to make the reader reusable.
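A minimal sketch of that pattern (illustrative names, not the exact
implementation): one long-lived goroutine owns the blocking read, and
each input request waits on its result channel or the context, whichever
comes first.

    package uiinput

    import (
        "bufio"
        "context"
        "os"
    )

    type reusableStdin struct {
        lines chan string
    }

    func newReusableStdin() *reusableStdin {
        r := &reusableStdin{lines: make(chan string)}
        go func() {
            // This read may block forever; the goroutine is never torn down.
            sc := bufio.NewScanner(os.Stdin)
            for sc.Scan() {
                r.lines <- sc.Text()
            }
            close(r.lines)
        }()
        return r
    }

    func (r *reusableStdin) Input(ctx context.Context) (string, error) {
        select {
        case line := <-r.lines:
            return line, nil
        case <-ctx.Done():
            // The pending read stays queued for the next caller.
            return "", ctx.Err()
        }
    }

A canceled caller abandons its wait, but the line eventually read from
stdin is delivered to whichever caller asks next.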
When rendering the diff, the NoOp changes should come from the LCS
sequence rather than the new sequence. The two indexes will not align in
many cases, which could add the wrong new object or index out of bounds.
* command/state_list.go: fix bug loading user-defined state
If the user supplied a state path via the `-state` flag and terraform
was running in a workspace other than `default`, the state was not being
loaded properly. Fixes #19920
In studying existing providers we've found a pattern we weren't
previously accounting for: using a nested block type to represent a
group of arguments that relate to a particular feature that is always
enabled, but where it improves configuration readability to group all of
its settings together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
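As a sketch (hypothetical "settings" block for illustration, using the
configschema and cty packages), a schema using the new mode might look
like:

    var example = &configschema.Block{
        BlockTypes: map[string]*configschema.NestedBlock{
            "settings": {
                // Never null: when the block is absent, its contents are
                // simply unset, so attribute defaults still apply.
                Nesting: configschema.NestingGroup,
                Block: configschema.Block{
                    Attributes: map[string]*configschema.Attribute{
                        "retention_days": {Type: cty.Number, Optional: true},
                    },
                },
            },
        },
    }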
If the registry is unresponsive, you will now get an error
specific to this, rather than a misleading "provider unavailable"-type
error. This also adds debug logging for the places where errors like
this may occur.
Due to these tests happening in the wrong order, removing an object from
the end of a sequence of objects would previously cause a bounds-check
panic.
Rather than a more severe rework of the logic here, for now we'll just
introduce an extra precondition to prevent the panic. The code that
follows already handles the case where there _is_ no new object (i.e. the
"old" object is being deleted) as long as we're able to pass through this
type-checking logic.
The new "JSON list of objects - removing item" test covers this problem
by rendering a diff for an object being removed from the end of a list
of objects within a JSON value.
Terraform Registry (and other registry implementations) can now return
an array of warnings with the versions response. These warnings are now
displayed to the user during a `terraform init`.
In earlier refactoring we updated these commands to support the new
address and state types, but attempted to partially retain the old-style
"StateFilter" abstraction that originally lived in the Terraform package,
even though that was no longer being used for any other functionality.
Unfortunately the adaptation of the existing filtering to the new types
wasn't exact and so these commands ended up having a few bugs that were
not covered by the existing tests.
Since the old StateFilter behavior was the source of various misbehavior
anyway, here it's removed altogether and replaced with some simpler
functions in the state_meta.go file that are tailored to the use-cases of
these sub-commands.
As well as just generally behaving more consistently with the other
parts of Terraform that use the new resource address types, this commit
fixes the following bugs:
- A resource address of aws_instance.foo would previously match a
resource of that type and name in any module, which disagreed with the
expected interpretation elsewhere of meaning a single resource in the
root module.
- The "terraform state mv" command was not supporting moves from a single
resource address to an indexed address and vice-versa, because the old
logic didn't need to make that distinction, whereas they are two separate
address types in the new logic. Now we allow resources that do not have
count/for_each to be treated as if they are instances for the purposes
of this command, which is a better match for likely user intent and for
the old behavior.
Finally, we also clean up some of the usage output from these commands
a little; it hadn't been updated for some time and so had both stale
information and inaccurate terminology.
* command/providers schemas: return empty json object if config parses successfully but no providers found
* command/show (state): return an empty object if state is nil
* configs/configupgrade: detect possible relative module sources
If a module source appears to be a relative local path but does not have
a preceding ./, print a #TODO message for the user.
* internal/initwd: limit go-getter detectors to those supported by terraform
* internal/initwd: move isMaybeRelativeLocalPath check into getWithGoGetter
To avoid making two calls to getter.Detect, which potentially makes
non-trivial API calls, the "isMaybeRelativeLocalPath" check was moved to
a later step and a custom error type was added so user-friendly
diagnostics could be displayed in the event that a possible relative local
path was detected.
Our initial prototype of new-style diff rendering excluded this because
the old SDK has no support for this construct. However, we want to be able
to introduce this construct in the new SDK without breaking compatibility
with existing versions of Terraform Core, so we need to implement it now
so it's ready to be used once the SDK implements it.
The key associated with each block allows us to properly correlate the
items to recognize the difference between an in-place update of an
existing block and the addition/deletion of a block.
Our null-to-empty normalization was previously assuming these would always
be collection types, but that isn't true when a block contains something
dynamic since we must then use tuple or object types instead to properly
represent all of the individual element types.
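A sketch of the normalization with the tuple/object cases added
(function name illustrative, using the cty API):

    func normalizeNullBlockValue(v cty.Value) cty.Value {
        if !v.IsNull() {
            return v
        }
        ty := v.Type()
        switch {
        case ty.IsListType():
            return cty.ListValEmpty(ty.ElementType())
        case ty.IsSetType():
            return cty.SetValEmpty(ty.ElementType())
        case ty.IsMapType():
            return cty.MapValEmpty(ty.ElementType())
        case ty.IsTupleType():
            // blocks containing dynamically-typed attributes use tuples
            return cty.EmptyTupleVal
        case ty.IsObjectType():
            // ...or objects, for single nested blocks
            return cty.EmptyObjectVal
        default:
            return v
        }
    }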
We use cty a little differently when a nested list block contains a
dynamically-typed attribute: it appears as a tuple value instead of a
list value so that we can retain the individual types of each element.
Here we introduce a test for that case, but doing so required also making
the runTestCases function handle types in a stricter way so that it will
produce planned values that match how Terraform Core would do it,
including the necessary late-bound type information for the
dynamically-typed attribute.
Previously, these commands were not checking if the user specified a
`-plugin-dir` flag during `terraform init` and would therefore fail if
providers were not in one of the standard directories.
Fixes #20547
When the user aborts input, it may end up as an unknown value, which
needs to be converted to null for PrepareConfig.
Allow PrepareConfig to accept null config values in order to fill in
missing defaults.
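A minimal sketch of that conversion, using cty's deep unknown-to-null
helper; the PrepareConfig signature is assumed from the backend
interface:

    // prepareBackendConfig is a sketch; it deep-converts any unknowns
    // (e.g. from aborted input) to null so that PrepareConfig can fill
    // in missing defaults rather than failing.
    func prepareBackendConfig(b backend.Backend, configVal cty.Value) (cty.Value, tfdiags.Diagnostics) {
        return b.PrepareConfig(cty.UnknownAsNull(configVal))
    }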
When a planfile is supplied to the `terraform show -json` command, the
loaded context included only schemas for the resources in the plan. We
found an edge case where removing a data source from the configuration
(though only if there are no managed resources from the same provider)
would cause jsonstate.Marshal to fail because the provider schema wasn't
in the plan context.
jsonplan.Marshal now takes two schemas, one for plan and one for state.
If the state schema is nil it will simply use the plan schemas.
* command/show: fixing bugs in modulecalls
jsonconfig and jsonplan both had subtle bugs with the logic for
marshaling module calls that only showed up when multiple modules were
referenced. This PR fixes those bugs and extends the existing tests to
include multiple modules.
* sort all the things, mostly for tests
* docs: update plan command documentation. Fixes#19235
* docs: added a missing reserved variable name. Fixes#19159.
* website: add note that resource names cannot start with a number
* website: add some notes to the 0.12 upgrade guide
We are now allowing the legacy SDK to opt out of the safety checks we try
to do after plan and apply, and so in such cases the before/after values
in planned changes may be inconsistent with our usual rules.
To avoid adding lots of extra complexity to the diff renderer to deal with
these situations, instead we'll normalize the handling of nested blocks
prior to using these values.
In the long run it'd be better to do this normalization at the source,
immediately after we receive an object from a provider using the opt-out,
but we're doing this at the outermost layer for now to avoid risking
unintended impacts on other Terraform Core components when we're just
about to enter the beta phase of the v0.12.0 release cycle.
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
We brought forward a new implementation of "terraform validate" that was
originally scheduled for a later release after finding that it would be
simpler than reworking the old implementation for new v0.12 assumptions,
but we didn't yet implement "terraform plan -validate-only" in spite of
it being mentioned in the updated docs for "terraform validate".
For now then, the documentation will make the weaker suggestion of running
"terraform plan" to validate a particular _run_ rather than a particular
_module_, which is the closest thing we have for now. At some point after
v0.12.0 we will evaluate whether a validate-only mode for "terraform plan"
(which could then run without configuring the providers at all) is needed.
A common new-user mistake is to place variable _declarations_ into .tfvars
files instead of variable _values_. To guide towards the correct approach
here, we add a specialized error message for that situation that includes
guidance on the distinction between declaring and setting values for
variables, and an example of what setting a value should look like.
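For example (with an illustrative variable name): a .tfvars file should
contain only assignments such as `region = "us-east-1"`, while the
corresponding declaration, `variable "region" {}`, belongs in a .tf file
instead.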
* command/jsonconfig: provider config marshaling enhancements
This PR fixes a bug wherein the keys in "provider_config" were the
"addrs.ProviderConfig" and were therefore being overwritten for each
module, instead of the intended "addrs.AbsProviderConfig".
We realized that there was still opportunity for ambiguity, for example
if a user made a provider alias that was the same name as a module, so
we opted to use the syntax `modulename:providername(.provideralias)`
* command/json*: fixed a bug where we were attempting to look up
schemas using the provider name instead of the provider type.
* command/show: add "module_version" to "module_calls" in config portion
of `terraform show`.
Also extended the `terraform show -json` test to run `init` so we could
add examples with modules. This does _not_ test the "module_version"
yet, but it _did_ help expose a bug in jsonplan where modules were
duplicated. This is also fixed in this PR.
* command/jsonconfig: rename version to version_constraint and
resolved_source to source.
* command/jsonconfig: display module variables in config output
The tests have been updated to reflect this change.
* command/jsonconfig: properly handle variables with nil defaults
Now that we're actually verifying correct behavior of providers during
plan and apply, our mock providers need to behave like real providers,
properly propagating any configured values through the plan and into the
final state.
For most of these it was simpler to just switch over to using the newer
PlanResourceChangeFn mock interface, away from the legacy DiffFn approach,
because then we can just return the ProposedNewState verbatim because our
schema for these tests does not require any default values to be
populated.
* command/jsonplan:
- add variables to plan output
- print known planned values for resources
Previously, resource attribute values were only displayed if the values
were wholly known. Now we will filter the unknown values out of the
change and print the known values.
* command/jsonstate: added depends_on and tainted fields
* command/show: update tests to reflect added fields
We now require a provider to populate all of its defaults -- including
unknown value placeholders -- during PlanResourceChange. That means the
mock provider for testing "terraform show -json" must now manage the
population of the computed "id" attribute during plan.
To make this logic a little easier, we also change the ApplyResourceChange
implementation to fill in a non-null id, since that makes it easier for
the mock PlanResourceChange to recognize when it needs to populate that
default value during an update.
* command/jsonstate: do not hide SchemaVersion of '0'
* command/jsonconfig: module_calls should be a map
* command/jsonplan: include current terraform version in output
* command/jsonconfig: properly marshal expressions from a module call
Previously this was looking at the root module's variables, instead of
the child module variables, to build the module schema. This fixes that
bug.
* command/show: add support for -json output for state
* command/jsonconfig: do not marshal empty count/for each expressions
* command/jsonstate: continue gracefully if the terraform version is somehow missing from state
* command/jsonplan: sort resources by address
* command/show: extend test case to include resources with count
* command/json*: document resource ordering as consistent but undefined
* command/show: properly marshal attribute values to json
marshalAttributeValues in the jsonstate and jsonplan packages was
returning a cty.Value, which encoding/json could not marshal. These
functions now convert those cty.Values into json.RawMessages.
* command/jsonplan: planned values should include resources that are not changing
* command/jsonplan: return a filtered list of proposed 'after' attributes
Previously, proposed 'after' attributes were not being shown if the
attributes were not WhollyKnown. jsonplan now iterates through all the
`after` attributes, omitting those which are not wholly known.
The same was roughly true for after_unknown, and that structure is now
correctly populated. In the future we may choose to filter the
after_unknown structure to _only_ display unknown attributes, instead of
all attributes.
* command/jsonconfig: use a unique key for providers so that aliased
providers don't get munged together
This now uses the same "provider" key from configs.Module, e.g.
`providername.provideralias`.
* command/jsonplan: unknownAsBool needs to iterate through objects that are not wholly known
* command/jsonplan: properly display actions as strings according to the RFC,
instead of a plans.Action string.
For example, a plans.Action of DeleteThenCreate should be displayed as
["delete", "create"].
Tests have been updated to reflect this.
* command/jsonplan: return "null" for unknown list items.
The length of a list could be meaningful on its own, so we will turn
unknowns into "null". The same is less likely to be true for maps and
objects, so we will continue to omit unknown values from those (see the
sketch below).
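A sketch of that rule for lists, using encoding/json and the ctyjson
helper package; it assumes elements that are known are wholly known:

    func listElemsWithNulls(v cty.Value) ([]json.RawMessage, error) {
        var out []json.RawMessage
        for it := v.ElementIterator(); it.Next(); {
            _, ev := it.Element()
            if !ev.IsKnown() {
                // Placeholder null keeps the list length meaningful.
                out = append(out, json.RawMessage("null"))
                continue
            }
            src, err := ctyjson.Marshal(ev, ev.Type())
            if err != nil {
                return nil, err
            }
            out = append(out, src)
        }
        return out, nil
    }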
We missed this on the initial update pass because this was calling
directly into the module package API rather than going through the Meta
methods that we updated for the new config loader.
m.installModules here is the same method that "terraform init" is using
for this purpose, ensuring the two will behave the same way. This changes
the output a little compared to the old installer, but it still includes
the important information about where each module is coming from.
This possibility was lost in the rewrite to use HCL2, but it's used by
a number of external utilities and text editor integrations, so we'll
restore it here.
Using the stdin/stdout mode is generally preferable for text editor use
since it allows formatting of the in-memory buffer rather than directly
the file on disk, but for editors that don't have support for that sort of
tooling it can be convenient to just launch a single command and directly
modify the on-disk file.
Since the HCL formatter only works with tokens, it can in principle be
called with any input and produce some output. However, when given invalid
syntax it will tend to produce nonsensical results that may drastically
change the input file and be hard for the user to undo.
Since there's no strong reason to try to format an invalid or incomplete
file, we'll instead try parsing first and fail if parsing does not
complete successfully.
Since we talk directly to the HCL API here this is only a _syntax_ check,
and so it can be applied to files that are invalid in other ways as far
as Terraform is concerned, such as using unsupported top-level block types,
resource types that don't exist, etc.
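A minimal sketch of the parse-then-format flow, assuming the HCL2
hclsyntax and hclwrite packages:

    func formatFile(path string) ([]byte, error) {
        src, err := ioutil.ReadFile(path)
        if err != nil {
            return nil, err
        }
        // Parse first: the token-based formatter would happily "format"
        // invalid syntax into nonsense that is hard to undo.
        _, diags := hclsyntax.ParseConfig(src, path, hcl.Pos{Line: 1, Column: 1})
        if diags.HasErrors() {
            return nil, diags
        }
        return hclwrite.Format(src), nil
    }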
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.
This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.
In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.
Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.
The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
excludes Terraform versions prior to v0.12 then the configuration is
probably _already_ upgraded and so it's just a normal syntax error,
even if the early loader didn't detect it.
Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
* command/show: added test scaffold for json output
More test cases will be added once the basic shape of the tests is
validated.
- command/json* packages now sort resources by address, matching
behavior elsewhere
- using cmp in tests instead of reflect.DeepEqual for the diffs
- updating expected output in tests to match sorting
Previously we were doing this rather inconsistently: some commands would
do it and others would not. By doing it here we ensure we always apply the
same normalization, regardless of which operation we're running.
This normalization is mostly for cosmetic purposes in error messages, but
it also ends up being used to populate path.module and path.root and so
it's important that we always produce consistent results here so that
we don't produce flappy changes as users work with different commands.
The fact that this mutates a data structure as a side-effect is not ideal
but this is the best place to ensure it always gets applied without doing
any significant refactoring, since everything after this point happens in
the backend package where the normalizePath method is not available.
* command/show: adding functions to aid refactoring
The planfile -> statefile -> state logic path was getting hard to follow
with blurry human eyes. The getPlan... and getState... functions were
added to help streamline the logic flow. Continued refactoring may follow.
* command/show: use ctx.Config() instead of a config snapshot
As originally written, the jsonconfig marshaller was getting an error
when loading configs that included one or more modules. It's not clear
if that was an error in the function call or in the configloader itself,
but as a simpler solution existed I did not dig too far.
* command/jsonplan: implement jsonplan.Marshal
Split the `config` portion into a discrete package to aid in naming
sanity (so we could have for example jsonconfig.Resource instead of
jsonplan.ConfigResource) and to enable marshaling the config on its
own.
Older versions of terraform could save the backend hash number in a
value larger than an int.
While we could conditionally decode the state into an intermediary data
structure for upgrade, or detect the specific decode error and modify
the json, it seems simpler to just decode into the most flexible value
for now, which is a uint64.
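A sketch of the relevant field (other fields omitted); decoding into
uint64 accepts legacy hashes that overflow a signed int:

    type backendState struct {
        Type string `json:"type"`
        Hash uint64 `json:"hash"` // was int; old states may hold larger values
    }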
Fixes #18822
The `truncatedId` function was introduced in #12261 and `maxIdLen` was
increased to 80 in #13317. Since the number of bytes itself seems to be
unimportant, the ID should be truncated to 80 characters, not 80 bytes.
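A sketch of character-based truncation: converting to []rune counts
characters rather than bytes, so multi-byte IDs aren't cut mid-rune.

    func truncateId(id string, maxLen int) string {
        runes := []rune(id)
        if len(runes) > maxLen {
            runes = runes[:maxLen]
        }
        return string(runes)
    }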
A lot of commands used `c.Meta.flagSet()` to create the initial flagset
for the command, while quite a few of them didn't actually use or
support the flags that are then added.
So I updated a few commands to use `flag.NewFlagSet()` instead, adding
only the flags that are actually needed/supported.
Additionally, this prevents a few commands from using locking when they
don't actually need it (as locking is enabled by default in
`c.Meta.flagSet()`).
Next to adding locking for the `state push` command, this commit also
fixes a small bug where the lock would not be properly released when
running the `state show` command.
And finally it renames some variables in the `[un]taint` code in order
to standardize the names of a few frequently used variables (e.g. for
statemgr.Full, states.State, states.SyncState).
In a couple places in tests we execute a child "go build" to make a helper
program. Now that we're running in module mode, "go build" will normally
default to downloading and caching dependencies, which we don't want
because we're still using vendoring for the moment.
Therefore we need to instruct these child builds to use vendoring too,
avoiding the need to download all of the dependencies and ensuring that
we'll be building with the same dependencies that we'd use for a normal
build.
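A sketch of such a child build (paths illustrative), forcing vendor mode
explicitly:

    func buildHelper(t *testing.T, exePath string) {
        // Force vendor mode so the child build doesn't try to download
        // and cache module dependencies.
        cmd := exec.Command("go", "build", "-mod=vendor", "-o", exePath, "./helper")
        cmd.Env = os.Environ()
        if out, err := cmd.CombinedOutput(); err != nil {
            t.Fatalf("child build failed: %v\n%s", err, out)
        }
    }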
Several of these tests rely on external services (e.g. Terraform Registry)
that have not yet been updated to support the needs of Terraform v0.12.0,
so for now we'll skip all of these tests and wait until those systems have
been updated.
This should be removed before Terraform v0.12.0 final to enable these
tests to be used as part of pre-release smoke testing.
The local filesystem state manager no longer creates backup files eagerly,
instead creating them only if on first write there is already a snapshot
present in the target file.
Therefore for this test to exercise the codepaths it intends to we must
create an initial state snapshot for it to overwrite, creating the backup
in the process.
There are several other tests for this behavior elsewhere, so this test
is primarily to verify that the refresh command is configuring the backend
appropriately to get the backups written in the desired location.
We now only create a backup state file if the given output file already
exists, which it does not in this test.
(The behavior of creating the backup files is already covered by other
tests, so there's no need for this one to go out of its way to do it.)
We now don't create a local state backup until the first snapshot write,
so we don't expect there to be a backup file until the end of the test.
(There is already a check at the end there, unmodified by this change.)
The filesystem backend has the option of using a different file for its
initial read.
Previously we were incorrectly writing the contents of that file out into
the backup file, rather than the prior contents of the output file. Now
we will always read the output file in RefreshState in order to decide
what we will back up, but then we will also read the input file (when
one is set) and prefer its content as the "current" state snapshot.
This is verified by command.TestMetaBackend_planLocalStatePath and
TestMetaBackend_configureNew, which are both now passing.
The changes to how we handle setting the state path on the local backend
broke the heuristic we were using here for detecting migration from one
local backend to another with the same state path, which would by default
end up deleting the state altogether after migration.
We now use the StatePaths method to do this, which takes into account
both the default values and any settings that have been set.
Additionally this addresses a flaw in the old method which could
potentially have deleted all non-default workspace state files if the
"path" setting were changed without also changing the "workspace_dir"
setting. This new approach is conservative because it will preserve all
of the files if any one overlaps.
In an earlier change we fixed the "backendFromConfig" codepath to be
able to properly detect changes to the -backend-config arguments during
"terraform init", but this detection is too strict for the normal case
of running an operation in a previously-initialized directory.
Before any of the recent changes, the logic here was to selectively update
the hash to include -backend-config settings in the init case. Since
that late hash recalculation was confusing, here we take the alternative
path of using the hash only in the normal case and full value comparison
in the init case. Treating both of these cases separately makes things
marginally easier to follow here.
The import command was imposing the default state path at the CLI level,
rather than leaving that to be handled by the backend. As a result, the
output state was always forced to be terraform.tfstate, regardless of
the backend settings.
This test is testing some strange implementation details of the old
local backend which do not hold with the new filesystem state manager.
Specifically, it was expecting state to be read from the stateOutPath
rather than the statePath, which makes no sense here because the backend
is configured to read from the default terraform.tfstate file (which does
not exist.)
There is another problem with this test which will be addressed in a
subsequent commit.
As part of integrating the new "remote" backend we relaxed the requirement
that a "default" workspace must exist in all backends and now skip
migrating empty workspace states to avoid creating unnecessary "default"
workspaces when switching between backends that require it and backends
that don't, such as when switching from the local backend (which always
has a "default" workspace) to Terraform Enterprise.
This was failing because we now handle the settings for the local backend
a little differently as a result of decoding it with the HCL2 machinery.
Specifically, the backend.State* fields are now assumed to be what is
given in configuration, and any CLI overrides are maintained separately
in OverrideState* fields so that they can be imposed "just in time" in
StatePaths.
This is particularly important because OverrideStatePath (when set) is
used regardless of workspace name, while StatePath is a suitable value
only for the "default" workspace, with others needing to be constructed
from StateWorkspaceDir instead.
Our new state model has a different implementation of "empty" that doesn't
consider lineage/serial, so we need to have some actual content in these
state fixtures to avoid them being skipped during state migrations.
We previously hacked around the import/export functionality being missing
in the statemgr layer after refactoring, but now it's been reintroduced
to fix functionality elsewhere we should use the centralized Import and
Export functions to ensure consistent behavior.
In particular, this pushes the logic for checking lineage and serial
during push down into the state manager itself, which is better because
all other details about lineage and serial are managed within the state
managers.
This test was initially failing because its fixture had a state which our
new state models consider to be "empty", and thus it was not migrated.
After fixing that (by adding an output to the fixture), this revealed a
bug that the lineage was not being persisted through the migration. This
is fixed by using the statemgr.Migrate method instead of writing via the
normal Writer interface, which allows two cooperating state managers to
properly transfer the lineage and serial along with the state snapshot.
This test was incorrectly updated in a previous iteration, with it
creating a modified state to write but then not actually writing it,
writing an empty test state instead.
This made the test fail because a backup state file is created only if
the new state snapshot is different to the old when written.
Some other test is leaving behind a terraform.tfstate after it concludes,
which can cause this test to fail in a strange way due to picking up
extra provider requirements from that state.
This check doesn't fix that problem, but it at least makes the test fail
in a more helpful way to avoid time wasted trying to debug this test when
it's some other test that actually has the bug.
This test is currently failing due to the command completing successfully,
which would previously cause a panic because we didn't properly initialize
the MockUi and so its error buffer is nil unless written to.
(The failure this was masking will be fixed in a subsequent commit.)
In prior refactoring we lost the required core version check from
"terraform init", which we restore here.
Additionally, this test used to have an incorrect name that suggested it
was testing something in the "getProvider" codepath, but version checking
happens regardless of what other options are selected.
After all of the refactoring we were no longer checking the Terraform
version field in a state file, causing this test to fail.
This restores that check, though with a slightly different error message.
This test was using old-style state files as its input, differing only by
lineage. Since lineages are now managed within the state manager itself,
the test can't use that to distinguish the two files and so we put a
different output in each one instead.
This also introduces some TRACE logging to the migration codepaths.
There's some hard-to-follow control flow here and so this extra logging
helps to understand the reason for a particular outcome, and since this
codepath is visited only in "terraform init" anyway it doesn't hurt to
be a bit more verbose here.
In the refactoring for new HCL this codepath stopped taking into account
changes to the CLI -backend-config options when deciding if a backend
migration is required.
This restores that behavior in a different way than it used to be: rather
than re-hashing the merged config and comparing the hashes, we instead
just compare directly the configuration values, which must be exactly
equal in order to skip migration.
This change is covered by the test TestInit_inputFalse, although as of
this commit it is still not passing due to a downstream problem within the
migration code itself.
This test was re-using the same InitCommand value to run multiple times,
which is not realistic. Since we now cache configuration source code
inside command.Meta on load, it's important that we use a fresh
InitCommand instance here so it'll see the modified configuration file
we've left on disk.
Here we were going to the trouble of copying the body so we could mutate
it, but then ended up mutating the original anyway and then returning the
unmodified copy. Whoops!
This fix is verified by a number of "init" command tests that exercise the
-backend-config option, including TestInit_backendConfigFile and several
others whose names have the prefix TestInit_backendConfig .
When we originally wrote this message we struggled a bit for how to refer
to the releases server without writing an awkwardly-ungrammatical
sentence, and so "the official repository" became a placeholder name for
it.
Now that we'll be looking in Terraform Registry this gives us a nice
proper noun to use. This message will need to evolve more as our
integration with the registry gets more sophisticated, but for now this
works.
Some over-zealous bulk updating of this test file caused this test to be
producing a remote state config cache file on disk when it doesn't
actually need one: the backend config comes from the plan file when
applying a saved plan.
Some merging conflict shenanigans here led to this usage not lining up
with the imported symbol name, meaning that the tests couldn't compile any
more.
We missed fixing this up during the big updates for the new plan/state
models since the failures were being masked by testBackendState being
broken.
This is the same sort of update made to many other tests: add schema to
the mock provider, adjust for the new plan/state types, and make
allowances for the new built-in diffing behavior in core.
The hashing function for cached backend configuration is different now, so
our hard-coded hash of the HTTP backend address wasn't working anymore.
Here we update the hash so that tests using this test backend will work
again. Rather than leaving it hard-coded, we'll instead compute it the
same way as "terraform init" would.
In practice only one test is actually using this function right now, so
we also update the test fixture for that test (TestPlan_outBackend) to
match the new expectations, though as of this commit it's still failing
with an unrelated error.
The mission of this process method used to include dealing with
auto-loaded tfvars files, but it doesn't do that anymore.
It does still deal with the -no-color option, but the test wasn't
exercising that part before.
Now the test here focuses on the -no-color behavior.
The process method still has a "vars" flag argument which is no longer
used. Since this is an unexported method we could potentially address this
but this commit is intentionally limited only to fixing the test.
Comments here indicate that this was erroneously returning an error but
we accepted it anyway to get the tests passing again after other work.
The tests over in the "terraform" package agree that cancelling should be
a successful outcome rather than an error.
I think that cancelling _should_ actually be an error, since Terraform did
not complete the operation it set out to complete, but that's a change
we'd need to make cautiously since automation wrapper scripts may be
depending on the success-on-cancel behavior.
Therefore this just fixes the command package test to agree with the
Terraform package tests and adds some FIXME notes to capture the potential
that we might want to update this later.
The State.Equal function is now more precise than this test needs. It's
only trying to distinguish between an empty state and a non-empty state,
so the string representation of state is good enough to get that done
while disregarding other subtle differences.
This new source type should be used for variables loaded from .tfvars files that were explicitly passed as command line arguments (e.g. -var-file=foo.tfvars)
Without using absolute paths, any module info is lost in the output.
And the attributes were randomly ordered and so changed between
different executions of the command.
When HCL encounters an error during expression evaluation, it annotates
its diagnostics with information about the expression that was being
evaluated and the EvalContext it was evaluated in.
This gives us enough information to show helpful hints to the user about
the final values of any reference expressions that are present in the
expression, which is very useful extra context for expressions that get
evaluated multiple times, such as:
- Any expression in a block with "count" or "for_each" set
- The sub-expressions within a "for" expression
This work was done against APIs that were already changed in the branch
before work began, and so it doesn't apply to the v0.12 development work.
To allow v0.12 to merge down to master, we'll revert this work out for now
and then re-introduce equivalent functionality in later commits that works
against the new APIs.
This reinstates an old behavior that was lost in the reorganization of how
we deal with the -var and -var-file options.
This fix is verified by TestApply_planVars now passing.
In the new implementation of collecting variables I initially forgot the
JSON variant of terraform.tfvars.
This fix is verified by TestApply_varFileDefaultJSON now passing.
This was previously targeting the old state manager and state types, so it
needed some considerable rework to get it working again with the new state
types.
Since our new resource address syntax lacks the weird extra .deposed
special case we had before, we instead interpret addresses as
whole-instance addresses here and remove the deposed objects along with
the current one (if present), since this is more likely to match with
user expectations because we don't consider deposed objects to be
independently addressable in any other situation.
With that said, to be more explicit about what is going on we do now have
a -dry-run mode and maintain separate counts of current and deposed
instances so that we can expose that in the UI where relevant.
We temporarily disabled this because it needed some further work to update
it for the new state models, which has now been done.
We no longer need the configuration objects for the outputs because the
state itself contains all of the information needed for displaying these.
We used to treat the "id" attribute of a resource as special and elevate
it into its own struct field "ID" in the state, but the new state format
and provider protocol treats it just as any other attribute.
However, it's still useful to show the value of a single identifying
attribute when there isn't room in the UI for showing all of the
attributes, and so here we take a new strategy of considering "id" along
with some other conventional names as special only in the UI layer.
This new heuristic approach can be adjusted over time as new provider
patterns emerge, but for now it covers some common conventions we've seen
in real providers.
With that said, since all existing providers made for Terraform versions
prior to v0.12 were forced to set "id", we won't see any use of other
attributes here until providers are updated to remove the placeholder
ids they were generating in cases where an id was not actually relevant
but was forced by the old protocol. At that point the UX should be
improved by showing a more relevant attribute instead.
We now also allow for the possibility of no id at all, since that is valid
for resources that exist only within the Terraform state, like the ones
from the "random" and "tls" providers.
This command isn't yet updated for the new state types, but since we were
not returning a non-successful error status here the tests were just
failing in a weird way instead. Now we'll fail with a message that makes
it clear there is still work to do in the real implementation here.
We previously stubbed most of this out because it hadn't yet been updated
to support the new state types, etc.
This restores all of the previous behavior as covered by the tests.
We intentionally remove one behavior that was not covered by the tests:
we used to allow retrieval of outputs from non-root modules using the
-module option, but since we no longer persist non-root outputs in the
state we can no longer support this without a full expression evaluation
walk, and that'd be overkill for this otherwise-simple command. Descendant
module outputs are not part of the public interface of a configuration
anyway, so accessing them from outside in this way is an anti-pattern.
(For debugging scenarios it is still possible to access these from
"terraform console", which _does_ do a full evaluation graph walk to
prepare its evaluation scope.)
The plan file writer requires a backend config to be present, but we don't
really need one for the sake of _this_ test, since we don't activate the
backend to render a plan graph, and so we just write in a placeholder.
This connects a missing link left by earlier refactoring: the command
package is responsible for gathering up variable values provided by the
user and passing them through to the backend to use in operations.
Our serialization of the backend configuration has changed slightly for
Terraform 0.12 due to reimplementing it in terms of the HCL2 types, so
the base case that should be unchanged during the test needs to be
changed.
In all real cases the schemas should be populated here, but we don't want
to panic in UI rendering code if there's a bug here.
This can also be tripped up by tests with incomplete mocks. It's
unfortunate that this can therefore mask some problems in tests, but tests
can protect against it by asserting on specific output text rather than
just assuming that a zero exit status is a pass.
If we fail to parse the resource address given to "terraform import" then
it's helpful to produce a "source code" snippet of what the user provided
so they might see more precisely which part of the address was invalid.
Most of this is just updates to allow for the fact that we now always save
the provider address as part of resource state, whereas before it was only
saved conditionally.
This also updates TestTaint_module for the intentional change that it now
expects a child module to be specified using normal resource address
syntax, rather than as a separate -module option.
Added a very simple test with state and schema.
TODO: if more tests are added we should switch to testing with golden
files (and example state files, instead of strings). This seemed
unnecessary for the simple test cases.
In previous work we didn't quite connect these dots. The connection here
is sub-awesome since the existing interfaces here had some unfortunate
assumptions that we'd like to move away from (like the idea of a "nil
backend" implying the local backend) but we're accepting this for now to
avoid another big round of refactoring.
The main implication of this is that we will now always include a backend
configuration in the plan, though it might just be a placeholder config
for the local backend in the remaining cases where that's still implicitly
set. Later we will change this so that there is no implicit local backend
at all (terraform init is always required, _it_ will deal with setting
implicitly setting the local backend when appropriate), which will allow
us to rework this to be more straightforward and less "spooky".
If we don't do this, we can't produce any output when applying a saved
plan file.
Here we also introduce a check to the local backend's ReportResult
function so that it won't panic if CLI init is skipped, although that
will no longer happen in the apply-from-file case due to the change
described in the previous paragraph.
It must now provide a basic implementation of plan and apply for its
mock provider, which in this case can just pass through the proposed value
generated by core because there are no computed attributes in this schema.
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.
If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.
As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
For PreApply hook purposes we only actually use the Delete, Create, and
Update actions, because other actions are handled in different ways than
a direct call to ApplyResourceChange.
However, if there's a bug in core that causes it to pass a different
action, it's better for us to mark it as being explicitly unknown in the
UI rather than simply defaulting to "Modifying...", which can thus obscure
the problem and make for a confusing result.
We'll now show an "update" symbol prior to the argument to this synthetic
jsonencode(...) call, for consistency with how we show nested values in
other cases and to attach a verb to any "# forces replacement".
We'll also show a special form in the case where the value seems to differ
only in whitespace, so users can understand what's going on in that
hopefully-rare situation, particularly if those whitespace-only changes
end up forcing us to replace a remote object.
Since our own syntax for primitive values is similar to that of JSON, and
since we permit automatic conversions from number and bool to string, we
must do this special JSON value diff formatting only if the value is a
JSON array or object to avoid confusing results.
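A sketch of that guard: only strings that parse as valid JSON and begin
as an array or object get the special rendering.

    func looksLikeJSONCollection(s string) bool {
        t := strings.TrimSpace(s)
        if t == "" || (t[0] != '{' && t[0] != '[') {
            // Bare strings, numbers and bools convert too easily to be
            // treated as intentional JSON.
            return false
        }
        return json.Valid([]byte(t))
    }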
Because so far we've not supported dynamically-typed complex data
structures, several providers have used strings containing JSON to stand
in for these.
In order to get a readable diff in those cases, we'll recognize situations
where old and new are both JSON and present a diff of the effective value
of the JSON, using a faux call to the jsonencode(...) function to indicate
when we've done so.
This is a bit of a "cute" heuristic, but is important at least for now
until we can migrate away from that practice of passing large JSON strings
to providers and use dynamically-typed attributes instead.
This extra comment line gives us a place to show the full resource address
(since the block header line only includes type and name) and also allows
us to explain in long form the meaning of the change icon on the following
line.
This is a light adaptation of our earlier prototype of structural diff
rendering, as a starting point for what we'll actually ship. This is not
consistent with the latest mocks, so will need some additional work before
it is ready, but integrating this allows us to at least see the plan
contents while fixing up remaining issues elsewhere.
Previously we just left these out of the plan altogether, but in the new
plan types we intentionally include change information for every resource
instance, even if no changes are actually planned, to allow alternative
plan file viewers to show what isn't changing as well as what is.
This codepath is going to be significantly changed before release to make
it support structural diff of the new data types, but this lets us lean on
the old renderer to produce partial output in the mean time while we
continue to work on getting things working end-to-end after the
considerable refactoring that's been going on.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
In order to properly migrate the contents of resource, data, provider and
provisioner blocks we will need the provider's schema in order to
understand what is expected, so we can resolve some ambiguities inherent
in the legacy HCL AST.
This includes an initial prototype of migrating the content of resource
blocks just to verify that the information is being gathered correctly.
As with the rest of the upgrade_native.go file, this will be reorganized
significantly once the basic end-to-end flow is established and we can
see how to organize this code better.
Since the intent of the validate command is to check config validity
regardless of context (input variables, state, etc), we use unknown values
of the requested type here, which will then allow us to complete type
checking against the specified types of the variables without assuming
any particular values.
This is the frontend to the work-in-progress codepath for upgrading the
source code for a module written for Terraform v0.11 or earlier to use
the new syntax and idiom of v0.12.
The underlying upgrade code is not yet complete as of this commit, and
so the command is not yet very useful. We will continue to iterate on
the upgrade code in subsequent commits.
Because we gather together diagnostics from many different parts of the
codebase, the list often ends up being in a non-ideal order. Here we
define a partial ordering for diagnostics that should hopefully make them
easier to scan when many are present, by grouping together diagnostics
that are of the same severity and belong to the same file.
We use sort.Stable here because we have a partial order and so we need
to make sure that diagnostics that do not have a relative ordering will
remain in their original order.
This sorting is applied just in time before rendering the diagnostics
in command.Meta.showDiagnostics.
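A sketch of that ordering (names and grouping illustrative, against the
tfdiags API); sort.Stable leaves diagnostics the Less function treats as
equal in their original order:

    type sortDiagnostics []tfdiags.Diagnostic

    func (s sortDiagnostics) Len() int      { return len(s) }
    func (s sortDiagnostics) Swap(i, j int) { s[i], s[j] = s[j], s[i] }

    func (s sortDiagnostics) Less(i, j int) bool {
        if s[i].Severity() != s[j].Severity() {
            return s[i].Severity() == tfdiags.Warning // group by severity first
        }
        iSub, jSub := s[i].Source().Subject, s[j].Source().Subject
        switch {
        case (iSub == nil) != (jSub == nil):
            return iSub != nil // sourceless diagnostics sort last
        case iSub != nil && iSub.Filename != jSub.Filename:
            return iSub.Filename < jSub.Filename // then group by file
        default:
            return false // no relative order: sort.Stable keeps input order
        }
    }

It would then be applied as `sort.Stable(sortDiagnostics(diags))` just
before rendering.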
This doesn't yet include test updates, since there are problems in core
currently blocking these tests from running. The tests will therefore be
updated in a subsequent commit.
Previously we were defaulting the provider configuration selection to a
provider in the root module inferred from the resource type name.
This is close, but not quite right: we need to _start_ with a provider
configuration in the same module as we're importing into, and then our
provider resolution steps during import graph construction will use that
as a starting point for a walk up the tree to find the nearest matching
configuration (which might eventually still be in the root, but not
necessarily).
This now uses the HCL2 parser and evaluator APIs and evaluates in terms
of a new-style *lang.Scope, rather than the old terraform.Interpolator
type that is no longer functional.
The Context.Eval method used here behaves differently than the
Context.Interpolater method used previously: it performs a graph walk
to populate transient values such as input variables, local values, and
output values, and produces its scope in terms of the result of that
graph walk. Because of this, it is a lot more robust than the prior method
when asked to resolve references other than those that are persisted
in the state.
Previously an empty diagnostics would appear as "null" in the JSON output,
since that is how encoding/json serializes a nil slice. It's more
convenient for users of dynamic languages to keep the type consistent
in all cases, since they can then just iterate the list without needing a
special case for when it is null.
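For illustration, the encoding/json behavior that motivates this; the
fix is to always allocate the slice rather than leaving it nil:

    a, _ := json.Marshal([]string(nil)) // nil slice
    b, _ := json.Marshal([]string{})    // empty slice
    fmt.Println(string(a), string(b))   // prints: null []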
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
For the moment this is just a lightly-adapted copy of
ModuleTreeDependencies named ConfigTreeDependencies, with the goal that
the two can live concurrently for the moment while not all callers are yet
updated and then we can drop ModuleTreeDependencies and its helper
functions altogether in a later commit.
This can then be used to make "terraform init" and "terraform providers"
work properly with the HCL2-powered configuration loader.
This is a rather-messy, complex change to get the "command" package
building again against the new backend API that was updated for
the new configuration loader.
A lot of this is mechanical rewriting to the new API, but
meta_config.go and meta_backend.go in particular saw some major
changes to interface with the new loader APIs and to deal with
the change in order of steps in the backend API.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
The remote API this talks to will be going away very soon, before our next
major release, and so we'll remove the command altogether in that release.
This also removes the "encodeHCL" function, which was used only for
adding a .tfvars-formatted file to the uploaded archive.
In the long run we'd like to offer machine-readable output for more
commands, but for now we'll just start with a tactical feature in
"terraform validate" since this is useful for automated testing scenarios,
editor integrations, etc, and doesn't include any representations of types
that are expected to have breaking changes in the near future.
As part of some light reorganization of our commands, this new
implementation no longer does validation of variables and will thus avoid
the need to spin up a fully-valid context. Instead, its focus is on
validating the configuration itself, regardless of any variables, state,
etc.
This change anticipates us later adding a -validate-only flag to
"terraform plan" which will then take over the related use-case of
checking if a particular execution of Terraform is valid, _including_ the
state, variables, etc.
Although leaving variables out of validate feels pretty arbitrary today
while all of the variable sources are local anyway, we have plans to
allow per-workspace variables to be stored in the backend in future and
at that point it will no longer be possible to fully validate variables
without accessing the backend. The "terraform plan" command explicitly
requires access to the backend, while "terraform validate" is now
explicitly for local-only validation of a single module.
In a future commit this will be extended to do basic type checking of
the configuration based on provider schemas, etc.
We need to share a single config loader across all callers because that
allows us to maintain the source code cache we'll use for snippets in
error messages.
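A rough sketch of that sharing, assuming a lazily-initialized loader field on the command Meta (the helper name and modules directory here are assumptions, not the exact API):

    package command

    import "github.com/hashicorp/terraform/configs/configload"

    // Meta is reduced here to the single field this sketch needs.
    type Meta struct {
        loader *configload.Loader
    }

    // configLoader lazily creates one shared loader, so every caller reads
    // files through the same source cache used for diagnostic snippets.
    func (m *Meta) configLoader() (*configload.Loader, error) {
        if m.loader != nil {
            return m.loader, nil
        }
        loader, err := configload.NewLoader(&configload.Config{
            ModulesDir: ".terraform/modules", // assumed location
        })
        if err != nil {
            return nil, err
        }
        m.loader = loader
        return m.loader, nil
    }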
Nothing calls this yet. Callers will be gradually updated away from Module
and Config in subsequent commits.
If we get a diagnostic message that references a source range, and if the
source code for the referenced file is available, we'll show a snippet of
the source code with the source range highlighted.
At the moment we have no cache of source code, so in practice this
codepath can never be visited. Callers to format.Diagnostic will be
gradually updated in subsequent commits.
In some cases this is needed to keep the UX clean and to make sure any remote exit codes are passed through to the local process.
The most obvious example for this is when using the "remote" backend. This backend runs Terraform remotely and streams the output back to the local terminal.
When an error occurs during the remote execution, all the needed error information will already be in the streamed output. So if we then return an error ourselves, users will get the same errors twice.
By allowing the backend to specify the correct exit code, the UX remains the same while preserving the correct exit codes.
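In sketch form, under the assumption of a hypothetical ExitCoder interface (the real wiring differs, but the pass-through idea is the same):

    package main

    // ExitCoder is a hypothetical interface that an operation's result error
    // could implement so the backend can dictate the process exit code.
    type ExitCoder interface {
        ExitCode() int
    }

    // exitCodeFor passes a backend-specified code straight through; the
    // streamed remote output has already shown any error details, so we
    // avoid printing the same errors a second time.
    func exitCodeFor(result error) int {
        if ec, ok := result.(ExitCoder); ok {
            return ec.ExitCode()
        }
        if result != nil {
            return 1
        }
        return 0
    }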
Certain backends (currently only the `remote` backend) do not support using both the default and named workspaces at the same time.
To make the migration easier for users that currently use both types of workspaces, this commit adds logic to ask the user for a new workspace name during the migration process.
This commit fixes a bug that (in the case of the `local` backend) would only check if the selected workspace had a state when deciding to perform a migration.
When the selected workspace didn’t have a state (but other existing workspace(s) did), the migration would not be performed and the other workspaces would be ignored.
By adding this method you now only have to pass a `*disco.Disco` object around in order to do discovery and use any configured credentials for the discovered hosts.
Of course you can also still pass around both a `*disco.Disco` and an `auth.CredentialsSource` object if there is a need or a reason for that!
- Fixes#11696
- This change makes `terraform output -json` return '{}' instead of
throwing an error about "no outputs defined"
- If `-json` is not set, the user will receive an error as before
- This UX helps new users to understand how outputs are used
- Allows for easier automation of TF CLI as an empty set of outputs is
usually acceptable, but any other error from `output` would be
re-raised to the user.
Rather than try to modify all the hundreds of calls to the temp helper
functions, and cleanup the temp files at every call site, have all tests
work within a single temp directory that is removed at the end of
TestMain.
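The arrangement looks roughly like this (the variable name is an assumption):

    package command

    import (
        "io/ioutil"
        "log"
        "os"
        "testing"
    )

    var testTempDir string

    func TestMain(m *testing.M) {
        var err error
        testTempDir, err = ioutil.TempDir("", "tf-command-tests")
        if err != nil {
            log.Fatal(err)
        }
        code := m.Run()
        // one cleanup here instead of one at every call site
        os.RemoveAll(testTempDir)
        os.Exit(code)
    }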
The state locking improvements for the regular command had the side
effect of locking the state in the console, import, graph and push
commands. Those commands had been updated to get a state via the
Backend.Context method, which locks the state whenever possible, and now
need to call Unlock directly.
Add Unlock calls to all commands that call Context directly.
Use the new StateLocker field to provide a wrapper for locking the state
during terraform.Context creation in the commands that directly
manipulate the state.
Simplify the use of clistate.Lock by creating a clistate.Locker
instance, which stores the context of locking a state, to allow unlock
to be called without knowledge of how the state was locked.
This allows the backend code to bring the needed UI methods to the point
where the state is locked, and still unlock the state from an outer
scope.
Provide a NoopLocker as well, so that callers can always call Unlock
without verifying the status of the lock.
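As a sketch, assuming a shape like the following (the real clistate signatures differ in detail):

    package clistate

    // Locker remembers how a state was locked, so Unlock can be called later
    // from an outer scope without any of that knowledge.
    type Locker interface {
        Lock(reason string) error
        Unlock() error
    }

    // noopLocker backs the NoopLocker behavior: callers may unconditionally
    // call Unlock even when no lock was ever taken.
    type noopLocker struct{}

    func (noopLocker) Lock(string) error { return nil }
    func (noopLocker) Unlock() error     { return nil }

    func NewNoopLocker() Locker { return noopLocker{} }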
Add the StateLocker field to the backend.Operation, so that the state
lock can be carried between the different function scopes of the backend
code. This will allow the backend context to lock the state before it's
read, while allowing the different operations to unlock the state when
they complete.
The error was being silently dropped before.
There is an interpolation error, because the plan is canceled before
some of the resources can be evaluated. There might be a better way to
handle this in the walk cancellation, but the behavior has not changed.
Make the plan and apply shutdown match implementation-wise
If the user wishes to interrupt the running operation, only the first
interrupt was communicated to the operation by canceling the provided
context. A second interrupt would start the shutdown process, but not
communicate this to the running operation. This order of events could
cause partial writes of state.
What would happen is that once the command returns, the plugin system
would stop the provider processes. Once the provider processes die, all
pending Eval operations would return with an error, and quickly
cause the operation to complete. Since the backend code didn't know that
the process was shutting down imminently, it would continue by
attempting to write out the last known state. Under the right
conditions, the process would exit part way through the writing of the
state file.
Add Stop and Cancel CancelFuncs to the RunningOperation, to allow it to
easily differentiate between the two signals. The backend will then be
able to detect a shutdown and abort more gracefully.
In order to ensure that the backend is not in the process of writing the
state out, the command will always attempt to wait for the process to
complete after cancellation.
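Condensed to its two signals, the shape is roughly (the field set is an assumption based on the prose):

    package backend

    import "context"

    // RunningOperation, reduced to the shutdown-related fields.
    type RunningOperation struct {
        // Stop is the first-interrupt signal: finish or abandon in-flight
        // work gracefully, then persist a consistent state.
        Stop context.CancelFunc

        // Cancel is the second-interrupt signal: the process is about to
        // exit, so do not begin a state write that can't be finished.
        Cancel context.CancelFunc

        // Done is closed when the operation has fully completed; commands
        // wait on it even after cancelling, to avoid partial state writes.
        Done <-chan struct{}
    }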
Since an early version of Terraform, the `destroy` command has always
had the `-force` flag to allow an auto approval of the interactive
prompt. 0.11 introduced `-auto-approve` as default to `false` when using
the `apply` command.
The `-auto-approve` flag was introduced to reduce ambiguity of its
function, but the `-force` flag was never updated for a destroy.
People often use wrappers when automating commands in Terraform, and the
inconsistency between `apply` and `destroy` means that additional logic
must be added to the wrappers to do similar functions. Both commands are
more or less able to run with similar syntax, and also heavily share
their code.
This commit updates the command in `destroy` to use the `-auto-approve` flag,
making working with the Terraform CLI a more consistent experience.
We leave in `-force` in `destroy` for the time-being and flag it as
deprecated to ensure a safe switchover period.
The plan shutdown test often fails on slow CI hosts, because the plan
completes before the main thread can cancel it. Since attempting to make
the MockProvider concurrent proved too invasive for now, just slow the
test down a bit to help ensure Stop gets called.
To avoid breaking automation where plugin-path was assumed to be set
permanently, only remove the plugin-path record if it was explicitly set
to an empty string.
The existing prompts were worded as if backend configurations were
named, but they can only really be referenced by their type. Change the
wording to reference them as type "X backend". When migrating state,
refer to the backends as the "previously configured" and "newly
configured", since they will often have the same type.
Rather than relying on interrupting Diff, just make sure Stop was called
on the provider. The DiffFn is protected by a mutex in the mock
provider, which means that the tests can't rely on concurrent calls to
diff working.
There's no point in trying to track these; they're lost after each test.
Kill them after a short delay so we don't have goroutines from every single
command test to wade through if we have a stack dump.
Only check for input twice in the meta.confirm method. This prevents an
errant newline from aborting the run while allowing Terraform to exit if
there is no input available. We don't just check for a tty, since we
still rely on being able to pipe input in for testing.
Remove the redundant confirmation loops in the migration code, and only
use the confirm method.
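A standalone sketch of that two-attempt loop (the real method lives on Meta and prompts through the UI):

    package command

    import "strings"

    // confirm retries at most once, so a stray newline gets a second chance
    // while exhausted input can't spin forever.
    func confirm(ask func() (string, error)) (bool, error) {
        for i := 0; i < 2; i++ {
            v, err := ask()
            if err != nil {
                return false, err // e.g. no input available at all
            }
            switch strings.ToLower(strings.TrimSpace(v)) {
            case "yes":
                return true, nil
            case "no":
                return false, nil
            }
        }
        return false, nil // two non-answers: treat as declined
    }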
Make sure the init inputFalse test actually errors from missing input,
since skipping input will still fail later during provider
initialization. We need to make sure there are two different states that
aren't a noop for migration, and reset the command struct for each run.
Also verify that we don't go into an infinite loop if there is no input.
The duplicate prompts can be confusing when the user confirms that a
migration should happen and we immediately prompt a second time for the
same thing with slightly different wording. The extra hand-holding that
this provides for legacy remote states is less critical now, since it's
been 2 major release cycles since those were removed.
The init command needs to parse the state to resolve providers, but
changes to the state format can cause that to fail with difficult to
understand errors. Check the terraform version during init and provide
the same error that would be returned by plan or apply.
If users run "terraform import" in a directory with no Terraform
configuration files, it's likely that they've made a mistake either by
being in the wrong directory or forgetting to use the -config option
on the command line.
To help users find their mistake in this case, we'll now produce a
specialized error message for this situation:
    Error: No Terraform configuration files
    The directory /home/user/example does not contain any Terraform
    configuration files (.tf or .tf.json). To specify a different
    configuration directory, use the -config="..." command line option.
While here, this also converts some of the other existing messages to
diagnostics so that we can show any configuration warnings along with
the error message, and move towards the new standard error presentation.
Previously we required callers to separately call .Validate on the root
module to determine if there were any value errors, but we did that
inconsistently and would thus see crashes in some cases where later code
would try to use invalid configuration as if it were valid.
Now we run .Validate automatically after config loading, returning the
resulting diagnostics. Since we return a diagnostics here, it's possible
to return both warnings and errors.
We return the loaded module even if it's invalid, so callers are free to
ignore returned errors and try to work with the config anyway, though they
will need to be defensive against invalid configuration themselves in
that case.
As a result of this, all of the commands that load configuration now need
to use diagnostic printing to signal errors. For the moment this just
allows us to return potentially-multiple config errors/warnings in full
fidelity, but also sets us up for later when more subsystems are able
to produce rich diagnostics so we can show them all together.
Finally, this commit also removes some stale, commented-out code for the
"legacy" (pre-0.8) graph implementation, which has not been available
for some time.
Now that the local backend can be cancelled during plan and refresh, we
don't really need the testShutdownHook. Simplify the tests by just
checking for Stop being called on the provider.
Add a shutdown hook to verify that a context has been correctly
cancelled, so we can remove the sleep and stop guessing.
Add a plan version of the shutdown test as well.
There was no cancellation context for a plan, so it would always have to
run to completion as SIGINT was being swallowed.
Move the shutdown channel to the command Meta since it's used in
multiple commands.
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
As part of the 0.10 core/provider split we moved this provider, along with
all the others, out into its own repository.
In retrospect, the "terraform" provider doesn't really make sense to be
separated since it's just a thin wrapper around some core code anyway,
and so re-integrating it into core avoids the confusion that results when
Terraform Core and the terraform provider have inconsistent versions of
the backend code and dependencies.
There is no good reason to use a different version of the backend code
in the provider than in core, so this new "internal provider" mechanism
is stricter than the old one: it's not possible to use an external build
of this provider at all, and version constraints for it are rejected as
a result.
This provider is also run in-process rather than in a child process, since
again it's just a very thin wrapper around code that's already running
in Terraform core anyway, and so the process barrier between the two does
not create enough advantage to warrant the additional complexity.
Change "Downloading" to 'Initializing" to match the provider loading
dialog.
List each module being loaded.
If a registry module is being downloaded, list the registry host, and the
version discovered.
Show the source string from the config that is being fetched, rather
than the go-getter url. The full source can be found in the logs for
debugging.
Add much more extensive logging
This allows the user to customize the location where Terraform stores
the files normally placed in the ".terraform" subdirectory, if e.g. the
current working directory is not writable.
In the 0.10 release we added an opt-in mode where Terraform would prompt
interactively for confirmation during apply. We made this opt-in to give
those who wrap Terraform in automation some time to update their scripts
to explicitly opt out of this behavior where appropriate.
Here we switch the default so that a "terraform apply" with no arguments
will -- if it computes a non-empty diff -- display the diff and wait for
the user to type "yes" in similar vein to the "terraform destroy" command.
This makes the commonly-used "terraform apply" a safe workflow for
interactive use, so "terraform plan" is now mainly for use in automation
where a separate planning step is used. The apply command remains
non-interactive when given an explicit plan file.
The previous behavior -- though not recommended -- can be obtained by
explicitly setting the -auto-approve option on the apply command line,
and indeed that is how all of the tests are updated here so that they can
continue to run non-interactively.
Update the command package to use the new module storage. Move the old
command output strings into the module storage itself. This could be
moved back later either by using ui callbacks, or designing a module
storage interface once we know what the final requirements will look
like.
We encourage users to share the "terraform version" output as part of
filing an issue, but previously it only printed the core Terraform version
and this left provider maintainers with no information about which
_provider_ version an issue relates to.
Here we make a best effort to show versions for providers, though we will
omit some or all of them if either "terraform init" hasn't been run (and
so no providers were selected yet) or if there are other inconsistencies
that would cause Terraform to object on startup and require a re-run of
"terraform init".
Two different errors here caused this test to pass even though it was
incorrect: the wanted version string was incorrect, but the test for it
was also inverted, and so together this made the test pass even though
it was actually not testing the output at all.
Update all references to the version values to use the new package.
The VersionString function was left in the terraform package
specifically for the aws provider, which is vendored. We can remove that
last call once the provider is updated.
The command package is the main place we need access to these, so that
we can use them during init (to install packages, for example) and so that
we can use them to configure remote backends.
For the moment we're just providing an empty credentials object, which
will start to include both statically-configured and
helper-program-provided credentials sources in subsequent commits.
This uses the new diagnostics printer for config-related errors in the
main five commands that deal with config.
The immediate motivation for this is to allow HCL2-produced diagnostics
to be printed out in their full fidelity, though it also slightly changes
the presentation of other errors so that they are not presented in all
red text, which can be hard to read on some terminals.
This new method showDiagnostics takes any value that would be accepted by
tfdiags.Append and renders it to the UI.
This is intended to encourage consistent handling of the different kinds
of errors and diagnostics that can be produced, and allow richer error
objects like the HCL2 diagnostics to be easily unwrapped and shown in
their full fidelity.
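Sketched against the tfdiags API (the rendering here is simplified; the real method also handles source snippets and color):

    // showDiagnostics accepts anything tfdiags.Append accepts and renders
    // each diagnostic at its own severity.
    func (m *Meta) showDiagnostics(vals ...interface{}) {
        var diags tfdiags.Diagnostics
        diags = diags.Append(vals...)
        for _, diag := range diags {
            desc := diag.Description()
            msg := desc.Summary
            if desc.Detail != "" {
                msg += "\n\n" + desc.Detail
            }
            switch diag.Severity() {
            case tfdiags.Error:
                m.Ui.Error(msg)
            default:
                m.Ui.Warn(msg)
            }
        }
    }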
Previously we were using fmt.Sprintf and thus forcing the stringification
of the wrapped error.
Using errwrap allows us to unpack the original error at the top of the
stack, which is useful when the wrapped error is really a hcl.Diagnostics
containing potentially-multiple errors and possibly warnings.
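In miniature, the change is this (using github.com/hashicorp/errwrap):

    // wrapConfigErr preserves the original error value inside the wrapper.
    func wrapConfigErr(err error) error {
        // Before: fmt.Errorf("error loading config: %s", err) flattened the
        // error to a string, losing any hcl.Diagnostics inside it.
        // After: errwrap keeps the original reachable for later unwrapping.
        return errwrap.Wrapf("error loading config: {{err}}", err)
    }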
This is a tough one to unit test because the behavior is tangled up in
the code that hits releases.hashicorp.com, so we'll add this e2etest as
some extra insurance that this works end-to-end.
Since we now have a guide that recommends some specific ways to run
Terraform in automation, we can mimic those suggestions in an e2e test and
thus ensure they keep working.
Here we test the three different approaches suggested in the guide:
- init, plan, apply (main case)
- init, apply (e.g. for deploying to a QA/staging environment)
- init, plan (e.g. for verifying a pull request)
In 6712192724 we stopped counting data
source destroys in the destroy tally since they are an implementation
detail.
This caused this test to start failing, though since the new behavior is
correct here we just update the test to match.
Shell tab completion for all of the subcommands under
"terraform workspace", providing the appropriate kind of auto-complete for
each argument, along with completion for any flags.
This helper is a Predictor for the "complete" package that tries to
auto-complete workspace names from the current backend, if it's
initialized and operable.
The predictors built in to the "complete" package assume that the same
type of argument is repeated indefinitely, but most Terraform commands
don't work like that, so this helper allows us to define a sequence of
predictors that apply to each argument in turn.
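A sketch of that helper against the posener/complete types (the helper name and the index handling are assumptions):

    package command

    import "github.com/posener/complete"

    // predictSequence applies a different predictor to each positional
    // argument in turn, instead of repeating one predictor indefinitely.
    func predictSequence(ps ...complete.Predictor) complete.PredictFunc {
        return func(a complete.Args) []string {
            idx := len(a.Completed) // which positional argument is being typed
            if idx >= len(ps) || ps[idx] == nil {
                return nil
            }
            return ps[idx].Predict(a)
        }
    }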
This new option allows importing without configuration present.
Configuration is required by default as a confirmation that the provided resource name is correct, but it can be useful to override this in tools that wrap Terraform to do more involved operations.
In #15884 we adjusted the plan output to give an explicit command to run
to apply a plan, whereas before this command was just alluded to in the
prose.
Since releasing that, we've got good feedback that it's confusing to
include such instructions when Terraform is running in a workflow
automation tool, because such tools usually abstract away exactly what
commands are run and require users to take different actions to
proceed through the workflow.
To accommodate such environments while retaining helpful messages for
normal CLI usage, here we introduce a new environment variable
TF_IN_AUTOMATION which, when set to a non-empty value, is a hint to
Terraform that it isn't being run in an interactive command shell and
it should thus tone down the "next steps" messaging.
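The check itself is simple; something like this sketch, with the real message text elided:

    package command

    import (
        "os"

        "github.com/mitchellh/cli"
    )

    // maybeSuggestNextSteps prints follow-up guidance only for interactive use.
    func maybeSuggestNextSteps(ui cli.Ui) {
        if os.Getenv("TF_IN_AUTOMATION") != "" {
            return // a wrapping tool decides what the user does next
        }
        ui.Output(`Run "terraform apply" to apply this plan.`)
    }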
The documentation for this setting is included as part of the "...in
automation" guide since it's not generally useful in other cases. We also
intentionally disclaim comprehensive support for this since we want to
avoid creating an extreme number of "if running in automation..."
codepaths that would increase the testing matrix and hurt maintainability.
The focus is specifically on the output of the three commands we give in
the automation guide, which at present means the following two situations:
* "terraform init" does not include the final paragraphs that suggest
running "terraform plan" and tell you in what situations you might need
to re-run "terraform init".
* "terraform plan" does not include the final paragraphs that either
warn about not specifying "-out=..." or instruct to run
"terraform apply" with the generated plan file.
In 3ea1592 the plan rendering was refactored to add an extra indirection
of producing a display-oriented plan object first and then rendering from
that object.
There was a logic error while adapting the existing plan rendering code
to use the new display-oriented object: the core InstanceDiff object sets
the "Destroy" flag (a boolean) for both DiffDestroy and DiffDestroyCreate,
and so this code previously checked r.Destroy to recognize the
"destroy-create" case. This was incorrectly adapted to a check for the
display action being DiffDestroy, when it should actually have been
DiffDestroyCreate.
The effect of this bug was to cause the "(forces new resource)"
annotations to not be displayed on attributes, though the resource-level
information still correctly reflected that a new resource was required.
This fix restores the attribute-level annotations.
The previous diff presentation was rather "wordy", and not very friendly
to those who can't see color either because they have color-blindness or
because they don't have a color-supporting terminal.
This new presentation uses the actual symbols used in the plan output
and tries to be more concise. It also uses some framing characters to
try to separate the different stages of "terraform plan" to make it
easier to visually navigate.
The apply command also adopts this new plan presentation, in preparation
for "terraform apply" (with interactive plan confirmation) becoming the
primary, safe workflow in the next major release.
Finally, we standardize on the terminology "perform" and "actions" rather
than "execute" and "changes" to reflect the fact that reading is now an
action, and reading isn't actually a _change_.
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.
This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
in the UI. This applies all the relevant logic to transform the
physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
plan object.
For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.
Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.
This introduces to core a new diff action type for "refresh". Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.
Previously we were checking required_version only during "real" operations, and not during initialization. Catching it during init is better because that's the first command users run on a new working directory.
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
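For example:

    package example

    import "testing"

    // checkOutput marks itself as a helper, so a failure is attributed to the
    // caller's line in the main test function rather than to the Fatalf below.
    func checkOutput(t *testing.T, got, want string) {
        t.Helper()
        if got != want {
            t.Fatalf("wrong output\ngot:  %q\nwant: %q", got, want)
        }
    }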
Fix the -state and -state-out wording to be consistent with other
commands. Remove the erroneous reference to remote state in the website
version of the flag description.
While the `local.Local` backend is the only implementation of
`backend.Local`, creating the backend with `ForceLocal` bypasses the
`backend.Backend` in the `local.Local`, causing a local state to be
implicitly created rather than using the configured state backend.
Add a test that imports into a configured backend (using the "local"
backend as a remote state proxy). This further confirms the confusing
nature of ForceLocal, as the backend _is_ local, but not from the
viewpoint of meta.Backend.
This restores the earlier behavior of the first positional argument to
terraform init in 0.9, but as a command line option.
The positional argument was removed to improve consistency with other
commands that take a working directory as their first positional argument.
It was originally intended that this functionality would return in a
later release along with some other general improvements to Terraform's
module handling, but we're introducing here an interim solution that
uses the existing module source concept, to allow for easier porting of
workflows that previously depended on the automatic copy behavior.
In a future release this feature may change again as the module
improvements design firms up, but we expect it to be broadly compatible
with this temporary state.
In order to use a backend for the state commands, we need an initialized
meta. Use a single Meta instance rather than temporary ones to make sure
the backends are initialized properly.
This e2etest runs an init, plan, apply, destroy sequence against a test
configuration using the real template and null providers downloaded from
the official repository.
This test _does_ trample a bit on the scope of some already-existing
tests, but this is mainly just to check our assumptions about how
Terraform behaves to ensure that we can reach our main conclusion here:
that the main Terraform workflow commands interact correctly with each
other in real use and we can complete the full workflow.
We already have good tests for the business logic around provider
installation, but the existing tests all stub out the main repository
server. This test completes that coverage by verifying that the installer
is able to run against the real repository and install an official release
of the template provider.
This basic test is here primarily because it's one of the few that can
run without reaching out to external services, and so it means our usual
test runs will catch situations where the main executable build is
somehow broken.
The version command itself is not very interesting to test, but it's
convenient in that its behavior is very predictable and self-contained.
Previously we had no automated testing of whether we can produce a
Terraform executable that actually works. Our various functional tests
have good coverage of specific Terraform features and whole operations,
but we lacked end-to-end testing of actual usage of the generated binary,
without any stubbing.
This package is intended as a vehicle for such end-to-end testing. When
run normally under "go test" it will produce a build of the main Terraform
binary and make it available for tests to execute. The harness exposes
a flag for whether tests are allowed to reach out to external network
services, controlled with our standard TF_ACC environment variable, so
that basic local tests can be safely run as part of "make test" while
more elaborate tests can be run easily when desired.
It also provides a separate mode of operation where the included script
make-archive.sh can be used to produce a self-contained test archive that
can be copied to another system to run the tests there. This is intended
to allow testing of cross-compiled binaries, by shipping them over to
the target OS and architecture to run without requiring a full Go compiler
installation on the target system.
The goal here is not to re-test functionality that's already
well-covered by our existing tests, but rather to test chains of normal
operations against the built binary that are not otherwise tested
together.
The improved err scanner loop in meta causes these to race. There's no
need to write back to the same commands struct, so just use a new
instance in each iteration.
Meta.process was relying on the system readdir to order the arguments,
but readdir doesn't guarantee any ordering. Read the directory contents
as a whole and sort them in place before adding the tfvars files.
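Roughly (the helper name is an assumption):

    package command

    import (
        "os"
        "sort"
        "strings"
    )

    // tfvarsFiles returns the .tfvars file names in dir in a deterministic
    // order; Readdirnames by itself guarantees nothing about ordering.
    func tfvarsFiles(dir string) ([]string, error) {
        f, err := os.Open(dir)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        names, err := f.Readdirnames(-1) // read the directory as a whole
        if err != nil {
            return nil, err
        }
        sort.Strings(names) // sort in place before picking out tfvars files
        var out []string
        for _, name := range names {
            if strings.HasSuffix(name, ".tfvars") {
                out = append(out, name)
            }
        }
        return out, nil
    }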
Previously the APIs for state persistence and management had some problematic cases where we depended on hidden mutations of the state structure as side-effects of otherwise-innocent-looking operations, which was a frequent cause of accidental regressions due to faulty assumptions.
This new model attempts to isolate certain state mutations to just within the state managers, and makes the state managers work on separated snapshots of the state rather than on the "live" object to reduce the risk of race conditions.
Due to how the state filter machinery works, passing no arguments is valid
and matches _all_ resources.
It is very unlikely that someone wants to remove everything from state, so
this ends up being a very dangerous default for the "terraform state rm"
command, and surprising for someone who perhaps runs it looking for the
usage information.
So we'll be pragmatic here and reject the no-arguments case for this
command, accepting that it makes the unlikely case of intentionally
deleting all resources harder in order to make it less likely that it
will happen _unintentionally_.
If someone does really want to remove all resources from the state, they
can provide an explicit empty string argument, but this isn't documented
because it's a weird case that doesn't seem worth mentioning.
This fixes#15283.
The state returned from the testState helper shouldn't rely on any
mutations caused by WriteState. The Init function (which is analogous to
NewState) should set any required fields.
This command serves as an alternative to the human-oriented list of workspaces for scripting use-cases where it's useful to know the _current_ workspace name.
Previously we relied on a constellation of coincidences for everything to
work out correctly with state serials. In particular, callers needed to
be very careful about mutating states (or not) because many different bits
of code shared pointers to the same objects.
Here we move to a model where all of the state managers always use
distinct instances of state, copied when WriteState is called. This means
that they are truly a snapshot of the state as it was at that call, even
if the caller goes on mutating the state that was passed.
We also adjust the handling of serials so that the state managers ignore
any serials in incoming states and instead just treat each Persist as
the next version after what was most recently Refreshed.
(An exception exists for when nothing has been refreshed, e.g. because
we are writing a state to a location for the first time. In that case
we _do_ trust the caller, since the given state is either a new state
or it's a copy of something we're migrating from elsewhere with its
state and lineage intact.)
The intent here is to allow the rest of Terraform to not worry about
serials and state identity, and instead just treat the state as a mutable
structure. We'll just snapshot it occasionally, when WriteState is called,
and deal with serials _only_ at persist time.
This is intended as a more robust version of #15423, which was a quick
hotfix to an issue that resulted from our previous sloppy handling
of state serials but arguably makes the problem worse by depending on
an additional coincidental behavior of the local backend's apply
implementation.
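A condensed sketch of the model described above (real managers also track lineage and handle locking and backend-specific persistence; persist below is a stand-in):

    // snapshotManager illustrates WriteState-as-snapshot plus persist-time
    // serial handling.
    type snapshotManager struct {
        current    *terraform.State
        readSerial int64 // serial of whatever was most recently Refreshed
    }

    func (m *snapshotManager) WriteState(s *terraform.State) error {
        m.current = s.DeepCopy() // snapshot now; caller mutations can't leak in
        return nil
    }

    func (m *snapshotManager) PersistState() error {
        // The incoming serial is ignored; each Persist is simply the next
        // version after what was last refreshed.
        m.current.Serial = m.readSerial + 1
        return persist(m.current) // stand-in for the backend-specific write
    }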
Skips checksum validation if the `TF_SKIP_PROVIDER_VERIFY` environment variable is set. Undocumented variable, as the primary goal is to significantly improve the local provider development workflow.
We need to release the lock just before deleting the state, in case the backend
can't remove the resource while holding the lock. This is currently true for
Windows local files.
TODO: While there is little safety in locking while deleting the state, it
might be nice to be able to coordinate processes around state deletion, i.e. in
a CI environment. Adding Delete() as a required method of States would allow
the removal of the resource to be delegated from the Backend to the State
itself.
A common reason to want to use `terraform plan` is to have a chance to
review and confirm a plan before running it. If in fact that is the
only reason you are running plan, this new `terraform apply -auto-approve=false`
flag provides an easier alternative to
    P=$(mktemp -t plan)
    terraform refresh
    terraform plan -refresh=false -out=$P
    terraform apply $P
    rm $P
The flag defaults to true for now, but in a future version of Terraform it will
default to false.
The import command wasn't loading the full plugin path for discovery.
Run a basic plugin init sequence, and verify we can find a plugin (even
though the plugin is invalid and will fail).
The import command wasn't loading the plugin path at all, relying on the
local directory for binaries.
Load the plugin dir into Meta, and pass in ForceLocal for consistency.
The Backend returned was going to be a Local anyway, so the added check
wasn't ensuring anything.
The "confirm" method was directly checking the meta struct's input field,
but that only represents the -input command line flag, and doesn't
respect the TF_INPUT environment variable.
By calling the Input method instead, we check both.
This fixes#15338.
When using a `state` command, if the `-state` flag is provided we do not
want to modify the Backend state. In this case we should always create a
local state instance.
The backup flag was also being ignored, and some tests were relying on
that, which have been fixed.
If we provide a -state flag to a state command, we do not want terraform
to modify the backend state. This test fails since the state specified
in the backend doesn't exist
Remove "checksum" from the error, and only indicate that the plugin has
changed.
Always show the requested version, even if it's "any", along with the
versions of plugins that were found.
This change makes various minor adjustments to the rendering of plans
in the output of "terraform plan":
- Resources are identified using the standard resource address syntax,
rather than exposing the legacy internal representation used in the
module diff resource keys. This fixes#8713.
- Subjectively, having square brackets in the addresses made it look more
visually "off" when the same name but with different indices were
shown together with differing-length "symbols", so the symbols are now
all padded and right-aligned to three characters for consistent layout
across all operations.
- The -/+ action is now more visually distinct, using several different
colors to help communicate what it will do and including a more obvious
"(new resource required)" marker to help draw attention to this not
being just an update diff. This fixes#15350.
- The resources are now sorted in a manner that sorts index [10] after
index [9], rather than after index [1] as we did before. This makes it
easier to scan the list and avoids the common confusion where it seems
that there are only 10 items when in fact there are 11-20 items with
all the tens hiding further up in the list.
Previously init would crash if given these options:
-backend=false -get-plugins=true
This is because the state is used as a source of provider dependency
information, and we need to instantiate the backend to get the state.
To avoid the crash, we now use the following adjusted behavior:
- if -backend=true, we behave as before
- if -backend=false, we instead try to instantiate the backend the same
way any other command would, without modifying its configuration
- if we're able to instantiate the backend, we use it to fetch state
for dependency resolution purposes
- if the backend is not instantiable then we assume it's not yet
configured and proceed with a nil state, which may cause us to see an
incomplete picture of the dependencies but still allows the install
to succeed. Subsequently running "terraform plan" will not work until
the backend is (re-)initialized, so the incomplete picture of required
plugins is safe.
This takes care of a few dangling cases where we were still stringifying
empty version constraints, which creates confusing error messages because
they render as the empty string.
For the "no suitable versions available" message, we fall back on the
"provider not found" message if no versions were found even though it's
unconstrained. This should only happen in an edge case where the
provider's index page exists on the releases server but no versions are
yet present.
For the message about plugin protocol versions, this again is an edge
case since with no constraints this should happen only if we release
an incompatible Terraform version but don't release a new version of the
plugin that's compatible. In this case we just show the constraint as
"(any version)" to make sure we always show _something_.
Previously we only did this when _upgrading_, but that's unnecessarily
specific and confusing since e.g. plugins can get upgraded implicitly by
constraint changes, which would not then trigger the purge process.
Instead, we'll assume that the user is able to easily re-download plugins
that were purged here; if they need stronger guarantees, they can
manually manage a plugin directory and disable the auto-install behavior
using `-plugin-dir`.
Now we are able to recognize and handle a few special error situations
from plugin installation with more verbose error messages that give the
user better feedback on how to proceed.
The -plugin-dir option lets the user specify custom search paths for
plugins. This overrides all other plugin search paths, and prevents the
auto-installation of plugins.
We also make sure that the availability of plugins is always checked
during init, even if -get-plugins=false or -plugin-dir is set.
init should always write internal data to the current directory, even
when a path is provided. The inherited behavior no longer applies to the
new use of init.
Now that init can take a directory for configuration, the old behavior
of writing the .terraform data directory into the target path no longer
makes sense. Don't change the dataDir field during init, and write to
the default location.
Clean up all references to Meta.dataDir, and only use the getter method
in case we choose to dynamically override this at some point.
Now when -upgrade is provided to "terraform init" (and plugin installation
isn't disabled) it will:
- ignore the contents of the auto-install plugin directory when deciding
what is "available", thus causing anything there to be reinstalled,
possibly at a newer version.
- if installation completes successfully, purge from the auto-install
plugin directory any plugin-looking files that aren't in the set of
chosen plugins.
As before, plugins outside of the auto-install directory are able to
take precedence over the auto-install ones, and these will never be
upgraded nor purged.
The thinking here is that the auto-install directory is an implementation
detail directly managed by Terraform, and so it's Terraform's
responsibility to automatically keep it clean as plugins are upgraded.
We don't yet have the -plugin-dir option implemented, but once it is it
should circumvent all of this behavior and just expect providers to be
already available in the given directory, meaning that nothing will be
auto-installed, -upgraded or -purged.
Previously we had a "getProvider" function type used to implement plugin
fetching. Here we replace that with an interface type, initially with
just a "Get" function.
For now this just simplifies the interface by allowing the target
directory and protocol version to be members of the struct rather than
passed as arguments.
A later change will extend this interface to also include a method to
purge unused plugins, so that upgrading frequently doesn't leave behind
a trail of unused executable files.
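A sketch of the replacement (names approximate; Constraints and PluginMeta stand for the plugin discovery package's existing types):

    // Installer replaces the old getProvider function type.
    type Installer interface {
        // Get installs a plugin matching req and reports what was installed.
        // The target directory and protocol version are now fields on the
        // concrete installer instead of extra arguments here.
        Get(name string, req Constraints) (PluginMeta, error)

        // A later change extends this interface with a purge method, e.g.:
        //   PurgeUnused(used map[string]PluginMeta) error
    }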
As of this commit this just upgrades modules, but this option will also
later upgrade plugins and indeed anything else that's being downloaded and
installed as part of the init.
Since there is little left that isn't core, remove the distinction for
now to reduce confusion; a "core" binary will mostly work except
for provisioners.
We're shifting terminology from "environment" to "workspace". This takes
care of some of the main internal API surface that was using the old
terminology, though is not intended to be entirely comprehensive and is
mainly just to minimize the amount of confusion for maintainers as we
continue moving towards eliminating the old terminology.
Previously we just silently ignored warnings from validating the backend
config, but now that we have a deprecated argument it's important to print
these out so users can respond to the deprecation warning.
Feedback after 0.9 was that the term "environment" was confusing due to
it colliding with several other concepts, such as OS environment
variables, a non-aligned Terraform Enterprise concept, and differing ideas
of "environment" within various organizations.
This new term "workspace" is intended to ease some of that confusion. This
term is not used anywhere else in Terraform today, and we expect it to not
be used in a manner that would be confusing within user organizations.
This begins a deprecation cycle for the "terraform env" family of commands,
instead moving to an equivalent set of "terraform workspace" commands.
There are some remaining references to the old "environment" concept in
the code, which will be cleaned up in a separate change. This change is
instead focused on text visible in the UI and wording within code comments
for the benefit of human maintainers of the code.
This allows you to run multiple concurrent terraform operations against
different environments from the same source directory.
Fixes#14447.
Also removes some dead code which appears to do the same thing as the function I
modified.
When init was modified in 0.9 to initialize a terraform working
directory, the legacy behavior was kept to copy or fetch module sources.
This left the init command without the ability that the plan and apply
commands have to target a specific directory for the operation.
This commit removes the legacy behavior altogether, and allows init to
target a directory for initialization, bringing it into parity with plan
and apply. If one wants to copy a module to the target or current
directory, that will have to be done manually before calling init. We
can later reintroduce fetching modules with init without breaking this
new behavior, by adding the source as an optional second argument.
The unit tests testing the copying of sources with init have been
removed, as well as some out of date (and commented out) init tests
regarding remote states.
ConstrainVersions was documented as returning nil, but it was instead
returning an empty set. Use the Count() method to check for nil or
empty. Add a test to verify that failed constraints will show up as missing.
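That check, in sketch form (type and method names as described above, taken as assumptions):

    // hasCandidates treats a nil set and an empty set identically via Count(),
    // so callers don't care which one ConstrainVersions returned.
    func hasCandidates(available PluginMetaSet, req Constraints) bool {
        return available.ConstrainVersions(req).Count() > 0
    }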
"environment" is a very overloaded term, so here we prefer to use the
term "working directory" to talk about a local directory where operations
are executed on a given Terraform configuration.
Each provider plugin will take at least a few seconds to download, so
providing feedback about each one should make users feel less like
Terraform has hung.
Ideally we'd show ongoing progress during the download, but that's not
possible without re-working go-getter, so we'll accept this as an interim
solution for now.
This was added with the idea of using it to override the SHA256 hashes
to match those hypothetically stored in a plan, but we already have a
mechanism elsewhere for populating context fields from plan fields, so
this is not actually necessary.