Rather than trying to modify the hundreds of calls to the temp helper
functions and clean up the temp files at every call site, have all tests
work within a single temp directory that is removed at the end of
TestMain.
The state locking improvements for the regular command had the side
effect of locking the state in the console, import, graph and push
commands. Those commands had been updated to get a state via the
Backend.Context method, which locks the state whenever possible, and now
need to call Unlock directly.
Add Unlock calls to all commands that call Context directly.
Use the new StateLocker field to provide a wrapper for locking the state
during terraform.Context creation in the commands that directly
manipulate the state.
Simplify the use of clistate.Lock by creating a clistate.Locker
instance, which stores the context of locking a state, to allow unlock
to be called without knowledge of how the state was locked.
This allows the backend code to bring the needed UI methods to the point
where the state is locked, and still unlock the state from an outer
scope.
Provide a NoopLocker as well, so that callers can always call Unlock
without verifying the status of the lock.
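A minimal sketch of the shape this takes (the names and signatures here
are illustrative assumptions, not the exact API):

    // State is a stand-in for the real state implementation.
    type State interface{}

    // Locker remembers how a state was locked so that Unlock can be
    // called later without that knowledge.
    type Locker interface {
        Lock(s State, reason string) error
        Unlock() error
    }

    // noopLocker does nothing, so callers can always call Unlock
    // without first verifying whether a real lock was taken.
    type noopLocker struct{}

    func (noopLocker) Lock(State, string) error { return nil }
    func (noopLocker) Unlock() error            { return nil }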
Add the StateLocker field to the backend.Operation, so that the state
lock can be carried between the different function scopes of the backend
code. This will allow the backend context to lock the state before it's
read, while allowing the different operations to unlock the state when
they complete.
The error was being silently dropped before.
There is an interpolation error, because the plan is canceled before
some of the resources can be evaluated. There might be a better way to
handle this in the walk cancellation, but the behavior has not changed.
Make the plan and apply shutdown match implementation-wise
If the user wished to interrupt the running operation, only the first
interrupt was communicated to the operation by canceling the provided
context. A second interrupt would start the shutdown process, but not
communicate this to the running operation. This order of events could
cause partial writes of state.
What would happen is that once the command returns, the plugin system
would stop the provider processes. Once the provider processes die, all
pending Eval operations would return with an error, and quickly
cause the operation to complete. Since the backend code didn't know that
the process was shutting down imminently, it would continue by
attempting to write out the last known state. Under the right
conditions, the process would exit part way through the writing of the
state file.
Add Stop and Cancel CancelFuncs to the RunningOperation, to allow it to
easily differentiate between the two signals. The backend will then be
able to detect a shutdown and abort more gracefully.
In order to ensure that the backend is not in the process of writing the
state out, the command will always attempt to wait for the process to
complete after cancellation.
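A sketch of the resulting shape, with assumed field names alongside the
two cancel functions the commit describes:

    import "context"

    type RunningOperation struct {
        // Stop requests a graceful shutdown: finish work, persist state.
        Stop context.CancelFunc
        // Cancel aborts immediately; signaled on a second interrupt.
        Cancel context.CancelFunc
        // Done is closed once the operation has fully completed, so the
        // command can wait out any in-progress state writes.
        Done <-chan struct{}
    }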
Since an early version of Terraform, the `destroy` command has always
had the `-force` flag to allow auto-approval of the interactive
prompt. 0.11 introduced `-auto-approve`, defaulting to `false`, for the
`apply` command.
The `-auto-approve` flag was introduced to reduce ambiguity about its
function, but the `-force` flag was never updated for `destroy`.
People often use wrappers when automating commands in Terraform, and the
inconsistency between `apply` and `destroy` means that additional logic
must be added to the wrappers to do similar functions. Both commands are
more or less able to run with similar syntax, and also heavily share
their code.
This commit updates the command in `destroy` to use the `-auto-approve` flag
making working with the Terraform CLI a more consistent experience.
We leave in `-force` in `destroy` for the time-being and flag it as
deprecated to ensure a safe switchover period.
The plan shutdown test often fails on slow CI hosts, because the plan
completes before the main thread can cancel it. Since attempting to make
the MockProvider concurrent proved too invasive for now, just slow the
test down a bit to help ensure Stop gets called.
To avoid breaking automation where plugin-path was assumed to be set
permanently, only remove the plugin-path record if it was explicitly set
to an empty string.
The existing prompts were worded as if backend configurations were
named, but they can only really be referenced by their type. Change the
wording to reference them as type "X backend". When migrating state,
refer to the backends as the "previously configured" and "newly
configured", since they will often have the same type.
Rather than relying on interrupting Diff, just make sure Stop was called
on the provider. The DiffFn is protected by a mutex in the mock
provider, which means that the tests can't rely on concurrent calls to
diff working.
There's no point in trying to track these, they're lost after each test.
Kill them after a short delay so we don't have goroutines from every single
command test to wade through if we have a stack dump.
Only check for input twice in the meta.confirm method. This prevents an
errant newline from aborting the run while allowing Terraform to exit if
there is no input available. We don't just check for a tty, since we
still rely on being able to pipe input in for testing.
Remove the redundant confirmation loops in the migration code, and only
use the confirm method.
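Sketched below is the two-attempt rule, assuming a plain reader rather
than the real UI plumbing:

    import (
        "bufio"
        "strings"
    )

    func confirm(in *bufio.Reader) (bool, error) {
        for i := 0; i < 2; i++ {
            line, err := in.ReadString('\n')
            if err != nil {
                return false, err // no input available: exit, don't loop
            }
            switch strings.TrimSpace(line) {
            case "yes":
                return true, nil
            case "no":
                return false, nil
            }
            // An errant newline or typo gets exactly one retry.
        }
        return false, nil
    }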
Make sure the init inputFalse test actually errors from missing input,
since skipping input will still fail later during provider
initialization. We need to make sure there are two different states that
aren't a noop for migration, and reset the command struct for each run.
Also verify that we don't go into an infinite loop if there is no input.
The duplicate prompts can be confusing when the user confirms that a
migration should happen and we immediately prompt a second time for the
same thing with slightly different wording. The extra hand-holding that
this provides for legacy remote states is less critical now, since it's
been 2 major release cycles since those were removed.
The init command needs to parse the state to resolve providers, but
changes to the state format can cause that to fail with difficult to
understand errors. Check the terraform version during init and provide
the same error that would be returned by plan or apply.
If users run "terraform import" in a directory with no Terraform
configuration files, it's likely that they've made a mistake either by
being in the wrong directory or forgetting to use the -config option
on the command line.
To help users find their mistake in this case, we'll now produce a
specialized error message for this situation:
Error: No Terraform configuration files
The directory /home/user/example does not contain any Terraform
configuration files (.tf or .tf.json). To specify a different
configuration directory, use the -config="..." command line option.
While here, this also converts some of the other existing messages to
diagnostics so that we can show any configuration warnings along with
the error message, and move towards the new standard error presentation.
Previously we required callers to separately call .Validate on the root
module to determine if there were any value errors, but we did that
inconsistently and would thus see crashes in some cases where later code
would try to use invalid configuration as if it were valid.
Now we run .Validate automatically after config loading, returning the
resulting diagnostics. Since we return a diagnostics here, it's possible
to return both warnings and errors.
We return the loaded module even if it's invalid, so callers are free to
ignore returned errors and try to work with the config anyway, though they
will need to be defensive against invalid configuration themselves in
that case.
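The resulting contract looks roughly like this (stand-in names; parse
and validate here represent the real loader and validator):

    type Module struct{ /* parsed configuration */ }
    type Diagnostics []error

    // Stubs standing in for the real parser and validator.
    func parse(dir string) (*Module, Diagnostics) { return &Module{}, nil }
    func validate(m *Module) Diagnostics          { return nil }

    // loadModule returns the module even when diags contains errors, so
    // callers may ignore the errors and proceed defensively if they wish.
    func loadModule(dir string) (*Module, Diagnostics) {
        mod, diags := parse(dir)
        if mod != nil {
            diags = append(diags, validate(mod)...)
        }
        return mod, diags
    }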
As a result of this, all of the commands that load configuration now need
to use diagnostic printing to signal errors. For the moment this just
allows us to return potentially-multiple config errors/warnings in full
fidelity, but also sets us up for later when more subsystems are able
to produce rich diagnostics so we can show them all together.
Finally, this commit also removes some stale, commented-out code for the
"legacy" (pre-0.8) graph implementation, which has not been available
for some time.
Now that the local backend can be cancelled during plan and refresh, we
don't really need the testShutdownHook. Simplify the tests by just
checking for Stop being called on the provider.
Add a shutdown hook to verify that a context has been correctly
cancelled, so we can remove the sleep and stop guessing.
Add a plan version of the shutdown test as well.
There was no cancellation context for a plan, so it would always have to
run to completion as SIGINT was being swallowed.
Move the shutdown channel to the command Meta since it's used in
multiple commands.
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
As part of the 0.10 core/provider split we moved this provider, along with
all the others, out into its own repository.
In retrospect, the "terraform" provider doesn't really make sense to be
separated since it's just a thin wrapper around some core code anyway,
and so re-integrating it into core avoids the confusion that results when
Terraform Core and the terraform provider have inconsistent versions of
the backend code and dependencies.
There is no good reason to use a different version of the backend code
in the provider than in core, so this new "internal provider" mechanism
is stricter than the old one: it's not possible to use an external build
of this provider at all, and version constraints for it are rejected as
a result.
This provider is also run in-process rather than in a child process, since
again it's just a very thin wrapper around code that's already running
in Terraform core anyway, and so the process barrier between the two does
not create enough advantage to warrant the additional complexity.
Change "Downloading" to 'Initializing" to match the provider loading
dialog.
List each module being loaded.
If a registry module is being downloaded, list the registry host and the
version discovered.
Show the source string from the config that is being fetched, rather
than the go-getter url. The full source can be found in the logs for
debugging.
Add much more extensive logging
This allows the user to customize the location where Terraform stores
the files normally placed in the ".terraform" subdirectory, if e.g. the
current working directory is not writable.
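The commit text doesn't name the mechanism, but an override like this is
plausibly wired through an environment variable (TF_DATA_DIR is the one
Terraform documents today); a sketch:

    import "os"

    // dataDir returns the directory for Terraform's internal files,
    // honoring an environment override when one is set.
    func dataDir() string {
        if d := os.Getenv("TF_DATA_DIR"); d != "" {
            return d
        }
        return ".terraform"
    }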
In the 0.10 release we added an opt-in mode where Terraform would prompt
interactively for confirmation during apply. We made this opt-in to give
those who wrap Terraform in automation some time to update their scripts
to explicitly opt out of this behavior where appropriate.
Here we switch the default so that a "terraform apply" with no arguments
will -- if it computes a non-empty diff -- display the diff and wait for
the user to type "yes" in similar vein to the "terraform destroy" command.
This makes the commonly-used "terraform apply" a safe workflow for
interactive use, so "terraform plan" is now mainly for use in automation
where a separate planning step is used. The apply command remains
non-interactive when given an explicit plan file.
The previous behavior -- though not recommended -- can be obtained by
explicitly setting the -auto-approve option on the apply command line,
and indeed that is how all of the tests are updated here so that they can
continue to run non-interactively.
Update the command package to use the new module storage. Move the old
command output strings into the module storage itself. This could be
moved back later either by using ui callbacks, or designing a module
storage interface once we know what the final requirements will look
like.
We encourage users to share the "terraform version" output as part of
filing an issue, but previously it only printed the core Terraform version
and this left provider maintainers with no information about which
_provider_ version an issue relates to.
Here we make a best effort to show versions for providers, though we will
omit some or all of them if either "terraform init" hasn't been run (and
so no providers were selected yet) or if there are other inconsistencies
that would cause Terraform to object on startup and require a re-run of
"terraform init".
Two different errors here caused this test to pass even though it was
incorrect: the wanted version string was incorrect, but the test for it
was also inverted, and so together this made the test pass even though
it was actually not testing the output at all.
Update all references to the version values to use the new package.
The VersionString function was left in the terraform package
specifically for the aws provider, which is vendored. We can remove that
last call once the provider is updated.
The command package is the main place we need access to these, so that
we can use them during init (to install packages, for example) and so that
we can use them to configure remote backends.
For the moment we're just providing an empty credentials object, which
will start to include both statically-configured and
helper-program-provided credentials sources in subsequent commits.
This uses the new diagnostics printer for config-related errors in the
main five commands that deal with config.
The immediate motivation for this is to allow HCL2-produced diagnostics
to be printed out in their full fidelity, though it also slightly changes
the presentation of other errors so that they are not presented in all
red text, which can be hard to read on some terminals.
This new method showDiagnostics takes any value that would be accepted by
tfdiags.Append and renders it to the UI.
This is intended to encourage consistent handling of the different kinds
of errors and diagnostics that can be produced, and allow richer error
objects like the HCL2 diagnostics to be easily unwrapped and shown in
their full-fidelity.
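A stand-alone sketch of the pattern; the real method delegates to the
tfdiags package and renders through the CLI's colorized UI rather than a
bare writer:

    import (
        "fmt"
        "io"
    )

    // showDiagnostics accepts the same loose set of values that a
    // diagnostics Append would, and renders each one.
    func showDiagnostics(ui io.Writer, vals ...interface{}) {
        for _, v := range vals {
            switch t := v.(type) {
            case error:
                fmt.Fprintln(ui, "Error:", t.Error())
            case string:
                fmt.Fprintln(ui, "Warning:", t)
            default:
                fmt.Fprintf(ui, "%v\n", v)
            }
        }
    }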
Previously we were using fmt.Sprintf and thus forcing the stringification
of the wrapped error.
Using errwrap allows us to unpack the original error at the top of the
stack, which is useful when the wrapped error is really a hcl.Diagnostics
containing potentially-multiple errors and possibly warnings.
This is a tough one to unit test because the behavior is tangled up in
the code that hits releases.hashicorp.com, so we'll add this e2etest as
some extra insurance that this works end-to-end.
Since we now have a guide that recommends some specific ways to run
Terraform in automation, we can mimic those suggestions in an e2e test and
thus ensure they keep working.
Here we test the three different approaches suggested in the guide:
- init, plan, apply (main case)
- init, apply (e.g. for deploying to a QA/staging environment)
- init, plan (e.g. for verifying a pull request)
In 6712192724 we stopped counting data
source destroys in the destroy tally since they are an implementation
detail.
This caused this test to start failing, though since the new behavior is
correct here we just update the test to match.
Shell tab completion for all of the subcommands under
"terraform workspace", providing the appropriate kind of auto-complete for
each argument, along with completion for any flags.
This helper is a Predictor for the "complete" package that tries to
auto-complete workspace names from the current backend, if it's
initialized and operable.
The predictors built in to the "complete" package assume that the same
type of argument is repeated indefinitely, but most Terraform commands
don't work like that, so this helper allows us to define a sequence of
predictors that apply to each argument in turn.
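A sketch of how such a helper might satisfy the complete package's
Predictor interface (the real helper may differ in detail):

    import "github.com/posener/complete"

    // predictSequence applies a different predictor to each argument
    // position in turn, rather than repeating one predictor forever.
    type predictSequence []complete.Predictor

    func (p predictSequence) Predict(args complete.Args) []string {
        idx := len(args.Completed) // position of the argument being typed
        if idx >= len(p) {
            return nil // past the end of the sequence: predict nothing
        }
        return p[idx].Predict(args)
    }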
This new option allows importing without configuration present.
Configuration is required by default as a confirmation that the provided resource name is correct, but it can be useful to override this in tools that wrap Terraform to do more involved operations.
In #15884 we adjusted the plan output to give an explicit command to run
to apply a plan, whereas before this command was just alluded to in the
prose.
Since releasing that, we've got good feedback that it's confusing to
include such instructions when Terraform is running in a workflow
automation tool, because such tools usually abstract away exactly what
commands are run and require users to take different actions to
proceed through the workflow.
To accommodate such environments while retaining helpful messages for
normal CLI usage, here we introduce a new environment variable
TF_IN_AUTOMATION which, when set to a non-empty value, is a hint to
Terraform that it isn't being run in an interactive command shell and
it should thus tone down the "next steps" messaging.
The documentation for this setting is included as part of the "...in
automation" guide since it's not generally useful in other cases. We also
intentionally disclaim comprehensive support for this since we want to
avoid creating an extreme number of "if running in automation..."
codepaths that would increase the testing matrix and hurt maintainability.
The focus is specifically on the output of the three commands we give in
the automation guide, which at present means the following two situations:
* "terraform init" does not include the final paragraphs that suggest
running "terraform plan" and tell you in what situations you might need
to re-run "terraform init".
* "terraform plan" does not include the final paragraphs that either
warn about not specifying "-out=..." or instruct to run
"terraform apply" with the generated plan file.
In 3ea1592 the plan rendering was refactored to add an extra indirection
of producing a display-oriented plan object first and then rendering from
that object.
There was a logic error while adapting the existing plan rendering code
to use the new display-oriented object: the core InstanceDiff object sets
the "Destroy" flag (a boolean) for both DiffDestroy and DiffDestroyCreate,
and so this code previously checked r.Destroy to recognize the
"destroy-create" case. This was incorrectly adapted to a check for the
display action being DiffDestroy, when it should actually have been
DiffDestroyCreate.
The effect of this bug was to cause the "(forces new resource)"
annotations to not be displayed on attributes, though the resource-level
information still correctly reflected that a new resource was required.
This fix restores the attribute-level annotations.
The previous diff presentation was rather "wordy", and not very friendly
to those who can't see color either because they have color-blindness or
because they don't have a color-supporting terminal.
This new presentation uses the actual symbols used in the plan output
and tries to be more concise. It also uses some framing characters to
try to separate the different stages of "terraform plan" to make it
easier to visually navigate.
The apply command also adopts this new plan presentation, in preparation
for "terraform apply" (with interactive plan confirmation) becoming the
primary, safe workflow in the next major release.
Finally, we standardize on the terminology "perform" and "actions" rather
than "execute" and "changes" to reflect the fact that reading is now an
action and that isn't actually a _change_.
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.
This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
in the UI. This applies all the relevant logic to transform the
physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
plan object.
For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.
Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.
This introduces to core a new diff action type for "refresh". Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.
Previously we were checking required_version only during "real" operations, and not during initialization. Catching it during init is better because that's the first command users run on a new working directory.
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
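For example, a typical helper now marks itself so that a failure is
attributed to the calling test rather than the helper (helper name here
is illustrative):

    import "testing"

    func checkState(t *testing.T, got, want string) {
        t.Helper() // blame the caller's line in failure output
        if got != want {
            t.Fatalf("wrong state\ngot:  %s\nwant: %s", got, want)
        }
    }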
Fix the -state and -state-out wording to be consistent with other
commands. Remove the erroneous reference to remote state in the website
version of the flag description.
While the `local.Local` backend is the only implementation of
`backend.Local`, creating the backend with `ForceLocal` bypasses the
`backend.Backend` in the `local.Local` causing a local state to be
implicitly created rather than using the configured state backend.
Add a test that imports into a configured backend (using the "local"
backend as a remote state proxy). This further confirms the confusing
nature of ForceLocal, as the backend _is_ local, but not from the
viewpoint of meta.Backend.
This restores the earlier behavior of the first positional argument to
terraform init in 0.9, but as a command line option.
The positional argument was removed to improve consistency with other
commands that take a working directory as their first positional argument.
It was originally intended that this functionality would return in a
later release along with some other general improvements to Terraform's
module handling, but we're introducing here an interim solution that
uses the existing module source concept, to allow for easier porting of
workflows that previously depended on the automatic copy behavior.
In a future release this feature may change again as the module
improvements design firms up, but we expect it to be broadly compatible
with this temporary state.
In order to use a backend for the state commands, we need an initialized
meta. Use a single Meta instance rather than temporary ones to make sure
the backends are initialized properly.
This e2etest runs an init, plan, apply, destroy sequence against a test
configuration using the real template and null providers downloaded from
the official repository.
This test _does_ trample a bit on the scope of some already-existing
tests, but this is mainly just to check our assumptions about how
Terraform behaves to ensure that we can reach our main conclusion here:
that the main Terraform workflow commands interact correctly with each
other in real use and we can complete the full workflow.
We already have good tests for the business logic around provider
installation, but the existing tests all stub out the main repository
server. This test completes that coverage by verifying that the installer
is able to run against the real repository and install an official release
of the template provider.
This basic test is here primarily because it's one of the few that can
run without reaching out to external services, and so it means our usual
test runs will catch situations where the main executable build is
somehow broken.
The version command itself is not very interesting to test, but it's
convenient in that its behavior is very predictable and self-contained.
Previously we had no automated testing of whether we can produce a
Terraform executable that actually works. Our various functional tests
have good coverage of specific Terraform features and whole operations,
but we lacked end-to-end testing of actual usage of the generated binary,
without any stubbing.
This package is intended as a vehicle for such end-to-end testing. When
run normally under "go test" it will produce a build of the main Terraform
binary and make it available for tests to execute. The harness exposes
a flag for whether tests are allowed to reach out to external network
services, controlled with our standard TF_ACC environment variable, so
that basic local tests can be safely run as part of "make test" while
more elaborate tests can be run easily when desired.
It also provides a separate mode of operation where the included script
make-archive.sh can be used to produce a self-contained test archive that
can be copied to another system to run the tests there. This is intended
to allow testing of cross-compiled binaries, by shipping them over to
the target OS and architecture to run without requiring a full Go compiler
installation on the target system.
The goal here is not to re-test functionality that's already
well-covered by our existing tests, but rather to test chains of normal
operations against the built binary that are not otherwise tested
together.
The improved err scanner loop in meta causes these to race. There's no
need to write back to the same commands struct, so just use a new
instance in each iteration.
Meta.process was relying on the system readdir to order the arguments,
but readdir doesn't guarantee any ordering. Read the directory contents
as a whole and sort them in place before adding the tfvars files.
Previously the APIs for state persistence and management had some problematic cases where we depended on hidden mutations of the state structure as side-effects of otherwise-innocent-looking operations, which was a frequent cause of accidental regressions due to faulty assumptions.
This new model attempts to isolate certain state mutations to just within the state managers, and makes the state managers work on separated snapshots of the state rather than on the "live" object to reduce the risk of race conditions.
Due to how the state filter machinery works, passing no arguments is valid
and matches _all_ resources.
It is very unlikely that someone wants to remove everything from state, so
this ends up being a very dangerous default for the "terraform state rm"
command, and surprising for someone who perhaps runs it looking for the
usage information.
So we'll be pragmatic here and reject the no-arguments case for this
command, accepting that it makes the unlikely case of intentionally
deleting all resources harder in order to make it less likely that it
will happen _unintentionally_.
If someone does really want to remove all resources from the state, they
can provide an explicit empty string argument, but this isn't documented
because it's a weird case that doesn't seem worth mentioning.
This fixes #15283.
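The guard itself is a one-liner; a sketch (error text assumed):

    import "errors"

    func checkStateRmArgs(args []string) error {
        if len(args) == 0 {
            // Matching everything by default is too dangerous; an
            // explicit "" argument still matches all resources.
            return errors.New("at least one resource address is required")
        }
        return nil
    }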
The state returned from the testState helper shouldn't rely on any
mutations caused by WriteState. The Init function (which is analogous to
NewState) should set any required fields.
This command serves as an alternative to the human-oriented list of workspaces for scripting use-cases where it's useful to know the _current_ workspace name.
Previously we relied on a constellation of coincidences for everything to
work out correctly with state serials. In particular, callers needed to
be very careful about mutating states (or not) because many different bits
of code shared pointers to the same objects.
Here we move to a model where all of the state managers always use
distinct instances of state, copied when WriteState is called. This means
that they are truly a snapshot of the state as it was at that call, even
if the caller goes on mutating the state that was passed.
We also adjust the handling of serials so that the state managers ignore
any serials in incoming states and instead just treat each Persist as
the next version after what was most recently Refreshed.
(An exception exists for when nothing has been refreshed, e.g. because
we are writing a state to a location for the first time. In that case
we _do_ trust the caller, since the given state is either a new state
or it's a copy of something we're migrating from elsewhere with its
state and lineage intact.)
The intent here is to allow the rest of Terraform to not worry about
serials and state identity, and instead just treat the state as a mutable
structure. We'll just snapshot it occasionally, when WriteState is called,
and deal with serials _only_ at persist time.
This is intended as a more robust version of #15423, which was a quick
hotfix to an issue that resulted from our previous sloppy handling
of state serials but arguably makes the problem worse by depending on
an additional coincidental behavior of the local backend's apply
implementation.
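A sketch of the snapshot-and-serial model under assumed names:

    type State struct{ Serial uint64 }

    func (s *State) DeepCopy() *State { c := *s; return &c }

    type Manager struct {
        pending  *State             // snapshot taken at WriteState
        lastRead *State             // what Refresh most recently saw
        store    func(*State) error // the actual persistence layer
    }

    // WriteState snapshots the state; the caller may keep mutating st.
    func (m *Manager) WriteState(st *State) {
        m.pending = st.DeepCopy()
    }

    // Persist ignores the incoming serial and stamps the snapshot as
    // the successor of whatever was most recently refreshed.
    func (m *Manager) Persist() error {
        if m.lastRead != nil {
            m.pending.Serial = m.lastRead.Serial + 1
        }
        // When nothing was refreshed (first write to this location),
        // the incoming serial and lineage are trusted as-is.
        return m.store(m.pending)
    }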
Skips checksum validation if the `TF_SKIP_PROVIDER_VERIFY` environment variable is set. Undocumented variable, as the primary goal is to significantly improve the local provider development workflow.
We need to release the lock just before deleting the state, in case the backend
can't remove the resource while holding the lock. This is currently true for
Windows local files.
TODO: While there is little safety in locking while deleting the state, it
might be nice to be able to coordinate processes around state deletion, i.e. in
a CI environment. Adding Delete() as a required method of States would allow
the removal of the resource to be delegated from the Backend to the State
itself.
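The ordering constraint in miniature (method names assumed):

    type stateResource interface {
        Unlock() error
        Delete() error
    }

    func destroyState(s stateResource) error {
        // Release the lock first: Windows can't delete a locked file.
        if err := s.Unlock(); err != nil {
            return err
        }
        return s.Delete()
    }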
A common reason to want to use `terraform plan` is to have a chance to
review and confirm a plan before running it. If in fact that is the
only reason you are running plan, this new `terraform apply -auto-approve=false`
flag provides an easier alternative to
P=$(mktemp -t plan)
terraform refresh
terraform plan -refresh=false -out=$P
terraform apply $P
rm $P
The flag defaults to true for now, but in a future version of Terraform it will
default to false.
The import command wasn't loading the full plugin path for discovery.
Run a basic plugin init sequence, and verify we can find a plugin (even
though the plugin is invalid and will fail).
The import command wasn't loading the plugin path at all, relying on the
local directory for binaries.
Load the plugin dir into Meta, and pass in ForceLocal for consistency.
The Backend returned was going to be a Local anyway, so the added check
wasn't ensuring anything.
The "confirm" method was directly checking the meta struct's input field,
but that only represents the -input command line flag, and doesn't
respect the TF_INPUT environment variable.
By calling the Input method instead, we check both.
This fixes #15338.
When using a `state` command, if the `-state` flag is provided we do not
want to modify the Backend state. In this case we should always create a
local state instance.
The backup flag was also being ignored, and some tests were relying on
that, which have been fixed.
If we provide a -state flag to a state command, we do not want terraform
to modify the backend state. This test fails since the state specified
in the backend doesn't exist
Remove "checksum" from the error, and only indicate that the plugin has
changed.
Always show requested versions even if it's "any", and found versions of
plugins.
This change makes various minor adjustments to the rendering of plans
in the output of "terraform plan":
- Resources are identified using the standard resource address syntax,
rather than exposing the legacy internal representation used in the
module diff resource keys. This fixes #8713.
- Subjectively, having square brackets in the addresses made it look more
visually "off" when the same name but with different indices were
shown together with differing-length "symbols", so the symbols are now
all padded and right-aligned to three characters for consistent layout
across all operations.
- The -/+ action is now more visually distinct, using several different
colors to help communicate what it will do and including a more obvious
"(new resource required)" marker to help draw attention to this not
being just an update diff. This fixes #15350.
- The resources are now sorted in a manner that sorts index [10] after
index [9], rather than after index [1] as we did before. This makes it
easier to scan the list and avoids the common confusion where it seems
that there are only 10 items when in fact there are 11-20 items with
all the tens hiding further up in the list.
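Sorting index [10] after [9] means comparing the numeric suffix as a
number rather than as text; a sketch:

    import (
        "strconv"
        "strings"
    )

    // splitIndex separates "name[3]" into ("name", 3); no suffix -> 0.
    func splitIndex(s string) (string, int) {
        open := strings.IndexByte(s, '[')
        if open < 0 || !strings.HasSuffix(s, "]") {
            return s, 0
        }
        n, err := strconv.Atoi(s[open+1 : len(s)-1])
        if err != nil {
            return s, 0
        }
        return s[:open], n
    }

    // lessAddr sorts by name lexically, then by index numerically, so
    // foo[10] lands after foo[9] instead of after foo[1].
    func lessAddr(a, b string) bool {
        na, ia := splitIndex(a)
        nb, ib := splitIndex(b)
        if na != nb {
            return na < nb
        }
        return ia < ib
    }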
Previously init would crash if given these options:
-backend=false -get-plugins=true
This is because the state is used as a source of provider dependency
information, and we need to instantiate the backend to get the state.
To avoid the crash, we now use the following adjusted behavior:
- if -backend=true, we behave as before
- if -backend=false, we instead try to instantiate the backend the same
way any other command would, without modifying its configuration
- if we're able to instantiate the backend, we use it to fetch state
for dependency resolution purposes
- if the backend is not instantiable then we assume it's not yet
configured and proceed with a nil state, which may cause us to see an
incomplete picture of the dependencies but still allows the install
to succeed. Subsequently running "terraform plan" will not work until
the backend is (re-)initialized, so the incomplete picture of required
plugins is safe.
This takes care of a few dangling cases where we were still stringifying
empty version constraints, which created confusing error messages because
an empty constraint renders as the empty string.
For the "no suitable versions available" message, we fall back on the
"provider not found" message if no versions were found even though it's
unconstrained. This should only happen in an edge case where the
provider's index page exists on the releases server but no versions are
yet present.
For the message about plugin protocol versions, this again is an edge
case since with no constraints this should happen only if we release
an incompatible Terraform version but don't release a new version of the
plugin that's compatible. In this case we just show the constraint as
"(any version)" to make sure we always show _something_.
Previously we only did this when _upgrading_, but that's unnecessarily
specific and confusing since e.g. plugins can get upgraded implicitly by
constraint changes, which would not then trigger the purge process.
Instead, we'll assume that the user is able to easily re-download plugins
that were purged here, or if they need more specific guarantees they will
manually manage a plugin directory and disable the auto-install behavior
using `-plugin-dir`.
Now we are able to recognize and handle a few special error situations
from plugin installation with more verbose error messages that give the
user better feedback on how to proceed.
The -plugin-dir option lets the user specify custom search paths for
plugins. This overrides all other plugin search paths, and prevents the
auto-installation of plugins.
We also make sure that the availability of plugins is always checked
during init, even if -get-plugins=false or -plugin-dir is set.
init should always write internal data to the current directory, even
when a path is provided. The inherited behavior no longer applies to the
new use of init.
Now that init can take a directory for configuration, the old behavior
of writing the .terraform data directory into the target path no longer
makes sense. Don't change the dataDir field during init, and write to
the default location.
Clean up all references to Meta.dataDir, and only use the getter method
in case we choose to dynamically override this at some point.
Now when -upgrade is provided to "terraform init" (and plugin installation
isn't disabled) it will:
- ignore the contents of the auto-install plugin directory when deciding
what is "available", thus causing anything there to be reinstalled,
possibly at a newer version.
- if installation completes successfully, purge from the auto-install
plugin directory any plugin-looking files that aren't in the set of
chosen plugins.
As before, plugins outside of the auto-install directory are able to
take precedence over the auto-install ones, and these will never be
upgraded nor purged.
The thinking here is that the auto-install directory is an implementation
detail directly managed by Terraform, and so it's Terraform's
responsibility to automatically keep it clean as plugins are upgraded.
We don't yet have the -plugin-dir option implemented, but once it is it
should circumvent all of this behavior and just expect providers to be
already available in the given directory, meaning that nothing will be
auto-installed, -upgraded or -purged.
Previously we had a "getProvider" function type used to implement plugin
fetching. Here we replace that with an interface type, initially with
just a "Get" function.
For now this just simplifies the interface by allowing the target
directory and protocol version to be members of the struct rather than
passed as arguments.
A later change will extend this interface to also include a method to
purge unused plugins, so that upgrading frequently doesn't leave behind
a trail of unused executable files.
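The new shape, roughly (stand-in types; the real code lives in the
plugin discovery package):

    type Constraints struct{ /* version constraints */ }
    type PluginMeta struct{ Name, Version, Path string }

    // Installer replaces the old getProvider function type.
    type Installer interface {
        Get(name string, req Constraints) (PluginMeta, error)
    }

    // ProviderInstaller carries what used to be extra arguments.
    type ProviderInstaller struct {
        Dir                   string // target directory
        PluginProtocolVersion uint
    }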
As of this commit this just upgrades modules, but this option will also
later upgrade plugins and indeed anything else that's being downloaded and
installed as part of the init.
Since there is little left that isn't core, remove the distinction for
now to reduce confusion, since a "core" binary will mostly work except
for provisioners.
We're shifting terminology from "environment" to "workspace". This takes
care of some of the main internal API surface that was using the old
terminology, though is not intended to be entirely comprehensive and is
mainly just to minimize the amount of confusion for maintainers as we
continue moving towards eliminating the old terminology.
Previously we just silently ignored warnings from validating the backend
config, but now that we have a deprecated argument it's important to print
these out so users can respond to the deprecation warning.
Feedback after 0.9 was that the term "environment" was confusing due to
it colliding with several other concepts, such as OS environment
variables, a non-aligned Terraform Enterprise concept, and differing ideas
of "environment" within various organizations.
This new term "workspace" is intended to ease some of that confusion. This
term is not used anywhere else in Terraform today, and we expect it to not
be used in a manner that would be confusing within user organizations.
This begins a deprecation cycle for the "terraform env" family of commands,
instead moving to an equivalent set of "terraform workspace" commands.
There are some remaining references to the old "environment" concept in
the code, which will be cleaned up in a separate change. This change is
instead focused on text visible in the UI and wording within code comments
for the benefit of human maintainers of the code.
This allows you to run multiple concurrent terraform operations against
different environments from the same source directory.
Fixes #14447.
Also removes some dead code which appears to do the same thing as the function I
modified.
When init was modified in 0.9 to initialize a terraform working
directory, the legacy behavior was kept to copy or fetch module sources.
This left the init command without the ability that the plan and apply
commands have to target a specific directory for the operation.
This commit removes the legacy behavior altogether, and allows init to
target a directory for initialization, bringing it into parity with plan
and apply. If one wants to copy a module to the target or current
directory, that will have to be done manually before calling init. We
can later reintroduce fetching modules with init without breaking this
new behavior, by adding the source as an optional second argument.
The unit tests testing the copying of sources with init have been
removed, as well as some out of date (and commented out) init tests
regarding remote states.
ConstrainVersions was documented as returning nil, but it was instead
returning an empty set. Use the Count() method to check for nil or
empty. Add test to verify failed constraints will show up as missing.
"environment" is a very overloaded term, so here we prefer to use the
term "working directory" to talk about a local directory where operations
are executed on a given Terraform configuration.
Each provider plugin will take at least a few seconds to download, so
providing feedback about each one should make users feel less like
Terraform has hung.
Ideally we'd show ongoing progress during the download, but that's not
possible without re-working go-getter, so we'll accept this as an interim
solution for now.
This was added with the idea of using it to override the SHA256 hashes
to match those hypothetically stored in a plan, but we already have a
mechanism elsewhere for populating context fields from plan fields, so
this is not actually necessary.
When running "terraform init" with providers that are unconstrained, we
will now produce information to help the user update configuration to
constrain for the particular providers that were chosen, to prevent
inadvertently drifting onto a newer major release that might contain
breaking changes.
A ~> constraint is used here because pinning to a single specific version
is expected to create dependency hell when using child modules. By using
this constraint mode, which allows minor version upgrades, we avoid the
need for users to constantly adjust version constraints across many
modules, but make major version upgrades still be opt-in.
Any constraint at all in the configuration will prevent the display of
these suggestions, so users are free to use stronger or weaker constraints
if desired, ignoring the recommendation.
Once we've installed the necessary plugins, we'll do one more walk of
the available plugins and record the SHA256 hashes of all of the plugins
we select in the provider lock file.
The file we write here gets read when we're building ContextOpts to
initialize the main terraform context, so any command that works with
the context will then fail if any of the provider binaries change.
By reading our lock file and passing this into the context, we ensure that
only the plugins referenced in the lock file can be used. As of this
commit there is no way to create that lock file, but that will follow soon
as part of "terraform init".
We also provide a way to force a particular set of SHA256s. The main use
for this is to allow us to persist a set of plugins in the plan and
check the same plugins are used during apply, but it may also be useful
for automated tests.
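The hashing side is straightforward; a sketch of digesting one plugin
binary (the lock file's name and on-disk format are implementation
details not shown here):

    import (
        "crypto/sha256"
        "io"
        "os"
    )

    func pluginSHA256(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return nil, err
        }
        return h.Sum(nil), nil
    }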
As well as constraining plugins by version number, we also want to be
able to pin plugins to use specific executables so that we can detect
drift in available plugins between commands.
This commit allows such requirements to be specified, but doesn't yet
specify any such requirements, nor validate them.
Previously we encouraged users to import a resource and _then_ write the
configuration block for it. This ordering creates lots of risk, since
for various reasons users can end up subsequently running Terraform
without any configuration in place, which then causes Terraform to want
to destroy the resource that was imported.
Now we invert this and require a minimal configuration block be written
first. This helps ensure that the user ends up with a correlated resource
config and state, protecting against any inconsistency caused by typos.
This addresses #11835.
Previously we deferred validation of the resource address on the import
command until we were in the core guts, which caused the error responses
to be rather unhelpful.
By validating these things early we can give better feedback to the user.
For some reason there was a block of commented-out tests for the refresh
command in the test file for the import command. Here we remove them to
reduce the noise in this file.
Add discovery.GetProviders to fetch plugins from the releases site.
This is an early version, with no tests, that only (probably) fetches
plugins from the default location. The URLs are still subject to change,
and since there are no plugin releases, it doesn't work at all yet.
Instead of providing a path in BackendOpts, provide a loaded
*config.Config instead. This reduces the number of places where
configuration is loaded.
This new command prints out the tree of modules annotated with their
associated required providers.
The purpose of this command is to help users answer questions such as
"why is this provider required?", "why is Terraform using an older version
of this provider?", and "what combination of modules is creating an
impossible provider version situation?"
For configurations using many modules this sort of question is likely to
come up a lot once we support versioned providers.
As a bonus use-case, this command also shows explicitly when a provider
configuration is being inherited from a parent module, to help users to
understand where the configuration is coming from for each module when
some child modules provide their own provider configurations.
We're going to use config to determine provider dependencies, so we need
to always provide a config when instantiating a context or we'll end up
loading no providers at all.
We previously had a test for running "terraform import -config=''" to
disable the config entirely, but this test is now removed because it makes
no sense. The actual functionality it tests still remains for now,
but it will be removed in a subsequent commit when we start requiring that
a resource to be imported must already exist in configuration.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
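The resolver contract, sketched with stand-in types:

    // ProviderFactory constructs a single provider instance.
    type ProviderFactory func() (interface{}, error)

    // Requirements maps provider names to version constraints.
    type Requirements map[string]string

    // ProviderResolver produces the final factory map once the full set
    // of dependencies is known, reporting any unresolvable providers.
    type ProviderResolver interface {
        ResolveProviders(reqd Requirements) (map[string]ProviderFactory, []error)
    }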
Previously the set of providers was fixed early on in the command package
processing. In order to be version-aware we need to defer this work until
later, so this interface exists so we can hold on to the possibly-many
versions of plugins we have available and then later, once we've finished
determining the provider dependencies, select the appropriate version of
each provider to produce the final set of providers to use.
This commit establishes the use of this new mechanism, and thus populates
the provider factory map with only the providers that result from the
dependency resolution process.
This disables support for internal provider plugins, though the
mechanisms for building and launching these are still here vestigially,
to be cleaned up in a subsequent commit.
This also adds a new awkward quirk to the "terraform import" workflow
where one can't import a resource from a provider that isn't already
mentioned (implicitly or explicitly) in config. We will do some UX work
in subsequent commits to make this behavior better.
This breaks many tests due to the change in interface, but to keep this
particular diff reasonably easy to read the test fixes are split into
a separate commit.
Currently this doesn't matter much, but we're about to start checking the
availability of providers early on and so we need to use the correct name
for the mock set of providers we use in command tests, which includes
only a provider named "test".
Without this change, the "push" tests will begin failing once we start
verifying this, since there's no "aws" provider available in the test
context.
Having this as a method of PluginMeta felt most natural, but unfortunately
that means that discovery must depend on plugin and plugin in turn
depends on core Terraform, thus making the discovery package hard to use
without creating dependency cycles.
To resolve this, we invert the dependency and make the plugin package be
responsible for instantiating clients given a meta, using a top-level
function.
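A sketch of the inverted dependency (stand-in types): discovery exports
only metadata, and the plugin package owns the constructor:

    // PluginMeta mirrors what the discovery package knows of a plugin.
    type PluginMeta struct{ Name, Version, Path string }

    // Client manages a plugin child process (the real code wraps
    // hashicorp/go-plugin).
    type Client struct{ path string }

    // NewClient is the top-level function: plugin depends on discovery,
    // never the other way around, so no import cycle can form.
    func NewClient(m PluginMeta) *Client {
        return &Client{path: m.Path}
    }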
Previously we did plugin discovery in the main package, but as we move
towards versioned plugins we need more information available in order to
resolve plugins, so we move this responsibility into the command package
itself.
For the moment this is just preserving the existing behavior as long as
there are only internal and unversioned plugins present. This is the
final state for provisioners in 0.10, since we don't want to support
versioned provisioners yet. For providers this is just a checkpoint along
the way, since further work is required to apply version constraints from
configuration and support additional plugin search directories.
The automatic plugin discovery behavior is not desirable for tests because
we want to mock the plugins there, so we add a new backdoor for the tests
to use to skip the plugin discovery and just provide their own mock
implementations. Most of this diff is thus noisy rework of the tests to
use this new mechanism.
Before this, invoking this codepath would print
Terraform has successfully migrated from legacy remote state to your
configured remote state.%!(EXTRA string=s3)
* core/providersplit: Split OPC Provider to separate repo
As we march towards Terraform 0.10.0, we are going to start building the
terraform providers as separate binaries - this will allow us to
continually release them. Before we go to 0.10.0, we need to be able to
continue building providers in the same manner, therefore, we have
hardcoded the path of the provider in the generate-plugins.go file
The interim solution will require us to vendor the opc provider and any
child dependencies, but when we get to 0.10.0, we will no longer have to
do this - the core will auto download the plugin binary. The plugin
package will have its own dependencies vendored as well.
* core/providersplit: Removing the builtin version of OPC provider
* core/providersplit: Vendoring the OPC plugin
* core/providersplit: update internal plugin list
* core/providersplit: remove unused govendor item
Looking through the operations in node_resource_apply and
node_resource_destroy, there are multiple nil checks for diffApply, so
we need to continue to assume that the diff can be nil through there
until we ensure that it is non-nil in all cases.
So regardless of how we came to get a nil diff in the UiHook PreApply
method, we need to check it.