Use the new StateLocker field to provide a wrapper for locking the state
during terraform.Context creation in the commands that directly
manipulate the state.
Simplify the use of clistate.Lock by creating a clistate.Locker
instance, which stores the context of locking a state, to allow unlock
to be called without knowledge of how the state was locked.
This allows the backend code to bring the needed UI methods to the point
where the state is locked, and still unlock the state from an outer
scope.
Provide a NoopLocker as well, so that callers can always call Unlock
without verifying the status of the lock.
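As a rough sketch of the shape this takes (the names below follow the description; the exact clistate signatures may differ):

    package example

    // Locker wraps lock and unlock so that callers can always release the
    // lock without knowing how, or whether, it was acquired.
    type Locker interface {
        // Lock acquires the state lock and remembers whatever is needed
        // to release it later.
        Lock(reason string) error
        // Unlock releases a previously acquired lock. It is safe to call
        // even if Lock failed or was never called.
        Unlock() error
    }

    // noopLocker satisfies Locker but does nothing, for code paths where
    // locking is disabled or not applicable.
    type noopLocker struct{}

    func (noopLocker) Lock(string) error { return nil }
    func (noopLocker) Unlock() error     { return nil }
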
Add the StateLocker field to the backend.Operation, so that the state
lock can be carried between the different function scopes of the backend
code. This will allow the backend context to lock the state before it's
read, while allowing the different operations to unlock the state when
they complete.
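Sketched with the Locker interface above (the field placement follows the description; the rest of the Operation struct is elided):

    // Operation carries the state lock between the backend function scopes.
    type Operation struct {
        // StateLocker locks the state while it is read and modified, and
        // travels with the operation so a later scope can release it.
        StateLocker Locker

        // ... other operation fields elided ...
    }

    // runOperation shows the intended flow: lock before the state is
    // read, and unlock when the operation completes.
    func runOperation(op *Operation) error {
        if err := op.StateLocker.Lock("backend: starting operation"); err != nil {
            return err
        }
        defer op.StateLocker.Unlock()

        // ... read state, run the operation, write state ...
        return nil
    }
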
The error was being silently dropped before.
There is an interpolation error, because the plan is canceled before
some of the resources can be evaluated. There might be a better way to
handle this in the walk cancellation, but the behavior has not changed.
Make the plan and apply shutdown match implementation-wise
If the user wishes to interrupt the running operation, only the first
interrupt was communicated to the operation by canceling the provided
context. A second interrupt would start the shutdown process, but not
communicate this to the running operation. This order of events could
cause partial writes of state.
What would happen is that once the command returns, the plugin system
would stop the provider processes. Once the provider processes die, all
pending Eval operations would return with an error, and quickly
cause the operation to complete. Since the backend code didn't know that
the process was shutting down imminently, it would continue by
attempting to write out the last known state. Under the right
conditions, the process would exit part way through the writing of the
state file.
Add Stop and Cancel CancelFuncs to the RunningOperation, to allow it to
easily differentiate between the two signals. The backend will then be
able to detect a shutdown and abort more gracefully.
In order to ensure that the backend is not in the process of writing the
state out, the command will always attempt to wait for the process to
complete after cancellation.
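A compressed sketch of that flow (the RunningOperation here is reduced to the fields relevant to shutdown; names follow the description above):

    package example

    import "context"

    // RunningOperation separates a graceful stop from a hard cancel so the
    // backend can tell the difference and avoid starting a state write it
    // may not be able to finish.
    type RunningOperation struct {
        Stop   context.CancelFunc // first interrupt: finish early, but cleanly
        Cancel context.CancelFunc // second interrupt: shut down now
        Done   <-chan struct{}    // closed once the operation has fully completed
    }

    // waitForOperation relays interrupts to the operation and then always
    // waits for it to finish, so the process never exits part way through
    // writing the state file.
    func waitForOperation(op *RunningOperation, interrupts <-chan struct{}) {
        select {
        case <-interrupts:
            op.Stop()
        case <-op.Done:
            return
        }

        select {
        case <-interrupts:
            op.Cancel()
        case <-op.Done:
            return
        }

        <-op.Done
    }
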
The plan shutdown test often fails on slow CI hosts, because the plan
completes before the main thread can cancel it. Since attempting to make
the MockProvider concurrent proved too invasive for now, just slow the
test down a bit to help ensure Stop gets called.
To avoid breaking automation where plugin-path was assumed to be set
permanently, only remove the plugin-path record if it was explicitly set
to an empty string.
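The rule reduces to something like the sketch below (the helper and its arguments are illustrative, not the actual command code):

    package example

    // maybeUpdatePluginPath keeps the recorded plugin path unless the flag
    // was passed: an explicitly empty value removes the record, any other
    // value replaces it, and an unset flag leaves it alone so existing
    // automation keeps working.
    func maybeUpdatePluginPath(flagSet bool, dirs []string, store func([]string) error) error {
        if !flagSet {
            return nil
        }
        if len(dirs) == 1 && dirs[0] == "" {
            return store(nil) // explicitly set to "": remove the record
        }
        return store(dirs)
    }
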
The existing prompts were worded as if backend configurations were
named, but they can only really be referenced by their type. Change the
wording to reference them as type "X backend". When migrating state,
refer to the backends as the "previously configured" and "newly
configured", since they will often have the same type.
Rather than relying on interrupting Diff, just make sure Stop was called
on the provider. The DiffFn is protected by a mutex in the mock
provider, which means that the tests can't rely on concurrent calls to
diff working.
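The assertion itself only needs a mutex-guarded record of the Stop call, roughly as below (the type here is local to the example, not the real MockProvider):

    package example

    import "sync"

    // stopRecorder records whether Stop was called, guarded by a mutex so
    // a test can read it safely after the operation returns.
    type stopRecorder struct {
        mu         sync.Mutex
        stopCalled bool
    }

    func (r *stopRecorder) Stop() error {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.stopCalled = true
        return nil
    }

    func (r *stopRecorder) StopCalled() bool {
        r.mu.Lock()
        defer r.mu.Unlock()
        return r.stopCalled
    }
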
There's no point in trying to track these; they're lost after each test.
Kill them after a short delay so we don't have goroutines from every single
command test to wade through if we have a stack dump.
Only check for input twice in the meta.confirm method. This prevents an
errant newline from aborting the run while allowing Terraform to exit if
there is no input available. We don't just check for a tty, since we
still rely on being able to pipe input in for testing.
Remove the redundant confirmation loops in the migration code, and only
use the confirm method.
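In outline, the "ask at most twice" rule looks like this (the function shown is a sketch, not the real meta.confirm signature):

    package example

    import "strings"

    // confirm asks for input at most twice: an errant newline gets one
    // retry, but if no input is available at all we give up rather than
    // loop forever.
    func confirm(ask func() (string, error)) (bool, error) {
        for i := 0; i < 2; i++ {
            v, err := ask()
            if err != nil {
                // no input available (e.g. stdin closed): exit instead of
                // retrying indefinitely
                return false, err
            }
            switch strings.ToLower(strings.TrimSpace(v)) {
            case "yes":
                return true, nil
            case "no":
                return false, nil
            }
            // anything else, such as a stray empty line, gets one more chance
        }
        return false, nil
    }
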
Make sure the init inputFalse test actually errors from missing input,
since skipping input will still fail later during provider
initialization. We need to make sure there are two different states that
aren't a noop for migration, and reset the command struct for each run.
Also verify that we don't go into an infinite loop if there is no input.
The duplicate prompts can be confusing when the user confirms that a
migration should happen and we immediately prompt a second time for the
same thing with slightly different wording. The extra hand-holding that
this provides for legacy remote states is less critical now, since it's
been 2 major release cycles since those were removed.
The init command needs to parse the state to resolve providers, but
changes to the state format can cause that to fail with
difficult-to-understand errors. Check the terraform version during init and provide
the same error that would be returned by plan or apply.
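Conceptually the check is just a version comparison before the state is parsed any further; a sketch, assuming go-version for the comparison and leaving the state reading elided:

    package example

    import (
        "fmt"

        version "github.com/hashicorp/go-version"
    )

    // checkStateVersion returns the same kind of error that plan or apply
    // would give when the state was written by a newer Terraform.
    func checkStateVersion(stateVersion, running *version.Version) error {
        if stateVersion.GreaterThan(running) {
            return fmt.Errorf(
                "state was created by Terraform %s, which is newer than the running version %s; "+
                    "upgrade Terraform to at least %s to work with this state",
                stateVersion, running, stateVersion,
            )
        }
        return nil
    }
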
If users run "terraform import" in a directory with no Terraform
configuration files, it's likely that they've made a mistake either by
being in the wrong directory or forgetting to use the -config option
on the command line.
To help users find their mistake in this case, we'll now produce a
specialized error message for this situation:
    Error: No Terraform configuration files

    The directory /home/user/example does not contain any Terraform
    configuration files (.tf or .tf.json). To specify a different
    configuration directory, use the -config="..." command line option.
While here, this also converts some of the other existing messages to
diagnostics so that we can show any configuration warnings along with
the error message, and move towards the new standard error presentation.
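A sketch of how that message can be built as a diagnostic (the tfdiags.Sourceless call and the surrounding command wiring are assumptions here):

    package example

    import (
        "fmt"

        "github.com/hashicorp/terraform/tfdiags"
    )

    // noConfigDiag builds the specialized error shown above when the
    // target directory contains no .tf or .tf.json files.
    func noConfigDiag(dir string) tfdiags.Diagnostics {
        var diags tfdiags.Diagnostics
        diags = diags.Append(tfdiags.Sourceless(
            tfdiags.Error,
            "No Terraform configuration files",
            fmt.Sprintf(
                "The directory %s does not contain any Terraform configuration files "+
                    "(.tf or .tf.json). To specify a different configuration directory, "+
                    "use the -config=\"...\" command line option.",
                dir,
            ),
        ))
        return diags
    }
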
Previously we required callers to separately call .Validate on the root
module to determine if there were any value errors, but we did that
inconsistently and would thus see crashes in some cases where later code
would try to use invalid configuration as if it were valid.
Now we run .Validate automatically after config loading, returning the
resulting diagnostics. Since we return diagnostics here, it's possible
to return both warnings and errors.
We return the loaded module even if it's invalid, so callers are free to
ignore returned errors and try to work with the config anyway, though they
will need to be defensive against invalid configuration themselves in
that case.
As a result of this, all of the commands that load configuration now need
to use diagnostic printing to signal errors. For the moment this just
allows us to return potentially-multiple config errors/warnings in full
fidelity, but also sets us up for later when more subsystems are able
to produce rich diagnostics so we can show them all together.
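In outline, loading now behaves roughly like the sketch below, using stand-in types (the real diagnostics type also distinguishes warnings from errors and carries source locations):

    package example

    // Diagnostics is a stand-in for the real diagnostics type.
    type Diagnostics []error

    func (d Diagnostics) Append(errs ...error) Diagnostics {
        for _, err := range errs {
            if err != nil {
                d = append(d, err)
            }
        }
        return d
    }

    // Module is a stand-in for the loaded root module.
    type Module struct{}

    func (m *Module) Validate() error             { return nil }
    func parseModule(dir string) (*Module, error) { return &Module{}, nil }

    // LoadModule loads the configuration and always validates it, so
    // callers can no longer forget to. The module is returned even when
    // invalid; callers that ignore the diagnostics must handle invalid
    // configuration defensively.
    func LoadModule(dir string) (*Module, Diagnostics) {
        var diags Diagnostics

        mod, err := parseModule(dir)
        if err != nil {
            return nil, diags.Append(err)
        }

        diags = diags.Append(mod.Validate())
        return mod, diags
    }
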
Finally, this commit also removes some stale, commented-out code for the
"legacy" (pre-0.8) graph implementation, which has not been available
for some time.
Now that the local backend can be cancelled during plan and refresh, we
don't really need the testShutdownHook. Simplify the tests by just
checking for Stop being called on the provider.
Add a shutdown hook to verify that a context has been correctly
cancelled, so we can remove the sleep and stop guessing.
Add a plan version of the shutdown test as well.
There was no cancellation context for a plan, so it would always have to
run to completion as SIGINT was being swallowed.
Move the shutdown channel to the command Meta since it's used in
multiple commands.
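Roughly, the wiring now looks like this (only the shutdown-related part of Meta is shown, and the names are illustrative):

    package example

    import "context"

    // Meta holds the shutdown channel shared by the commands, so plan,
    // apply and refresh all see the same interrupts.
    type Meta struct {
        // ShutdownCh receives a value for each interrupt.
        ShutdownCh <-chan struct{}
    }

    // operationContext derives a cancellable context from the shutdown
    // channel, so a plan walk can actually be interrupted instead of the
    // signal being swallowed.
    func (m *Meta) operationContext() (context.Context, context.CancelFunc) {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            select {
            case <-m.ShutdownCh:
                cancel()
            case <-ctx.Done():
            }
        }()
        return ctx, cancel
    }
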
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc. than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
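The interface change, in outline (the exact signatures are an assumption based on the description above):

    package example

    import "github.com/hashicorp/terraform/tfdiags"

    // Old shape: warnings and errors travel separately, with no room for
    // source location detail.
    type validatorBefore interface {
        Validate() (warnings []string, errors []error)
    }

    // New shape: a single diagnostics value per call, which can carry
    // severity and source ranges as they become available.
    type validatorAfter interface {
        Validate() tfdiags.Diagnostics
    }
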
As part of the 0.10 core/provider split we moved this provider, along with
all the others, out into its own repository.
In retrospect, the "terraform" provider doesn't really make sense to be
separated since it's just a thin wrapper around some core code anyway,
and so re-integrating it into core avoids the confusion that results when
Terraform Core and the terraform provider have inconsistent versions of
the backend code and dependencies.
There is no good reason to use a different version of the backend code
in the provider than in core, so this new "internal provider" mechanism
is stricter than the old one: it's not possible to use an external build
of this provider at all, and version constraints for it are rejected as
a result.
This provider is also run in-process rather than in a child process, since
again it's just a very thin wrapper around code that's already running
in Terraform core anyway, and so the process barrier between the two does
not create enough advantage to warrant the additional complexity.
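One way to picture the mechanism, with a stand-in provider interface (the real resolver code differs):

    package example

    // Provider stands in for the real provider interface.
    type Provider interface {
        Configure(config map[string]interface{}) error
    }

    // internalProviders maps provider names to in-process constructors. A
    // provider listed here is never resolved as an external plugin, and
    // version constraints for it are rejected.
    var internalProviders = map[string]func() Provider{
        // "terraform": newTerraformProvider, // hosted inside core itself
    }

    // lookupInternal lets the plugin resolver skip launching a child
    // process for providers that core hosts itself.
    func lookupInternal(name string) (func() Provider, bool) {
        f, ok := internalProviders[name]
        return f, ok
    }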