Currently the provisioner will fail if the `hab` user already exists on
the target system.
This adds a check to see if we need to create the user before trying to
add it.
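One way to guard the creation, sketched here with an illustrative shell command (an assumption, not the actual fix):

```go
// a minimal sketch, assuming the provisioner runs shell commands on the
// target host: only add the user when getent finds no existing entry
const createHabUser = `getent passwd hab >/dev/null 2>&1 || useradd hab`
```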
Fixes #17159
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
This change allows the Habitat supervisor service name to be
configurable. Currently it is hard coded to `hab-supervisor`.
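A sketch of what the configurable name might look like, assuming a helper/schema-style field (the attribute name `service_name` is illustrative):

```go
import "github.com/hashicorp/terraform/helper/schema"

// a sketch, not the actual schema: the service name becomes an optional
// attribute whose default preserves the old hard-coded value
var supervisorSchema = map[string]*schema.Schema{
	"service_name": {
		Type:     schema.TypeString,
		Optional: true,
		Default:  "hab-supervisor",
	},
}
```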
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
Moves the nested select statements for backend operations into a single
function. The only difference in this part was that apply called
PersistState, which should be harmless regardless of the type of
operation being run.
The error was being silently dropped before.
There is an interpolation error, because the plan is canceled before
some of the resources can be evaluated. There might be a better way to
handle this in the walk cancellation, but the behavior has not changed.
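Roughly, the shared logic reduces to a single select along these lines (a sketch with illustrative names, not the actual function; apply additionally persists state once the operation completes):

```go
import "context"

// a sketch of the consolidated wait; plan and apply previously
// duplicated this logic in nested selects
func opWait(done <-chan struct{}, stop, cancel context.Context, stopFn func()) (canceled bool) {
	select {
	case <-done:
		// the operation finished on its own
	case <-stop.Done():
		// graceful interrupt: ask the operation to stop, then wait for
		// either completion or a hard cancel
		go stopFn()
		select {
		case <-done:
		case <-cancel.Done():
			canceled = true
		}
	case <-cancel.Done():
		canceled = true
	}
	return canceled
}
```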
Make the plan and apply shutdown match implementation-wise
Previously, if the user interrupted the running operation, only the first
interrupt was communicated to the operation by canceling the provided
context. A second interrupt would start the shutdown process, but not
communicate this to the running operation. This ordering of events could
cause partial writes of state.
What would happen is that once the command returned, the plugin system
would stop the provider processes. Once the provider processes died, all
pending Eval operations would return with an error and quickly cause the
operation to complete. Since the backend code didn't know that the
process was shutting down imminently, it would continue by attempting to
write out the last known state. Under the right conditions, the process
would exit partway through writing the state file.
Add Stop and Cancel CancelFuncs to the RunningOperation, to allow it to
easily differentiate between the two signals. The backend will then be
able to detect a shutdown and abort more gracefully.
In order to ensure that the backend is not in the process of writing the
state out, the command will always attempt to wait for the process to
complete after cancellation.
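Sketched out, the two signals might be carried like this (the field names follow the text above; the surrounding shape is illustrative):

```go
import "context"

// a sketch, not the actual type: two separate CancelFuncs let the
// backend tell a graceful stop apart from an imminent shutdown
type RunningOperation struct {
	// Stop asks the operation to wind down cleanly, persisting state
	// before it returns.
	Stop context.CancelFunc

	// Cancel signals that shutdown is imminent; the command still
	// waits for Done so a state write is never cut off partway.
	Cancel context.CancelFunc

	// Done is closed once the operation has fully completed.
	Done <-chan struct{}
}
```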
Since an early version of Terraform, the `destroy` command has always
had the `-force` flag to allow auto-approval of the interactive prompt.
Terraform 0.11 introduced `-auto-approve`, defaulting to `false`, for
the `apply` command.
The `-auto-approve` flag was introduced to reduce the ambiguity of its
function, but the `-force` flag was never updated for `destroy`.
People often use wrappers when automating Terraform commands, and the
inconsistency between `apply` and `destroy` means that additional logic
must be added to the wrappers to perform similar functions. Both
commands are more or less able to run with similar syntax, and they
heavily share their code.
This commit updates the `destroy` command to use the `-auto-approve` flag,
making working with the Terraform CLI a more consistent experience.
We leave `-force` in place on `destroy` for the time being and flag it as
deprecated to ensure a safe switchover period.
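A minimal sketch of how the deprecated flag can alias the new one (variable and function names are illustrative):

```go
import (
	"flag"
	"fmt"
	"os"
)

// a sketch: -force remains accepted but folds into -auto-approve,
// emitting a deprecation warning when used
func parseDestroyFlags(args []string) (autoApprove bool) {
	flags := flag.NewFlagSet("destroy", flag.ExitOnError)
	approve := flags.Bool("auto-approve", false, "skip interactive approval before destroying")
	force := flags.Bool("force", false, "deprecated: use -auto-approve instead")
	flags.Parse(args)

	if *force {
		fmt.Fprintln(os.Stderr, "Warning: -force is deprecated; use -auto-approve instead")
	}
	return *approve || *force
}
```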
The plan shutdown test often fails on slow CI hosts, because the plan
completes before the main thread can cancel it. Since attempting to make
the MockProvider concurrent proved too invasive for now, just slow the
test down a bit to help ensure Stop gets called.
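Concretely, the slowdown is just a short artificial delay in the mocked diff, along these lines (a sketch assuming a pre-0.12 MockResourceProvider-style DiffFn hook; the duration is arbitrary):

```go
// a sketch: a brief sleep in the mock's diff hook gives the test's
// interrupt time to land before the plan can complete
p.DiffFn = func(info *terraform.InstanceInfo, s *terraform.InstanceState, c *terraform.ResourceConfig) (*terraform.InstanceDiff, error) {
	time.Sleep(200 * time.Millisecond) // arbitrary; just long enough for Stop to be called
	return &terraform.InstanceDiff{}, nil
}
```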
Similar to NodeApplyableOutput, NodeDestroyableOutput nodes also need to
stay in the graph if any ancestor nodes are targeted.
Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
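In sketch form the hook looks like this (the method name comes from the text above; the receiver and dag types mirror what NodeApplyableOutput uses, as an assumption):

```go
// a sketch: returning true tells the targeting transformer to also keep
// this node, which sits downstream of the targeted ancestors it depends on
func (n *NodeDestroyableOutput) TargetDownstream(targeted, untargeted *dag.Set) bool {
	return true
}
```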
github.com/joyent/triton-go replaced a bunch of other dependencies quite
some time ago, but the replaced dependencies were never removed. This
commit removes them from the vendor manifest and the vendor/ directory.
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.
This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
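The lookup order amounts to something like this (a sketch with illustrative names):

```go
import "fmt"

// a sketch: consult the attributes map first to retain the previous
// behavior, then fall back to the canonical ID field rather than
// erroring out
func instanceID(attrs map[string]string, canonicalID string) (string, error) {
	if id, ok := attrs["id"]; ok && id != "" {
		return id, nil
	}
	if canonicalID != "" {
		return canonicalID, nil
	}
	return "", fmt.Errorf("instance has no id")
}
```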
Since output and local value nodes are always evaluated, if they
reference a resource from the configuration that isn't in the state, the
interpolation could fail.
Prune any local or output values that have no references in the graph.
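A sketch of the pruning pass (the node types and graph helpers are stand-ins for illustration):

```go
// a sketch: drop any local or output value node that no other node
// references, so it is never needlessly evaluated
func pruneUnusedValues(g *Graph) {
	for _, v := range g.Vertices() {
		switch v.(type) {
		case *NodeLocal, *NodeApplyableOutput:
			// UpEdges returns the nodes that depend on v; with none,
			// nothing in the graph references this value
			if g.UpEdges(v).Len() == 0 {
				g.Remove(v)
			}
		}
	}
}
```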