Currently the provisioner will fail if the `hab` user already exists on
the target system.
This adds a check to see if we need to create the user before trying to
add it.
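A minimal sketch of the guard, assuming a hypothetical `run` helper that executes a command on the target machine (the provisioner's real helper names may differ):

```go
// ensureHabUser creates the hab user only when it does not already
// exist. `id hab` exits non-zero for an unknown user, so an error from
// that command means the account is missing. Names are illustrative.
func ensureHabUser(run func(cmd string) error) error {
	if err := run("id hab"); err == nil {
		// The user already exists; creating it again would fail.
		return nil
	}
	return run("useradd --system --no-create-home hab")
}
```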
Fixes #17159
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
This change allows the Habitat supervisor service name to be
configurable. Currently it is hard-coded to `hab-supervisor`.
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
Moves the nested select statements for backend operations into a single
function. The only difference in this part was that apply called
PersistState, which should be harmless regardless of the type of
operation being run.
The error was being silently dropped before.
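As a self-contained sketch of the consolidated shape (names are illustrative, not the backend's actual API), one select now serves every operation type, persistence runs unconditionally, and its error is surfaced:

```go
import "context"

// waitForOperation blocks until the operation finishes or the context
// is canceled, then persists state and returns any error instead of
// dropping it.
func waitForOperation(ctx context.Context, doneCh <-chan struct{},
	stop func(), persist func() error) error {

	select {
	case <-ctx.Done():
		stop()   // ask the running operation to halt
		<-doneCh // wait for it to actually finish
	case <-doneCh:
	}

	// Persisting is harmless for plan as well as apply.
	return persist()
}
```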
There is an interpolation error because the plan is canceled before
some of the resources can be evaluated. There might be a better way to
handle this in the walk cancellation, but the behavior has not changed.
Make the plan and apply shutdown match implementation-wise
Previously, if the user interrupted the running operation, only the
first interrupt was communicated to the operation by canceling the
provided context. A second interrupt would start the shutdown process,
but not communicate this to the running operation. This order of events
could cause partial writes of state.
What would happen is that once the command returned, the plugin system
would stop the provider processes. Once the provider processes died,
all pending Eval operations would return with an error and quickly
cause the operation to complete. Since the backend code didn't know
that the process was shutting down imminently, it would continue by
attempting to write out the last known state. Under the right
conditions, the process would exit partway through writing the state
file.
Add Stop and Cancel CancelFuncs to the RunningOperation, to allow it to
easily differentiate between the two signals. The backend will then be
able to detect a shutdown and abort more gracefully.
In order to ensure that the backend is not in the process of writing the
state out, the command will always attempt to wait for the process to
complete after cancellation.
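Roughly, the shape is (the field names come from the commit text; the surrounding type is simplified):

```go
import "context"

// RunningOperation carries two distinct cancellation signals so the
// backend can tell a graceful stop from an imminent shutdown.
type RunningOperation struct {
	// Done is closed once the operation has fully completed,
	// including any final state persistence.
	Done <-chan struct{}

	// Stop requests a graceful halt: finish in-flight work, then
	// write out state.
	Stop context.CancelFunc

	// Cancel signals imminent shutdown so the backend can abort
	// without risking a partial state write.
	Cancel context.CancelFunc
}
```

After firing either func, the command waits on `Done` so a write of the state file can never be cut off partway.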
The plan shutdown test often fails on slow CI hosts, because the plan
completes before the main thread can cancel it. Since attempting to
make the MockProvider concurrent proved too invasive for now, just slow
the test down a bit to help ensure Stop gets called.
Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to
stay in the graph if any ancestor nodes are targeted.
Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
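Illustratively (the method name comes from the commit; the signature and body here are assumptions), the check can be as simple as keeping the node whenever anything targeted remains downstream of it:

```go
// Node stands in for Terraform's dag vertex type in this sketch.
type Node interface{}

type NodeDestroyableOutput struct{ Name string }

// TargetDownstream reports whether this node should survive target
// pruning: it stays whenever at least one downstream dependent is
// itself targeted.
func (n *NodeDestroyableOutput) TargetDownstream(targeted, untargeted []Node) bool {
	return len(targeted) > 0
}
```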
github.com/joyent/triton-go replaced a bunch of other dependencies quite
some time ago, but the replaced dependencies were never removed. This
commit removes them from the vendor manifest and the vendor/ directory.
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.
This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
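A minimal sketch of the lookup order (simplified types; the real code reads from the instance state's attributes and ID field):

```go
// instanceID checks the attributes map first to retain the previous
// behavior, then falls back to the canonical ID field rather than
// erroring when the attribute is absent during destroy.
func instanceID(attrs map[string]string, id string) string {
	if v, ok := attrs["id"]; ok && v != "" {
		return v
	}
	return id
}
```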
Since output and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.
Prune any local or output values that have no references in the graph.
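Sketched over a plain reference map rather than Terraform's dag package, the pruning keeps only the values something else still refers to:

```go
// pruneUnreferenced returns the subset of value names that at least one
// other node references. The real transformer works on the graph and
// may iterate, since removing one value can orphan another.
func pruneUnreferenced(values []string, refs map[string][]string) []string {
	referenced := map[string]bool{}
	for _, deps := range refs {
		for _, d := range deps {
			referenced[d] = true
		}
	}
	var keep []string
	for _, v := range values {
		if referenced[v] {
			keep = append(keep, v)
		}
	}
	return keep
}
```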
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.
Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we may need to both
evaluate and remove outputs during an apply, we add a new node:
NodeDestroyableOutput.
This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the
output node. This ensures that the output remains in the state as long
as everything that may interpolate the output still exists.
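In sketch form (the transformer name comes from the commit; plain adjacency maps stand in for Terraform's dag package), the new edges look like this:

```go
// destroyOutputEdges returns, for each new destroy node, the nodes it
// must wait on: every descendant of the corresponding output, so the
// output survives while anything can still interpolate it.
func destroyOutputEdges(outputs []string, descendants map[string][]string) map[string][]string {
	edges := make(map[string][]string)
	for _, out := range outputs {
		destroyNode := "destroy." + out // hypothetical naming
		edges[destroyNode] = descendants[out]
	}
	return edges
}
```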
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.
Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Add a complex destroy provisioner testcase using locals, outputs and
variables.
Add that pesky "id" attribute to the instance states for interpolation.
Destroy-time provisioners require us to re-evaluate during destroy.
Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
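As a sketch over a plain edge list (Terraform's dag package has its own types), flipping the outgoing edges of value nodes makes their dependencies wait for evaluation:

```go
// Edge records that From depends on To.
type Edge struct{ From, To string }

// reverseOutgoing flips every edge leaving a value (local/output)
// node, so the nodes it depends on are destroyed only after it has
// been evaluated. The old transformer reversed incoming edges instead.
func reverseOutgoing(edges []Edge, isValue func(string) bool) []Edge {
	out := make([]Edge, 0, len(edges))
	for _, e := range edges {
		if isValue(e.From) {
			e.From, e.To = e.To, e.From
		}
		out = append(out, e)
	}
	return out
}
```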
Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.