Currently the provisioner will fail if the `hab` user already exists on
the target system.
This adds a check to see if we need to create the user before trying to
add it.
Fixes #17159
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
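A minimal sketch of the guarded user creation described above, assuming the provisioner runs a shell snippet over the remote connection (the exact commands are assumptions, not the shipped code):

```go
// Hypothetical sketch only: create the hab user just when it does not already
// exist; the real provisioner runs its commands over the configured connection.
const createHabUserIfMissing = `
if ! getent passwd hab >/dev/null 2>&1; then
  sudo useradd --system hab
fi
`
```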
This change allows the Habitat supervisor service name to be
configurable. Currently it is hard coded to `hab-supervisor`.
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
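A hedged sketch of what a configurable service name could look like in the provisioner schema (field name and default are assumptions):

```go
// Assumed field name and default; the shipped schema may differ.
"service_name": &schema.Schema{
	Type:     schema.TypeString,
	Optional: true,
	Default:  "hab-supervisor",
},
```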
First successful run with private origin and HAB_AUTH_TOKEN set
Update struct, schema, and decodeConfig names to more sensible versions
Cleaned up formatting
Update habitat provisioner docs
Remove unused unitstring
Previously the provisioner did not wait until the Salt operation had completed before returning, which caused some operations not to be applied and the output to be swallowed.
Now we wait until the remote work is complete and copy its output into the Terraform log, in a similar way to other provisioners.
Fixes #15921
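A rough sketch of the wait-and-log pattern, using only the standard library; `runRemote` is a hypothetical stand-in for starting the remote command, not the provisioner's real API:

```go
// Hypothetical sketch: stream remote output into the Terraform log and only
// return once the remote salt run has actually finished.
func runAndLog(runRemote func(stdout io.Writer) error) error {
	pr, pw := io.Pipe()
	done := make(chan struct{})
	go func() {
		defer close(done)
		scanner := bufio.NewScanner(pr)
		for scanner.Scan() {
			log.Printf("[INFO] salt-masterless: %s", scanner.Text())
		}
	}()
	err := runRemote(pw) // blocks until the remote command exits
	pw.Close()           // unblocks the scanner goroutine
	<-done               // make sure all output reached the log before returning
	return err
}
```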
When Terraform re-creates an existing node/client with the Chef provisioner,
the already existing client (which has the old keys) must be removed from
the vault items. Afterwards, chef-vault is updated with the newly created
client (which has the new keys), so the recreated client is able to decrypt
the vault items properly.
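An illustrative sketch of the ordering this describes; the helper names are hypothetical, not the provisioner's real functions:

```go
// Illustrative ordering only; these helpers are stand-ins.
if recreateClient {
	removeClientFromVaultItems(oldClientName) // the old keys can no longer decrypt
	deleteClient(oldClientName)
	createClient(newClientName)
	updateVaultItemsWithClient(newClientName) // re-encrypt items for the new keys
}
```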
In #15870 we got good feedback that it'd be more useful to have the
various filename-accepting arguments on this provisioner instead accept
strings that represent the contents of such files, so that they can be
generated from elsewhere in the Terraform config.
This change does not achieve that, but it does make room for doing this
later by renaming "minion_config" to "minion_config_file" so that we
can later add a "minion_config" option alongside that takes the file
content, and deprecate "minion_config_file".
Ideally we'd just implement the requested change immediately, but
unfortunately the release schedule doesn't have time for this so this is
a pragmatic change to allow us to make the full requested change at a
later date without backward incompatibilities.
This change is safe because the salt-masterless provisioner has not yet
been included in a release at the time of this commit.
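A sketch of the intended end state (not what this commit ships), with assumed deprecation wording:

```go
// Later step, sketched here: "minion_config" takes the file contents and the
// renamed "minion_config_file" argument gets deprecated.
"minion_config_file": &schema.Schema{
	Type:       schema.TypeString,
	Optional:   true,
	Deprecated: "use minion_config (file contents) instead",
},
"minion_config": &schema.Schema{
	Type:     schema.TypeString,
	Optional: true,
},
```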
The code here was previously assuming that d.State() was equivalent to
the schema.ProvRawStateKey due to them both being of type InstanceState,
but that is in fact not true since a state object contains some transient
information that is _not_ part of the persisted state, including the
connection information we need here.
Calling ResourceData.State() constructs a _new_ state based on its stored
values, so the constructed object is lacking this transient information.
We need to use the specific state object provided by the caller in order
to get access to the transient connection configuration.
Unfortunately there is no automated test coverage for this because we have
no good story for testing provisioners that use "communicator". While such
tests could potentially be written, we'd like to get this in somewhat
quickly to unblock a release, rather than delaying to design and implement
some sort of mocking system for this.
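A hedged sketch of the distinction; the exact context key usage and communicator constructor may differ from the real code:

```go
// The raw state handed in by the caller still carries the ephemeral connection
// info, whereas d.State() rebuilds an InstanceState from stored values and
// loses it. So build the communicator from that state, not from d.State().
func applyFn(ctx context.Context) error {
	s := ctx.Value(schema.ProvRawStateKey).(*terraform.InstanceState)
	comm, err := communicator.New(s)
	if err != nil {
		return err
	}
	_ = comm
	return nil
}
```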
TestResourceProvider_stop uses a goroutine, which means that any method on *testing.T called from within that goroutine will fail silently.
Now the test accepts that an error occurring within the goroutine is lost. It also adds some more verbose logging to explain what is happening.
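For reference, one common pattern when the error does need to be reported (shown only as a general sketch, not what this commit does) is to hand it back over a channel and fail from the test goroutine itself; `doStop` is a hypothetical stand-in:

```go
errCh := make(chan error, 1)
go func() {
	errCh <- doStop() // hypothetical operation under test
}()
if err := <-errCh; err != nil {
	t.Fatalf("stop failed: %s", err)
}
```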
It turns out that `d.GetOk` also returns `false` when the user _did_ actually supply a value for it in the config, but the value itself needs to be evaluated before it can be used.
So instead of passing a `ResourceData` we now pass a `ResourceConfig`
which makes much more sense for doing the validation anyway.
The tests did pass, but that was because they only tested part of the changes. By using the `schema.TestResourceDataRaw` function the schema and config are better tested and so they pointed out a problem with the schema of the Chef provisioner.
The `Elem` fields did not have a `*schema.Schema` but a `schema.Schema` and in an `Elem` schema only the `Type` field may (and must) be set. Any other fields like `Optional` are not allowed here.
Besides fixing that problem, I also did a little refactoring and cleanup, mainly making `ProvisionerS` private (`provisioner`) and removing the deprecated fields.
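The correct form for such a field, sketched with an illustrative attribute name:

```go
// An Elem schema sets only Type; fields like Optional are not allowed there.
"attributes": &schema.Schema{
	Type:     schema.TypeMap,
	Optional: true,
	Elem:     &schema.Schema{Type: schema.TypeString},
},
```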
1. Migrate `chef` provisioner to `schema.Provisioner`:
* `chef.Provisioner` structure was renamed to `ProvisionerS` and is now decoded from `schema.ResourceData` instead of `terraform.ResourceConfig`, using a simple copy-paste-based solution;
* Added a simple schema without any validation yet.
2. Support a `ValidateFunc` validation function: implemented in the `file` and `chef` provisioners.
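A rough sketch of the overall shape of a `schema.Provisioner`-based provisioner; the real chef schema is larger, the exact function signatures may differ, and `applyFn`/`validateFn` are stand-ins:

```go
func Provisioner() terraform.ResourceProvisioner {
	return &schema.Provisioner{
		Schema: map[string]*schema.Schema{
			"node_name": {
				Type:     schema.TypeString,
				Required: true,
			},
		},
		ApplyFunc:    applyFn,
		ValidateFunc: validateFn,
	}
}
```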
Storing error values to atomic.Value may fail if they have different
dynamic types. Wrap error value in a consistent struct type to avoid
panics.
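A minimal sketch of the wrapper approach:

```go
// atomic.Value requires every Store to use the same concrete type, and
// different error implementations are different concrete types; wrapping
// keeps the stored type constant.
type errWrap struct{ err error }

var lastErr atomic.Value

func storeErr(err error) { lastErr.Store(errWrap{err}) }

func loadErr() error {
	if v, ok := lastErr.Load().(errWrap); ok {
		return v.err
	}
	return nil
}
```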
Make sure we return a nil error on success
If the shell spawns a subprocess which doesn't close the output file
descriptors, the exec.Cmd will block on Wait() (see
golang.org/issue/18874). Use an os.Pipe to provide the command with a
real file descriptor so the exec package doesn't need to do the copy
manually. This in turn may block our own reading goroutine, but we can
select on that and leave it for cleanup later.
Fixes #10788
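A sketch of the pipe-based approach (the log writer parameter is a stand-in for however the provisioner surfaces output):

```go
func runScript(script string, logWriter io.Writer) error {
	pr, pw, err := os.Pipe()
	if err != nil {
		return err
	}
	cmd := exec.Command("/bin/sh", "-c", script)
	cmd.Stdout = pw // a real fd: the exec package does no copying itself
	cmd.Stderr = pw
	if err := cmd.Start(); err != nil {
		pw.Close()
		pr.Close()
		return err
	}
	pw.Close() // the child holds its own copy of the write end

	done := make(chan struct{})
	go func() {
		io.Copy(logWriter, pr) // may block if a grandchild keeps the fd open
		close(done)
	}()

	err = cmd.Wait() // no longer blocks on leaked descriptors (golang.org/issue/18874)
	select {
	case <-done:
	case <-time.After(2 * time.Second):
		// a subprocess still holds the pipe; leave the goroutine for later cleanup
	}
	return err
}
```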
This checks `IsComputed` prior to attempting to use the JSON
configurations. Due to a change in 0.8, the prior check for simple map
existence would always succeed even with a computed value (as designed),
but we forgot to update provisioners to not do that.
There are other provisioners that also do this but to no ill effect
currently. I've only changed Chef since we know that is an issue.
This issue doesn't affect 0.9 due to helper/schema doing this
automatically for provisioners.
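A hedged sketch of the guard; "attributes_json" stands in for whichever JSON argument is being validated, and `ws`/`es` are the usual warning/error slices:

```go
if c.IsComputed("attributes_json") {
	// The value is not known until apply, so don't try to decode it yet.
	return ws, es
}
```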
The provisioner collected all inline commands into a single script which meant
only the exit code of the last command was actually checked for an error.
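One possible shape of the fix, as a sketch rather than the shipped code: run each inline command separately and stop at the first failure; `runRemote` is a hypothetical helper.

```go
for _, line := range inline {
	if err := runRemote(line); err != nil {
		return fmt.Errorf("command %q failed: %s", line, err)
	}
}
```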
* provisioner/chef: Support named run-lists for Policyfiles
Add an optional argument for overriding the Chef Client's initial
run with a named run-list specified by the Policyfile. This is useful
for bootstrapping a node with a one-time setup recipe that deviates
from a policy's normal run-list.
* Update chef client cmd building per review feedback.
Fixes #10463
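A hedged sketch of the command-building step: only append the flag when a named run-list is configured. The field name is an assumption, not the provisioner's real struct.

```go
if p.NamedRunList != "" {
	cmd = fmt.Sprintf("%s --named-run-list %q", cmd, p.NamedRunList)
}
```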
I'm really surprised this flew under the radar for years...
Because each SSH communicator had its own PRNG, two provisioners running
in parallel could (and would) generate identical ScriptPaths, overwrite
each other, and execute the same script. This happened because each PRNG
was seeded with the current time, which can be identical when seeded in
parallel.
Instead, we now share the rand so that the sequence is guaranteed unique.
As an extra measure of robustness, we also multiply by the PID so that
we're protected against two processes starting at the same time.
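A sketch of the idea: a single package-level PRNG seeded from both the clock and the PID, guarded by a mutex because parallel provisioners share it. Names here are illustrative.

```go
var (
	randLock   sync.Mutex
	randShared = rand.New(rand.NewSource(
		time.Now().UnixNano() * int64(os.Getpid()),
	))
)

func randomSuffix() int {
	randLock.Lock()
	defer randLock.Unlock()
	return randShared.Int()
}
```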
Fixes #3605 and adds the functionality suggested in PR #7440.
This PR uses a different approach that (IMHO) feels cleaner and (even more importantly) adds support for Windows at the same time.