Add a test to remote-exec to make sure the proper timeout is honored
during apply.
TODO: we need some test helpers for provisioners, so they can all be
verified.
The timeout for a provisioner is expected to only apply to the initial
connection. Keep the context for the communicator.Retry separate from
the global cancellation context.
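A minimal sketch of that separation, assuming a context-aware `communicator.Retry` and the communicator's `Timeout()` accessor (the surrounding function is illustrative):

```go
import (
	"context"

	"github.com/hashicorp/terraform/communicator"
	"github.com/hashicorp/terraform/terraform"
)

func connectWithTimeout(ctx context.Context, comm communicator.Communicator, o terraform.UIOutput) error {
	// The timeout wraps only the initial connection attempt; ctx remains
	// Terraform's global cancellation context for the rest of the run.
	retryCtx, cancel := context.WithTimeout(ctx, comm.Timeout())
	defer cancel()

	return communicator.Retry(retryCtx, func() error {
		return comm.Connect(o)
	})
}
```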
* Updates the chef provisioner to allow specifying a channel
This also updates the omnitruck URL to the current one.
Signed-off-by: Scott Hain <shain@chef.io>
* Update omnitruck URL
Signed-off-by: Scott Hain <shain@chef.io>
Use the new ExitStatus method, and also check the cmd.Err() method for
errors.
Remove leaks from the output goroutines in both provisioners by
deferring their cleanup, and returning early on all error conditions.
The timeout for the remote command was taken from the wrong config
field: the connection timeout, which defaults to 5 minutes, was being
used instead. Any remote command taking more than 5 minutes would be
terminated by disconnecting the communicator. Remove the timeout from
the context, and rely on the global timeout provided by Terraform.
There was no way to get the error from the communicator previously, so
the broken connection was silently ignored and the provisioner returned
successfully. Now we can use the new cmd.Err() method to retrieve any
errors encountered during execution.
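A hedged sketch of how a provisioner might use the new accessors once `Wait()` returns (the writer arguments are illustrative):

```go
import (
	"fmt"
	"io"

	"github.com/hashicorp/terraform/communicator"
	"github.com/hashicorp/terraform/communicator/remote"
)

func runRemote(comm communicator.Communicator, command string, stdout, stderr io.Writer) error {
	cmd := &remote.Cmd{
		Command: command,
		Stdout:  stdout,
		Stderr:  stderr,
	}

	if err := comm.Start(cmd); err != nil {
		return fmt.Errorf("error starting %q: %s", command, err)
	}
	cmd.Wait()

	// Err() surfaces communicator-level failures (e.g. a dropped
	// connection) that were previously ignored.
	if err := cmd.Err(); err != nil {
		return fmt.Errorf("remote command failed: %s", err)
	}
	if status := cmd.ExitStatus(); status != 0 {
		return fmt.Errorf("remote command exited with status %d", status)
	}
	return nil
}
```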
This new argument allows overriding the working directory of the child process, with the default still being the working directory of Terraform itself.
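Presumably this maps onto the `Dir` field of `os/exec`; a small sketch under that assumption (`workingDir` stands in for whatever the new argument is decoded into):

```go
import (
	"context"
	"os/exec"
)

func buildCommand(ctx context.Context, shell, flag, command, workingDir string) *exec.Cmd {
	cmd := exec.CommandContext(ctx, shell, flag, command)
	if workingDir != "" {
		// Override the child's working directory; leaving Dir empty keeps
		// the default of Terraform's own working directory.
		cmd.Dir = workingDir
	}
	return cmd
}
```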
There is no reason to retry around the execution of remote scripts. We've
already established a connection, so the only thing that could happen here is
to continually retry uploading or executing a script that can't succeed.
This also simplifies streaming the output from the command, which
doesn't need such explicit synchronization. Closing the output pipes is
sufficient to stop the copyOutput functions, and they don't close over
any values that are accessed again after the command executes.
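An illustrative sketch of that simplification, assuming the `io.Pipe`/`go-linereader` pattern used elsewhere in the provisioners (`streamOutput` is made up; `copyOutput` mirrors the function named above):

```go
import (
	"io"

	"github.com/hashicorp/terraform/terraform"
	linereader "github.com/mitchellh/go-linereader"
)

func streamOutput(o terraform.UIOutput, run func(stdout, stderr io.Writer) error) error {
	outR, outW := io.Pipe()
	errR, errW := io.Pipe()
	// Closing the write ends is all that's needed to stop the copy
	// goroutines; there is no other shared state to synchronize.
	defer outW.Close()
	defer errW.Close()

	go copyOutput(o, outR)
	go copyOutput(o, errR)

	return run(outW, errW)
}

func copyOutput(o terraform.UIOutput, r io.Reader) {
	lr := linereader.New(r)
	for line := range lr.Ch {
		o.Output(line)
	}
}
```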
Currently the provisioner will fail if the `hab` user already exists on
the target system.
This adds a check to see if we need to create the user before trying to
add it.
Fixes #17159
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
This change allows the Habitat supervisor service name to be
configurable. Currently it is hard coded to `hab-supervisor`.
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
First successful run with private origin and HAB_AUTH_TOKEN set
Update struct, schema, and decodeConfig names to more sensible versions
Cleaned up formatting
Update habitat provisioner docs
Remove unused unitstring
Previously the provisioner did not wait until the Salt operation had completed before returning, causing some operations not to be applied, and causing the output to get swallowed.
Now we wait until the remote work is complete, and copy output into the Terraform log in a similar way as is done for other provisioners.
Fixes #15921
When Terraform re-creates an existing node/client with the chef provisioner,
the already existing client (which has old keys) must be removed from
the vault items. Afterwards, the chef-vault will be updated with the
newly created client (which has the new keys). Therefore, the recreated
client will be able to decrypt the vault items properly.
In #15870 we got good feedback that it'd be more useful to have the
various filename-accepting arguments on this provisioner instead accept
strings that represent the contents of such files, so that they can be
generated from elsewhere in the Terraform config.
This change does not achieve that, but it does make room for doing this
later by renaming "minion_config" to "minion_config_file" so that we
can later add a "minion_config" option alongside that takes the file
content, and deprecate "minion_config_file".
Ideally we'd just implement the requested change immediately, but
unfortunately the release schedule doesn't have time for this, so this
is a pragmatic change that allows us to make the full requested change
at a later date without backward incompatibilities.
This change is safe because the salt-masterless provisioner has not yet
been included in a release at the time of this commit.
The code here was previously assuming that d.State() was equivalent to
the schema.ProvRawStateKey due to them both being of type InstanceState,
but that is in fact not true since a state object contains some transient
information that is _not_ part of the persisted state, including the
connection information we need here.
Calling ResourceData.State() constructs a _new_ state based on its stored
values, so the constructed object is lacking this transient information.
We need to use the specific state object provided by the caller in order
to get access to the transient connection configuration.
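Roughly what the fix looks like inside an applyFn, assuming the helper/schema context keys (ProvRawStateKey, ProvOutputKey) used by the built-in provisioners (a sketch, not the exact diff):

```go
import (
	"context"

	"github.com/hashicorp/terraform/communicator"
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/terraform"
)

func applyFn(ctx context.Context) error {
	// Use the raw state passed by the caller; it carries the transient
	// connection info that a state rebuilt via ResourceData.State() lacks.
	s := ctx.Value(schema.ProvRawStateKey).(*terraform.InstanceState)
	o := ctx.Value(schema.ProvOutputKey).(terraform.UIOutput)

	comm, err := communicator.New(s)
	if err != nil {
		return err
	}
	if err := comm.Connect(o); err != nil {
		return err
	}
	defer comm.Disconnect()

	// ... provisioner-specific work ...
	return nil
}
```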
Unfortunately there is no automated test coverage for this because we have
no good story for testing provisioners that use "communicator". While such
tests could potentially be written, we'd like to get this in somewhat
quickly to unblock a release, rather than delaying to design and implement
some sort of mocking system for this.
TestResourceProvider_stop uses a goroutine, which means that calls to *testing.T methods from within that goroutine will fail silently.
The test now accepts that an error occurring within the goroutine is lost. It also adds some more verbose logging to explain what is happening.
It turns out that `d.GetOk` also returns `false` when the user _did_ actually supply a value in the config, but that value still needs to be evaluated before it can be used.
So instead of passing a `ResourceData` we now pass a `ResourceConfig`,
which makes much more sense for doing the validation anyway.
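As a hedged sketch, a validation function over the raw `ResourceConfig` might look like this, loosely modelled on the file provisioner's source/content check (the details are illustrative):

```go
import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

func validateFn(c *terraform.ResourceConfig) (ws []string, es []error) {
	// Count an argument as provided if it is present or still awaiting
	// evaluation (computed); GetOk on ResourceData reports the latter as
	// unset, which is exactly the problem described above.
	numSrc := 0
	for _, k := range []string{"source", "content"} {
		if _, ok := c.Get(k); ok || c.IsComputed(k) {
			numSrc++
		}
	}
	if numSrc != 1 {
		es = append(es, fmt.Errorf("must provide one of %q or %q", "source", "content"))
	}
	return ws, es
}
```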
The tests did pass, but that was because they only tested part of the changes. By using the `schema.TestResourceDataRaw` function the schema and config are better tested, and so they pointed out a problem with the schema of the Chef provisioner.
The `Elem` fields did not have a `*schema.Schema` but a `schema.Schema`, and in an `Elem` schema only the `Type` field may (and must) be set. Any other fields, like `Optional`, are not allowed here.
Besides fixing that problem, I also did a little refactoring and cleanup, mainly making `ProvisionerS` private (`provisioner`) and removing the deprecated fields.
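For illustration, the corrected shape of one of those `Elem` attributes (the attribute name is just an example):

```go
import "github.com/hashicorp/terraform/helper/schema"

var exampleSchema = map[string]*schema.Schema{
	"ohai_hints": &schema.Schema{
		Type:     schema.TypeList,
		Optional: true,
		// In an Elem schema only Type may (and must) be set; flags such
		// as Optional belong on the outer schema, not here.
		Elem: &schema.Schema{Type: schema.TypeString},
	},
}
```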
1. Migrate `chef` provisioner to `schema.Provisioner`:
* The `chef.Provisioner` structure was renamed to `ProvisionerS` and is now decoded from `schema.ResourceData` instead of `terraform.ResourceConfig`, using a simple copy-paste-based solution;
* Added a simple schema without any validation yet.
2. Support a `ValidateFunc` validation function: implemented in the `file` and `chef` provisioners.
Storing error values in an atomic.Value panics if successive values
have different dynamic types. Wrap the error value in a consistent
struct type to avoid the panic.
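A minimal illustration of the wrapping: `sync/atomic.Value` rejects `Store` calls with inconsistent concrete types, which different error implementations are, so every value is stored inside the same struct type (names are illustrative).

```go
import "sync/atomic"

// errWrap gives every stored value the same concrete type, regardless of
// the dynamic type of the underlying error.
type errWrap struct {
	err error
}

type safeErr struct {
	v atomic.Value
}

func (s *safeErr) Store(err error) {
	s.v.Store(errWrap{err: err})
}

func (s *safeErr) Load() error {
	if w, ok := s.v.Load().(errWrap); ok {
		return w.err
	}
	return nil
}
```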
Make sure we return a nil error on success
If the shell spawns a subprocess which doesn't close the output file
descriptors, the exec.Cmd will block on Wait() (see
golang.org/issue/18874). Use an os.Pipe to provide the command with a
real file descriptor so the exec package doesn't need to do the copy
manually. This in turn may block our own reading goroutine, but we can
select on that and leave it for cleanup later.
Fixes #10788
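A sketch of that approach, with illustrative names for the output writer and command parameters:

```go
import (
	"context"
	"io"
	"os"
	"os/exec"
)

func runLocal(ctx context.Context, output io.Writer, shell, flag, command string) error {
	pr, pw, err := os.Pipe()
	if err != nil {
		return err
	}
	defer pr.Close()

	cmd := exec.CommandContext(ctx, shell, flag, command)
	// Handing exec a real *os.File means Wait() doesn't have to copy the
	// output itself, so it can't be blocked by a subprocess that keeps the
	// descriptor open (golang.org/issue/18874).
	cmd.Stdout = pw
	cmd.Stderr = pw

	startErr := cmd.Start()
	pw.Close() // the child process holds its own copy of the write end
	if startErr != nil {
		return startErr
	}

	copyDoneCh := make(chan struct{})
	go func() {
		defer close(copyDoneCh)
		io.Copy(output, pr) // may block until every write end is closed
	}()

	err = cmd.Wait()

	// Our own reader may still be blocked by a lingering subprocess;
	// select on it alongside cancellation and leave cleanup for later.
	select {
	case <-copyDoneCh:
	case <-ctx.Done():
	}
	return err
}
```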
This checks `IsComputed` prior to attempting to use the JSON
configurations. Due to a change in 0.8, the prior check for mere map
existence would always succeed even with a computed value (as designed),
but we forgot to update the provisioners to account for that.
There are other provisioners that also do this but to no ill effect
currently. I've only changed Chef since we know that is an issue.
This issue doesn't affect 0.9 due to helper/schema doing this
automatically for provisioners.
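As a hedged illustration of the guard (the attribute name and the decoding are illustrative, not the Chef provisioner's actual field handling):

```go
import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

func decodeAttributes(c *terraform.ResourceConfig) (map[string]interface{}, error) {
	// A computed value can't be used yet; previously the mere presence of
	// the key was enough to make us try (and fail) to decode it.
	if c.IsComputed("attributes") {
		return nil, nil
	}

	raw, ok := c.Get("attributes")
	if !ok {
		return nil, nil
	}

	attrs, ok := raw.(map[string]interface{})
	if !ok {
		return nil, fmt.Errorf("%q must be a map", "attributes")
	}
	return attrs, nil
}
```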