Previously our error message here was confusing and redundant:
    Error starting operation: provider.null: invalid version constraint "not valid": Malformed constraint: not valid
Instead, we'll generate a full HCL2 diagnostic here, which results in
something (subjectively) nicer:
    Error: Invalid provider version constraint

    The value "@ 1.0.0" given for provider.null is not a valid version
    constraint.
At the moment this message is an outlier in that the other validation
errors are all still just plain Go errors, but over time we'll want to
adjust all of these to be full diagnostics so that we can embed source
range information in them to help the user find the offending
configuration.
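A minimal sketch of how such a diagnostic can be constructed, assuming
the HCL 2 hcl.Diagnostic type and the go-version constraint parser; the
helper name, signature, and import paths are illustrative rather than
the actual Terraform code:

    package config

    import (
        "fmt"

        version "github.com/hashicorp/go-version"
        "github.com/hashicorp/hcl/v2"
    )

    // checkProviderVersionConstraint returns an HCL diagnostic (rather than a
    // plain Go error) when the given constraint string cannot be parsed. The
    // optional subject range lets the message point at the offending
    // configuration once source ranges are tracked.
    func checkProviderVersionConstraint(providerName, constraint string, subject *hcl.Range) hcl.Diagnostics {
        var diags hcl.Diagnostics
        if _, err := version.NewConstraint(constraint); err != nil {
            diags = diags.Append(&hcl.Diagnostic{
                Severity: hcl.DiagError,
                Summary:  "Invalid provider version constraint",
                Detail: fmt.Sprintf(
                    "The value %q given for provider.%s is not a valid version constraint.",
                    constraint, providerName,
                ),
                Subject: subject, // may be nil while other checks remain plain errors
            })
        }
        return diags
    }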
Previously we required callers to separately call .Validate on the root
module to determine if there were any value errors, but we did that
inconsistently and would thus see crashes in some cases where later code
would try to use invalid configuration as if it were valid.
Now we run .Validate automatically after config loading, returning the
resulting diagnostics. Since we return diagnostics here, it's possible
to return both warnings and errors.
We return the loaded module even if it's invalid, so callers are free to
ignore returned errors and try to work with the config anyway, though they
will need to be defensive against invalid configuration themselves in
that case.
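Roughly, the loading contract described above looks like the sketch
below, where Module, parseModuleDir, and validateModule are
placeholders standing in for the real loader rather than Terraform's
actual API:

    package config

    import (
        "github.com/hashicorp/hcl/v2"
    )

    // Module stands in for the loaded root module type.
    type Module struct {
        // resources, variables, provider requirements, etc.
    }

    func parseModuleDir(dir string) (*Module, hcl.Diagnostics) {
        // placeholder for parsing the *.tf files in dir
        return &Module{}, nil
    }

    func validateModule(mod *Module) hcl.Diagnostics {
        // placeholder for the semantic checks previously run via .Validate
        return nil
    }

    // loadModule parses the configuration in dir and then always validates it,
    // merging any problems into the returned diagnostics. The module is
    // returned even when the diagnostics contain errors, so callers may ignore
    // them and work with the (possibly invalid) configuration defensively.
    func loadModule(dir string) (*Module, hcl.Diagnostics) {
        var diags hcl.Diagnostics

        mod, moreDiags := parseModuleDir(dir)
        diags = append(diags, moreDiags...)

        diags = append(diags, validateModule(mod)...)

        return mod, diags
    }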
As a result of this, all of the commands that load configuration now need
to use diagnostic printing to signal errors. For the moment this just
allows us to return potentially-multiple config errors/warnings in full
fidelity, but also sets us up for later when more subsystems are able
to produce rich diagnostics so we can show them all together.
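For example, a command can print every diagnostic it receives instead
of bailing out on the first error; this sketch assumes hcl.Diagnostics
and much simpler formatting than Terraform actually uses:

    package command

    import (
        "fmt"
        "os"

        "github.com/hashicorp/hcl/v2"
    )

    // showDiagnostics prints every warning and error in the set, so multiple
    // configuration problems are reported together in one run.
    func showDiagnostics(diags hcl.Diagnostics) {
        for _, d := range diags {
            label := "Error"
            if d.Severity == hcl.DiagWarning {
                label = "Warning"
            }
            fmt.Fprintf(os.Stderr, "%s: %s\n\n%s\n\n", label, d.Summary, d.Detail)
        }
    }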
Finally, this commit also removes some stale, commented-out code for the
"legacy" (pre-0.8) graph implementation, which has not been available
for some time.
This creates a unique bucket name for each test, so that tests running
in parallel don't collide, and buckets left over from interrupted tests
don't cause future failures.
Also make sure that buckets are removed, regardless of content.
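A sketch of what the per-test naming and cleanup can look like using
the Google Cloud Storage Go client; the helper names are illustrative
and error handling is reduced to test failures:

    package gcs

    import (
        "context"
        "fmt"
        "testing"
        "time"

        "cloud.google.com/go/storage"
        "google.golang.org/api/iterator"
    )

    // testBucketName returns a bucket name that should be unique per test run,
    // so tests running in parallel (and leftovers from interrupted runs) do
    // not collide.
    func testBucketName() string {
        return fmt.Sprintf("tf-backend-test-%d", time.Now().UnixNano())
    }

    // cleanupBucket deletes every object in the bucket and then the bucket
    // itself, so a failed or interrupted test cannot leave behind a non-empty
    // bucket that breaks future runs.
    func cleanupBucket(ctx context.Context, t *testing.T, client *storage.Client, name string) {
        bkt := client.Bucket(name)
        it := bkt.Objects(ctx, nil)
        for {
            obj, err := it.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                t.Errorf("listing objects in %q: %v", name, err)
                break
            }
            if err := bkt.Object(obj.Name).Delete(ctx); err != nil {
                t.Errorf("deleting object %q: %v", obj.Name, err)
            }
        }
        if err := bkt.Delete(ctx); err != nil {
            t.Errorf("deleting bucket %q: %v", name, err)
        }
    }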
The backend was creating the bucket named in the configuration if it
didn't exist. We don't allow other backends to do this, because these
buckets are not managed resources that Terraform can control.
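A hedged sketch of the check this implies, assuming the Google Cloud
Storage Go client; the function name and error message are
illustrative:

    package gcs

    import (
        "context"
        "fmt"

        "cloud.google.com/go/storage"
    )

    // ensureBucketExists fails when the configured bucket is missing instead
    // of silently creating it, since the bucket is not a resource the backend
    // is supposed to manage on the user's behalf.
    func ensureBucketExists(ctx context.Context, client *storage.Client, name string) error {
        _, err := client.Bucket(name).Attrs(ctx)
        if err == storage.ErrBucketNotExist {
            return fmt.Errorf("storage bucket %q does not exist; create it first or configure an existing bucket", name)
        }
        return err
    }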
Previously there was a problem with double-locking when using the GCS backend with the terraform_remote_state data source.
Here we adjust the locking methodology to avoid that problem.
* Verify discovery works without trailing slash on discovery URL
* Update registry API docs with browse and search endpoints
* Add sample request/responses
* Add comment to test to indicate expectations
* Fix typo
* Remove trailing slash weirdness
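The trailing-slash items above come down to how relative references
resolve against a base URL: without a trailing slash the final path
segment gets replaced rather than extended. An illustrative
normalization helper (not the actual registry client code) could look
like this:

    package disco

    import (
        "net/url"
        "strings"
    )

    // normalizeDiscoveryURL ensures the discovery base URL ends with a slash
    // so that relative service paths resolve under the base path rather than
    // replacing its final segment. For example, resolving "modules.v1" against
    // "https://example.com/disco" yields "https://example.com/modules.v1",
    // while resolving it against "https://example.com/disco/" yields
    // "https://example.com/disco/modules.v1".
    func normalizeDiscoveryURL(base *url.URL) *url.URL {
        u := *base
        if !strings.HasSuffix(u.Path, "/") {
            u.Path += "/"
        }
        return &u
    }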
It's important to match the local Terraform version with the remote Terraform version in Terraform Enterprise when using the "terraform push" command; otherwise the uploaded configuration package may not be compatible.
Previously the provisioner did not wait until the Salt operation had completed before returning, so some operations were never applied and their output was swallowed.
Now we wait until the remote work is complete and copy its output into the Terraform log, as is done for other provisioners.
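The output-copying half of that can be pictured as below. This is a
sketch rather than the provisioner's real communicator wiring: it
streams remote output into the log and only returns once both the
remote command and the copier have finished.

    package saltmasterless

    import (
        "bufio"
        "io"
        "log"
        "sync"
    )

    // streamToLog copies each line produced by the remote Salt run into the
    // Terraform log so the output is no longer swallowed.
    func streamToLog(r io.Reader, wg *sync.WaitGroup) {
        defer wg.Done()
        scanner := bufio.NewScanner(r)
        for scanner.Scan() {
            log.Printf("[INFO] salt-masterless: %s", scanner.Text())
        }
    }

    // runAndWait is a hypothetical wrapper: start the remote command with its
    // output attached to a pipe, block until the command reports completion,
    // then close the pipe and wait for the copier to drain before returning.
    func runAndWait(start func(output io.Writer) (wait func() error, err error)) error {
        pr, pw := io.Pipe()
        var wg sync.WaitGroup
        wg.Add(1)
        go streamToLog(pr, &wg)

        wait, err := start(pw)
        if err == nil {
            err = wait() // the Salt operation must finish before we return
        }
        pw.Close()
        wg.Wait()
        return err
    }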
Now that the local backend can be cancelled during plan and refresh, we
don't really need the testShutdownHook. Simplify the tests by just
checking for Stop being called on the provider.
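The resulting test pattern is roughly the following, where a stand-in
provider records Stop calls, the operation is cancelled, and the
assertion is simply that Stop ran; the names are illustrative, not the
real test helpers:

    package local

    import (
        "context"
        "sync/atomic"
        "testing"
        "time"
    )

    // stopRecordingProvider records whether Stop was invoked; that is the
    // only thing these tests need from the mock provider.
    type stopRecordingProvider struct {
        stopped int32
    }

    func (p *stopRecordingProvider) Stop() error {
        atomic.StoreInt32(&p.stopped, 1)
        return nil
    }

    func (p *stopRecordingProvider) stopCalled() bool {
        return atomic.LoadInt32(&p.stopped) == 1
    }

    func TestOperationCancelCallsStop(t *testing.T) {
        p := &stopRecordingProvider{}
        ctx, cancel := context.WithCancel(context.Background())

        done := make(chan struct{})
        go func() {
            defer close(done)
            // Stand-in for the running operation: block until cancelled,
            // then stop the provider, as the local backend does for plan
            // and apply.
            <-ctx.Done()
            p.Stop()
        }()

        cancel()
        select {
        case <-done:
        case <-time.After(5 * time.Second):
            t.Fatal("operation did not shut down after cancellation")
        }
        if !p.stopCalled() {
            t.Fatal("provider Stop was never called")
        }
    }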
Add a shutdown hook to verify that a context has been correctly
cancelled, so we can remove the sleep and stop guessing.
Add a plan version of the shutdown test as well.
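The shutdown-hook idea itself amounts to blocking briefly on the
context instead of sleeping; a minimal sketch:

    package local

    import (
        "context"
        "testing"
        "time"
    )

    // assertCancelled fails the test if ctx is not cancelled within a short
    // timeout, replacing the old sleep-and-guess approach.
    func assertCancelled(t *testing.T, ctx context.Context) {
        t.Helper()
        select {
        case <-ctx.Done():
            // cancelled as expected
        case <-time.After(5 * time.Second):
            t.Fatal("context was not cancelled")
        }
    }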