It turns out that while we were correctly checking errors when
ExpectError was set, we weren't checking for the *absence* of an error,
which should be checked as well (no error is still not the error we are
looking for). Also added a few more tests for ExpectError.
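A minimal sketch of the missing check, assuming an ExpectError
*regexp.Regexp field like the one in the helper/resource test
framework; the helper name and surrounding harness are illustrative:

    // Sketch only; not the framework's exact code.
    func checkExpectError(expected *regexp.Regexp, err error) error {
        if expected == nil {
            return err
        }
        if err == nil {
            // No error at all is also a failure when one was expected.
            return fmt.Errorf("expected an error matching %q, but got none", expected)
        }
        if !expected.MatchString(err.Error()) {
            return fmt.Errorf("expected an error matching %q, got: %s", expected, err)
        }
        return nil
    }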
Users commonly ask how the S3 backend can be used in an organization that
splits its infrastructure across many AWS accounts.
We've traditionally shied away from making specific recommendations here
because we can't possibly anticipate the different standards and
regulations that constrain each user. This new section attempts to
describe one possible approach that works well with Terraform's workflow,
with the goal that users will adapt it to suit their own unique needs.
Since we are intentionally not being prescriptive here -- instead
considering this just one of many approaches -- it deviates from our usual
active writing style in several places to avoid giving the impression that
these are instructions to be followed exactly, which in some cases
requires the use of passive voice even though that is contrary to our
documentation style guide. For similar reasons, this section is also
light on specific code examples, since we do not wish to encourage users
to just copy-paste the examples without thinking through the consequences.
Previously our error message here was confusing and redundant:
    Error starting operation: provider.null: invalid version constraint "not valid": Malformed constraint: not valid
Instead, we'll generate a full HCL2 diagnostic here, which results in
something (subjectively) nicer:
    Error: Invalid provider version constraint

    The value "@ 1.0.0" given for provider.null is not a valid version
    constraint.
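For illustration, a hedged sketch of how such a diagnostic can be
constructed with the HCL2 hcl.Diagnostic type; the function name and
arguments here are assumptions, though Severity, Summary, and Detail
are the real hcl.Diagnostic fields:

    // Sketch only; not the exact validation code.
    func invalidConstraintDiag(providerName, constraint string) *hcl.Diagnostic {
        return &hcl.Diagnostic{
            Severity: hcl.DiagError,
            Summary:  "Invalid provider version constraint",
            Detail: fmt.Sprintf(
                "The value %q given for provider.%s is not a valid version constraint.",
                constraint, providerName,
            ),
        }
    }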
At the moment this message is an outlier in that the other validation
errors are all still just plain Go errors, but over time we'll want to
adjust all of these to be full diagnostics so that we can embed source
range information in them to help the user find the offending
configuration.
Previously we required callers to separately call .Validate on the root
module to determine if there were any validation errors, but we did that
inconsistently and would thus see crashes in some cases where later code
would try to use invalid configuration as if it were valid.
Now we run .Validate automatically after config loading, returning the
resulting diagnostics. Since we return diagnostics here, it's possible
to return both warnings and errors.
We return the loaded module even if it's invalid, so callers are free to
ignore returned errors and try to work with the config anyway, though they
will need to be defensive against invalid configuration themselves in
that case.
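A sketch of the resulting loader shape, with loadRootModule and the
Module type standing in for the real API; all names here are
illustrative:

    // Sketch only; not the real loader signatures.
    func loadConfig(dir string) (*Module, tfdiags.Diagnostics) {
        var diags tfdiags.Diagnostics
        mod, err := loadRootModule(dir)
        if err != nil {
            diags = diags.Append(err)
            return nil, diags
        }
        // Validation now runs automatically, with its warnings and
        // errors merged into the returned diagnostics.
        diags = diags.Append(mod.Validate())
        // The module is returned even when diags has errors, so callers
        // may ignore them and work with the config defensively.
        return mod, diags
    }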
As a result of this, all of the commands that load configuration now need
to use diagnostic printing to signal errors. For the moment this just
allows us to return potentially-multiple config errors/warnings in full
fidelity, but also sets us up for later when more subsystems are able
to produce rich diagnostics so we can show them all together.
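The command-side pattern then looks roughly like this; showDiagnostics
stands for the kind of diagnostic-printing helper meant here, and the
names are assumptions rather than the exact command code:

    // Sketch of a command's config-loading path.
    func (c *SomeCommand) run(configPath string) int {
        mod, diags := c.loadConfig(configPath)
        c.showDiagnostics(diags) // all warnings and errors, in full fidelity
        if diags.HasErrors() {
            return 1
        }
        _ = mod // proceed with the loaded module from here
        return 0
    }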
Finally, this commit also removes some stale, commented-out code for the
"legacy" (pre-0.8) graph implementation, which has not been available
for some time.
This creates a unique bucket name for each test, so that tests running
in parallel don't collide, and buckets left over from interrupted tests
don't cause future failures.
Also make sure that buckets are removed, regardless of content.
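A sketch of the naming and cleanup approach, assuming the AWS SDK for
Go; the prefix and helper names are illustrative:

    // testBucketName returns a name unlikely to collide across
    // parallel or interrupted test runs.
    func testBucketName() string {
        return fmt.Sprintf("terraform-backend-test-%x", time.Now().UnixNano())
    }

    // deleteTestBucket empties the bucket before deleting it, since S3
    // refuses to delete a non-empty bucket.
    func deleteTestBucket(t *testing.T, svc *s3.S3, bucket string) {
        resp, err := svc.ListObjects(&s3.ListObjectsInput{Bucket: aws.String(bucket)})
        if err != nil {
            t.Fatalf("listing objects in %s: %s", bucket, err)
        }
        for _, obj := range resp.Contents {
            _, err := svc.DeleteObject(&s3.DeleteObjectInput{
                Bucket: aws.String(bucket),
                Key:    obj.Key,
            })
            if err != nil {
                t.Fatalf("deleting object %s: %s", *obj.Key, err)
            }
        }
        if _, err := svc.DeleteBucket(&s3.DeleteBucketInput{Bucket: aws.String(bucket)}); err != nil {
            t.Fatalf("deleting bucket %s: %s", bucket, err)
        }
    }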
The backend was creating the bucket named in the configuration if it
didn't exist. We don't allow other backends to do this, because these
are not managed resources that Terraform can control.
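Instead the backend can fail fast when the bucket is missing; a sketch
using the AWS SDK for Go, with illustrative names and error wording:

    // Sketch: failing fast instead of creating the missing bucket.
    func checkBucketExists(svc *s3.S3, bucketName string) error {
        _, err := svc.HeadBucket(&s3.HeadBucketInput{Bucket: aws.String(bucketName)})
        if err != nil {
            return fmt.Errorf("S3 bucket %q does not exist or is not accessible: %s", bucketName, err)
        }
        return nil
    }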
Previously there was a problem with double-locking when using the GCS backend with the terraform_remote_state data source.
Here we adjust the locking methodology to avoid that problem.
* Verify discovery works without trailing slash on discovery URL (see the sketch after this list)
* Update registry API docs with browse and search endpoints
* Add sample request/responses
* Add comment to test to indicate expectations
* Fix typo
* Remove trailing slash weirdness
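For the trailing-slash point above, the tolerant form of discovery
looks roughly like this; a sketch only, though the well-known path is
the one Terraform's remote service discovery actually uses:

    // Normalize the host before appending the discovery path, so both
    // "https://example.com" and "https://example.com/" work.
    func discoveryURL(host string) string {
        return strings.TrimSuffix(host, "/") + "/.well-known/terraform.json"
    }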
It's important to match the version of local Terraform with the remote Terraform version in Terraform Enterprise when using the "terraform push" command, or else the uploaded configuration package may not be compatible.
Previously the provisioner did not wait until the Salt operation had completed before returning, causing some operations not to be applied, and causing the output to get swallowed.
Now we wait until the remote work is complete, and copy the output into the Terraform log in the same way as other provisioners do.
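A sketch of the wait-and-stream pattern, assuming a remote.Cmd-style
type from the communicator package; the command string, writers, and
exact Wait semantics are assumptions:

    // Sketch only; the real provisioner plumbs output through more layers.
    func runSalt(comm communicator.Communicator, outW, errW io.Writer) error {
        cmd := &remote.Cmd{
            Command: "salt-call --local state.highstate",
            Stdout:  outW, // later copied into the Terraform log
            Stderr:  errW,
        }
        if err := comm.Start(cmd); err != nil {
            return err
        }
        // Block until the remote Salt run has actually finished,
        // instead of returning as soon as the command has started.
        return cmd.Wait()
    }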
Now that the local backend can be cancelled during plan and refresh, we
don't really need the testShutdownHook. Simplify the tests by just
checking for Stop being called on the provider.
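The simplified assertion is then just something like the following,
assuming a mock provider that records Stop invocations via a
StopCalled flag:

    // Sketch: "p" is an assumed mock provider.
    if !p.StopCalled {
        t.Fatal("provider Stop was not called during shutdown")
    }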