Bucket names must be *globally* unique. By including the project ID in
the bucket name we ensure that people don't step on each other's toes
when testing.
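For illustration, a minimal sketch of such a naming helper (the function
name and prefix are assumptions, not the actual test code):

```go
package gcsbackend

import (
	"fmt"
	"os"
	"time"
)

// testBucketName builds a bucket name that embeds the project ID plus a
// timestamp, so concurrent test runs from different projects cannot
// collide in the globally shared bucket namespace.
func testBucketName() string {
	projectID := os.Getenv("GOOGLE_PROJECT")
	return fmt.Sprintf("tf-backend-test-%s-%x", projectID, time.Now().UnixNano())
}
```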
This calls backend.TestBackend() and remote.TestRemoteLocks() for
standardized acceptance tests. It removes custom listing tests since
those are performed by backend.TestBackend(), too.
Since each test uses its own bucket, all tests can be run in parallel.
This resurrects the previously documented but unused "project" option.
This option is required to create buckets (so they are associated with the
right cloud project) but not to access the buckets later on (because their
names are globally unique).
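A sketch of how the option might be declared in the backend schema (the
helper name and exact layout are assumptions, not the real source):

```go
package gcsbackend

import "github.com/hashicorp/terraform/helper/schema"

// backendSchema shows why "project" is merely Optional: it is consulted
// only when the bucket has to be created, while later access relies on
// the globally unique bucket name alone.
func backendSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"bucket": {
			Type:     schema.TypeString,
			Required: true,
		},
		"project": {
			Type:     schema.TypeString,
			Optional: true, // needed only to create the bucket
		},
	}
}
```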
The code is loosely based on state/remote/gcs_test.go. If the
GOOGLE_PROJECT environment variable is set, this test will
1) create a new bucket; error out if the bucket already exists.
2) create a new state
3) list states and ensure that the newly created state is listed
4) ensure that an object with the expected name exists
5) run "state/remote".TestClient()
6) delete the state
The bucket is deleted when the test exits, though this may fail if the
bucket is not empty.
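A rough sketch of that flow, using the cloud.google.com/go/storage API
for the bucket lifecycle (steps 2-6 are elided since they go through the
shared backend test helpers rather than the storage API directly):

```go
package gcsbackend

import (
	"context"
	"fmt"
	"os"
	"testing"
	"time"

	"cloud.google.com/go/storage"
)

func TestGCSBackend(t *testing.T) {
	projectID := os.Getenv("GOOGLE_PROJECT")
	if projectID == "" {
		t.Skip("GOOGLE_PROJECT not set; skipping GCS acceptance test")
	}

	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		t.Fatal(err)
	}

	name := fmt.Sprintf("tf-backend-test-%s-%x", projectID, time.Now().UnixNano())
	bucket := client.Bucket(name)

	// 1) create a new bucket; Create errors out if the name is taken.
	if err := bucket.Create(ctx, projectID, nil); err != nil {
		t.Fatal(err)
	}
	// Deleting a non-empty bucket fails, so step 6 (deleting the state)
	// must have succeeded before this cleanup runs.
	defer func() {
		if err := bucket.Delete(ctx); err != nil {
			t.Errorf("deleting bucket %q: %v", name, err)
		}
	}()

	// 2)-6) would exercise the backend via backend.TestBackend(),
	// "state/remote".TestClient() and friends.
}
```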
This config option was used by the legacy "gcs" client. If set, we use
it for the default state -- all other states still use the "state_dir"
setting.
Calling context.Background() from outside the main() function is
discouraged. The configure functions are only called from
"…/helper/schema".Backend.Configure which provides the Background context,
i.e. a long-lived context we can use for backend communication.
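A sketch of the resulting pattern (field names are assumptions): the
backend simply stores the context it is configured with rather than
minting its own.

```go
package gcsbackend

import (
	"context"

	"cloud.google.com/go/storage"
)

type Backend struct {
	storageContext context.Context
	storageClient  *storage.Client
}

// configure receives the Background context from
// "…/helper/schema".Backend.Configure and keeps it for all later
// storage calls, instead of calling context.Background() here.
func (b *Backend) configure(ctx context.Context) error {
	b.storageContext = ctx

	client, err := storage.NewClient(ctx)
	if err != nil {
		return err
	}
	b.storageClient = client
	return nil
}
```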
While #16243 added the ability to retry getting a state from S3, Put can
return the same InternalError status. Use the same retry logic when
uploading state to S3.
Add a way to inject network errors by setting an immediate deadline on
open consul connections. The consul client currently doesn't retry on
some errors, and will force us to lose our lock.
Once the consul api client is fixed, this test will fail.
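One way to sketch such an injection point (names are assumptions):
record every connection the client dials and expire them on demand. The
Dial method can be plugged into the client's http.Transport.

```go
package consul

import (
	"net"
	"sync"
	"time"
)

// deadlineDialer records every connection it opens so a test can break
// them all at once.
type deadlineDialer struct {
	mu    sync.Mutex
	conns []net.Conn
}

func (d *deadlineDialer) Dial(network, addr string) (net.Conn, error) {
	conn, err := net.Dial(network, addr)
	if err != nil {
		return nil, err
	}
	d.mu.Lock()
	d.conns = append(d.conns, conn)
	d.mu.Unlock()
	return conn, nil
}

// injectErrors puts an immediate deadline on all open connections, so
// the next read or write fails with a timeout, simulating a network
// error without stopping the consul server.
func (d *deadlineDialer) injectErrors() {
	d.mu.Lock()
	defer d.mu.Unlock()
	for _, conn := range d.conns {
		_ = conn.SetDeadline(time.Now())
	}
}
```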
The consul Client is analogous to an http.Client, and we really don't
need more than 1. Configure a single client and store it in the backend.
Replace the default Transport's Dialer to reduce the KeepAlive setting
from 30s to 17s. This avoids racing with the common network timeout
value of 30s, and is also coprime to other common intervals.
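A sketch of that configuration (the constructor name is an assumption;
consulapi.Config does accept a custom Transport):

```go
package consul

import (
	"net"
	"net/http"
	"time"

	consulapi "github.com/hashicorp/consul/api"
)

// newConsulClient builds the single shared client. 17s is prime, hence
// coprime to 30s (and other common intervals), so keep-alive probes
// never line up with network timeouts.
func newConsulClient(config *consulapi.Config) (*consulapi.Client, error) {
	config.Transport = &http.Transport{
		Proxy: http.ProxyFromEnvironment,
		DialContext: (&net.Dialer{
			Timeout:   30 * time.Second,
			KeepAlive: 17 * time.Second,
		}).DialContext,
	}
	return consulapi.NewClient(config)
}
```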
Internal errors from S3 are usually transient, and can be immediately retried.
Make 2 attempts at retrieving the state object before returning an error.
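A sketch of the retry loop for the Get path (type and field names are
assumptions); the same loop would wrap the PutObject call:

```go
package s3

import (
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

const s3ErrCodeInternalError = "InternalError"

type RemoteClient struct {
	s3Client   *s3.S3
	bucketName string
	path       string
}

// getObject makes up to 2 attempts, retrying immediately on the
// transient InternalError code and returning whatever the final
// attempt produced.
func (c *RemoteClient) getObject() (*s3.GetObjectOutput, error) {
	var output *s3.GetObjectOutput
	var err error

	for i := 0; i < 2; i++ {
		output, err = c.s3Client.GetObject(&s3.GetObjectInput{
			Bucket: &c.bucketName,
			Key:    &c.path,
		})
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == s3ErrCodeInternalError {
			continue
		}
		break
	}
	return output, err
}
```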
In #15884 we adjusted the plan output to give an explicit command to run
to apply a plan, whereas before this command was just alluded to in the
prose.
Since releasing that, we've received feedback that it's confusing to
include such instructions when Terraform is running in a workflow
automation tool, because such tools usually abstract away exactly which
commands are run and require users to take different actions to
proceed through the workflow.
To accommodate such environments while retaining helpful messages for
normal CLI usage, here we introduce a new environment variable
TF_IN_AUTOMATION which, when set to a non-empty value, is a hint to
Terraform that it isn't being run in an interactive command shell and
it should thus tone down the "next steps" messaging.
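The check itself is trivial; a sketch (the helper name is assumed):

```go
package command

import "os"

// runningInAutomation reports whether Terraform was told, via
// TF_IN_AUTOMATION, that no interactive shell is present; any
// non-empty value counts.
func runningInAutomation() bool {
	return os.Getenv("TF_IN_AUTOMATION") != ""
}
```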
The documentation for this setting is included as part of the "...in
automation" guide since it's not generally useful in other cases. We also
intentionally disclaim comprehensive support for this since we want to
avoid creating an extreme number of "if running in automation..."
codepaths that would increase the testing matrix and hurt maintainability.
The focus is specifically on the output of the three commands we give in
the automation guide, which at present means the following two situations:
* "terraform init" does not include the final paragraphs that suggest
running "terraform plan" and tell you in what situations you might need
to re-run "terraform init".
* "terraform plan" does not include the final paragraphs that either
warn about not specifying "-out=..." or instruct to run
"terraform apply" with the generated plan file.