* backend/remote: do not panic if PrepareConfig or Configure receive null
objects
If a user cancels (ctrl-c) terraform init while it is requesting missing
configuration options for the remote backend, the PrepareConfig and
Configure functions would receive a null cty.Value which would result in
panics. This PR adds a check for null objects to the two functions in
question.
Fixes #23992
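A minimal sketch of the kind of guard this adds, assuming the standard backend PrepareConfig signature (the real attribute validation is elided):

```go
package remote

import (
	"github.com/hashicorp/terraform/tfdiags"
	"github.com/zclconf/go-cty/cty"
)

// Minimal sketch of the null guard; the real method goes on to validate
// the individual attributes after this check.
func (b *Remote) PrepareConfig(obj cty.Value) (cty.Value, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	if obj.IsNull() {
		// An interrupted "terraform init" can hand us a null object here;
		// returning early avoids calling GetAttr on it and panicking.
		return obj, diags
	}
	// ... existing attribute validation continues here ...
	return obj, diags
}
```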
The remote server might choose to skip running cost estimation for a
targeted plan, in which case we'll show a note about it in the UI and then
move on, rather than returning an "invalid status" error.
This new status isn't yet available in the go-tfe library as a constant,
so for now we have the string directly in our switch statement. This is
a pragmatic way to expedite getting the "critical path" of this feature
in place without blocking on changes to ancillary codebases. A subsequent
commit should switch this over to tfe.CostEstimateSkippedDueToTargeting
once that's available in a go-tfe release.
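For illustration, a minimal sketch of matching the raw status until the constant exists; the literal string, helper name, and message text here are assumptions:

```go
package remote

import tfe "github.com/hashicorp/go-tfe"

// costEstimateSkippedNote matches on the raw API status string until
// go-tfe exports tfe.CostEstimateSkippedDueToTargeting.
func costEstimateSkippedNote(status tfe.CostEstimateStatus) (string, bool) {
	switch status {
	case "skipped_due_to_targeting": // assumed raw value, pending a go-tfe constant
		return "Cost estimation was skipped, because it is not supported for targeted plans.", true
	default:
		return "", false
	}
}
```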
Previously we did not allow -target to be used with the remote backend
because there was no way to send the targets to Terraform Cloud/Enterprise
via the API.
There is now an attribute in the request for creating a plan that allows
us to send target addresses, so we'll remove that restriction and copy
the given target addresses into the API request.
This includes a new TargetAddrs field on both Run and RunCreateOptions
which we'll use to send resource addresses that were specified using
-target on the CLI command line when using the remote backend.
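As a rough sketch of the Terraform-side wiring (the withTargets helper is invented for illustration; the real code builds the run options inline):

```go
package remote

import (
	tfe "github.com/hashicorp/go-tfe"

	"github.com/hashicorp/terraform/addrs"
)

// withTargets sketches how -target addresses from the CLI end up in the
// run creation request via the new TargetAddrs field.
func withTargets(opts tfe.RunCreateOptions, targets []addrs.Targetable) tfe.RunCreateOptions {
	for _, addr := range targets {
		opts.TargetAddrs = append(opts.TargetAddrs, addr.String())
	}
	return opts
}
```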
There were some unrelated upstream breaking changes compared to the last
version we had vendored, so this commit also includes some changes to the
backend/remote package to work with this new API, which now requires the
remote backend to be aware of the remote system's opaque workspace id.
Both the differing-serial and differing-lineage protections should be bypassed by the -force flag (in addition to differing resources).
Compared to other backends, we aren't just shipping the state bytes over in a simple payload during the persistence phase of the push command; the force flag added to the Go TFE client needs to be specified at that point.
To avoid changing every method signature of PersistState on the remote client, I added an optional interface that provides a hook to flag the Client as operating in a force-push context. Changing the method signature would be more explicit, at the cost of not being used anywhere else currently; alternatively, the optional interface pattern could be applied to the state itself so that it could be upgraded to support PersistState(force bool) only when needed.
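A rough sketch of that optional-interface pattern, with the Client interface simplified and the hook's method name only illustrative:

```go
package remote

// Client stands in for the existing remote state client interface,
// simplified here to just the relevant method.
type Client interface {
	PersistState(payload []byte) error
}

// ClientForcePusher is the optional interface: clients that support force
// pushing expose a hook to switch into that mode, so PersistState keeps
// its existing signature.
type ClientForcePusher interface {
	Client
	EnableForcePush()
}

// persist sketches a call site: force pushing is requested only when the
// client actually implements the optional interface.
func persist(c Client, payload []byte, force bool) error {
	if force {
		if fp, ok := c.(ClientForcePusher); ok {
			fp.EnableForcePush()
		}
	}
	return c.PersistState(payload)
}
```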
Prior to this, only the resources of the state were checked for changes, not the lineage or the serial. To bring this in line with the documented behavior noted above, those attributes now have a "read" counterpart, just as the state does, and they are checked along with the state to determine whether the state as a whole is unchanged.
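Conceptually, the "unchanged" check now amounts to something like this sketch (field and helper names are approximations):

```go
package remote

import "github.com/hashicorp/terraform/states/statefile"

// unchanged sketches the expanded check: serial and lineage now have
// read-time counterparts that are compared alongside the read state
// before a push is skipped.
func (s *State) unchanged() bool {
	return s.lineage == s.readLineage &&
		s.serial == s.readSerial &&
		statefile.StatesMarshalEqual(s.state, s.readState)
}
```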
Tests were converted to a table-driven format, and coverage was expanded to include WriteStateForMigration and its interaction with the ClientForcePusher type.
* backend/remote: Filter environment variables when loading context
Following up on #23122, the remote system (Terraform Cloud or
Enterprise) serves environment and Terraform variables using a single
type of object. We should only load Terraform variables into the
Terraform context.
Fixes https://github.com/hashicorp/terraform/issues/23283.
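A minimal sketch of that filtering using the go-tfe category constant (the helper name is invented for illustration):

```go
package remote

import tfe "github.com/hashicorp/go-tfe"

// terraformVariables narrows the mixed variable list returned by the
// remote API down to Terraform-category variables, since environment
// variables must not be loaded into the Terraform context.
func terraformVariables(list *tfe.VariableList) []*tfe.Variable {
	var vars []*tfe.Variable
	for _, v := range list.Items {
		if v.Category == tfe.CategoryTerraform {
			vars = append(vars, v)
		}
	}
	return vars
}
```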
For remote operations, the remote system (Terraform Cloud or Enterprise)
writes the stored variable values into a .tfvars file before running the
remote copy of Terraform CLI.
By contrast, for operations that only run locally (like
"terraform import"), we fetch the stored variable values from the remote
API and add them into the set of available variables directly as part
of creating the local execution context.
Previously in the local-only case we were assuming that all stored
variables are strings, which isn't true: the Terraform Cloud/Enterprise UI
allows users to specify that a particular variable is given as an HCL
expression, in which case the correct behavior is to parse and evaluate
the expression to obtain the final value.
This also addresses a related issue whereby previously we were forcing
all sensitive values to be represented as a special string "<sensitive>".
That leads to type checking errors for any variable specified as having
a type other than string, so instead here we use an unknown value as a
placeholder so that type checking can pass.
Unpopulated sensitive values may cause errors downstream though, so we'll
also produce a warning for each of them to let the user know that those
variables are not available for local-only operations. It's a warning
rather than an error so that operations that don't rely on known values
for those variables can potentially complete successfully.
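Sketched in code, the per-variable decision is roughly as follows; diagnostics plumbing and the warning are elided, the helper name is invented, and import paths reflect the current HCL library:

```go
package remote

import (
	tfe "github.com/hashicorp/go-tfe"
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

// remoteVariableValue sketches how a stored variable becomes a local value.
func remoteVariableValue(v *tfe.Variable) (cty.Value, hcl.Diagnostics) {
	if v.Sensitive {
		// The value isn't available locally, so use an unknown placeholder
		// that can satisfy any declared type; the real code also emits a
		// warning that the variable is unpopulated.
		return cty.DynamicVal, nil
	}
	if v.HCL {
		// The workspace marks this value as an HCL expression, so parse and
		// evaluate it rather than treating it as a literal string.
		expr, diags := hclsyntax.ParseExpression([]byte(v.Value), v.Key, hcl.Pos{Line: 1, Column: 1})
		if diags.HasErrors() {
			return cty.DynamicVal, diags
		}
		val, moreDiags := expr.Value(nil)
		return val, append(diags, moreDiags...)
	}
	return cty.StringVal(v.Value), nil
}
```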
This can potentially produce errors in situations that would've been
silently ignored before: if a remote variable is marked as being HCL
syntax but is not valid HCL then it will now fail parsing at this early
stage, whereas previously it would've just passed through as a string
and failed only if the operation tried to interpret it as a non-string.
However, in situations like these the remote operations like
"terraform plan" would already have been failing with an equivalent
error message anyway, so it's unlikely that any existing workspace that
is being used for routine operations would have such a broken
configuration.
Some commands don't use variables at all or use them in a way that doesn't
require them to all be fully valid and consistent. For those, we don't
want to fetch variable values from the remote system and try to validate
them because that's wasteful and likely to cause unnecessary error
messages.
Furthermore, the variables endpoint in Terraform Cloud and Enterprise only
works for personal access tokens, so it's important that we don't assume
we can _always_ use it. If we do, then we'll see problems when commands
are run inside Terraform Cloud and Enterprise remote execution contexts,
where the variables map always comes back as empty.
The remote backend uses backend.ParseVariableValues locally only to decide
if the user seems to be trying to use -var or -var-file options locally,
since those are not supported for the remote backend.
Other than detecting those, we don't actually have any need to use the
results of backend.ParseVariableValues, and so it's better for us to
ignore any errors it produces itself and prefer to just send a
potentially-invalid request to the remote system and let the remote system
be responsible for validating it.
This then avoids issues caused by the fact that when remote operations are
in use the local system does not have all of the required context: it
can't see which environment variables will be set in the remote execution
context nor which variables the remote system will set using its own
generated -var-file based on the workspace stored variables.
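As a sketch, the local check amounts to something like this (the helper name and error wording are invented; the real code reports via diagnostics rather than a plain error):

```go
package remote

import (
	"fmt"

	"github.com/hashicorp/terraform/backend"
	"github.com/hashicorp/terraform/configs"
	"github.com/hashicorp/terraform/terraform"
)

// checkNoLocalVariables uses backend.ParseVariableValues only to detect
// -var / -var-file usage; its parse diagnostics are deliberately discarded
// because the remote system performs the real validation.
func checkNoLocalVariables(raw map[string]backend.UnparsedVariableValue, decls map[string]*configs.Variable) error {
	values, _ := backend.ParseVariableValues(raw, decls) // errors ignored on purpose
	for name, v := range values {
		switch v.SourceType {
		case terraform.ValueFromCLIArg, terraform.ValueFromNamedFile:
			return fmt.Errorf("variable %q was set with -var or -var-file, which the remote backend does not support; set it in the remote workspace instead", name)
		}
	}
	return nil
}
```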
Properly wait for cost estimation to finish running before outputting
the results. We wait 500 milliseconds between checks rather than backing
off exponentially, because we are not in a run queue; at the point we're
waiting, we expect cost estimation to run in a timely manner.
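A minimal sketch of that polling loop (the helper name is invented; the status constants are from go-tfe):

```go
package remote

import (
	"context"
	"time"

	tfe "github.com/hashicorp/go-tfe"
)

// waitForCostEstimate polls on a flat 500ms interval, since the estimate
// is expected to complete promptly rather than sit in a run queue.
func waitForCostEstimate(ctx context.Context, client *tfe.Client, id string) (*tfe.CostEstimate, error) {
	for {
		ce, err := client.CostEstimates.Read(ctx, id)
		if err != nil {
			return nil, err
		}
		switch ce.Status {
		case tfe.CostEstimatePending, tfe.CostEstimateQueued:
			// Still waiting; poll again shortly.
		default:
			return ce, nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```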
Previously, terraform was returning a potentially-misleading error
message in response to anything other than a 404 from the
b.client.Workspaces.Read operation. This PR simplifies Terraform's error
message with the intent of encouraging those who encounter it to focus
on the error message returned from the tfe client.
The added test is odd, and a bit hacky, and possibly overkill.
When a TFC workspace is configured without a VCS root but with a
working directory, and a user runs `terraform init` from that same
directory, TFC uploads the entire configuration directory, not only the
user's cwd. This is not obvious to the user, so we add a descriptive
message explaining what is being uploaded, and why.
* backend/enhanced: start with absolute config path
We recently started normalizing the config path before all "command"
operations, which was necessary for consistency but had unexpected
consequences for remote backend operations, specifically when a VCS root
with a working directory is configured.
This PR de-normalizes the path back to an absolute path.
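In essence, a sketch of the fix, assuming op is the backend.Operation in scope:

```go
// Expand the normalized (relative) path back to an absolute path before
// the remote backend uses it, so the VCS-root / working-directory
// handling sees the full configuration directory.
configDir, err := filepath.Abs(op.ConfigDir)
if err != nil {
	return nil, fmt.Errorf("error expanding configuration path: %w", err)
}
op.ConfigDir = configDir
```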
* Check the error and add a test
It turned out all required logic was already present, so I just needed to add a test for this specific use case.
When changes are made and we fail to upload the state, we should not
try to unlock the workspace. Leaving the workspace locked is a good
indication that something went wrong, and it also prevents other changes
from being applied before the newest state is properly uploaded.
Additionally, we now output the lock ID when a lock or force-unlock
action fails.
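Sketched, with the state manager calls simplified and variable names illustrative:

```go
// On a failed state upload the workspace is deliberately left locked, and
// the lock ID is included when reporting a failed unlock.
if err := stateMgr.PersistState(); err != nil {
	// Do not unlock here: the held lock signals that the newest state was
	// not uploaded and blocks further changes until that is resolved.
	return fmt.Errorf("failed to upload state: %w", err)
}
if err := stateMgr.Unlock(lockID); err != nil {
	return fmt.Errorf("failed to unlock the workspace (lock ID: %q): %w", lockID, err)
}
```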