* terraform/context: use new addrs.Provider as map key in provider factories
* added NewLegacyProviderType and LegacyString funcs to make it explicit that these are temporary placeholders
This PR introduces a new concept, provider fully-qualified name (FQN), encapsulated by the `addrs.Provider` struct.
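As a rough sketch of the shape this introduces (illustrative only; the exact committed fields and function bodies differ):

```go
package addrs

import svchost "github.com/hashicorp/terraform-svchost"

// Provider encapsulates a provider fully-qualified name (FQN).
type Provider struct {
	Type      string
	Namespace string
	Hostname  svchost.Hostname
}

// NewLegacyProviderType wraps a bare provider type name in a Provider
// value, making it explicit that the namespace is a temporary placeholder.
func NewLegacyProviderType(name string) Provider {
	return Provider{
		Type:      name,
		Namespace: "-", // placeholder until real namespaces are wired through
		Hostname:  svchost.Hostname("registry.terraform.io"),
	}
}

// LegacyString returns just the type name, for callers that still expect
// the old single-string provider identifier.
func (p Provider) LegacyString() string {
	return p.Type
}
```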
* backend/remote: Filter environment variables when loading context
Following up on #23122, the remote system (Terraform Cloud or
Enterprise) serves environment and Terraform variables using a single
type of object. We should only load Terraform variables into the
Terraform context.
Fixes https://github.com/hashicorp/terraform/issues/23283.
During the Terraform 0.12 work we briefly had a partial update of the old
Terraform 0.11 (and prior) diff renderer that could work with the new
plan structure, but could produce only partial results.
We switched to the new plan implementation prior to release, but the
"terraform show" command was left calling into the old partial
implementation, and thus produced incomplete results when rendering a
saved plan.
Here we instead use the plan rendering logic from the "terraform plan"
command, making the output of both identical.
Unfortunately, due to the current backend architecture that logic lives
inside the local backend package, and it contains some business logic
around state and schema wrangling that would make it inappropriate to move
wholesale into the command/format package. To allow for a low-risk fix to
the "terraform show" output, here we avoid some more severe refactoring by
just exporting the rendering functionality in a way that allows the
"terraform show" command to call into it.
In future we'd like to move all of the code that actually writes to the
output into the "command" package so that the roles of these components
are better segregated, but that is too big a change to block fixing this
issue.
For remote operations, the remote system (Terraform Cloud or Enterprise)
writes the stored variable values into a .tfvars file before running the
remote copy of Terraform CLI.
By contrast, for operations that only run locally (like
"terraform import"), we fetch the stored variable values from the remote
API and add them into the set of available variables directly as part
of creating the local execution context.
Previously in the local-only case we were assuming that all stored
variables are strings, which isn't true: the Terraform Cloud/Enterprise UI
allows users to specify that a particular variable is given as an HCL
expression, in which case the correct behavior is to parse and evaluate
the expression to obtain the final value.
This also addresses a related issue whereby previously we were forcing
all sensitive values to be represented as a special string "<sensitive>".
That leads to type checking errors for any variable specified as having
a type other than string, so instead here we use an unknown value as a
placeholder so that type checking can pass.
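A hedged sketch of both behaviors (the helper name is hypothetical; the variable fields follow the go-tfe client):

```go
import (
	tfe "github.com/hashicorp/go-tfe"
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

// decodeStoredVariable shows the two cases: sensitive values become
// unknown placeholders, and HCL-flagged values are parsed and evaluated
// rather than passed through as strings.
func decodeStoredVariable(v *tfe.Variable) (cty.Value, hcl.Diagnostics) {
	if v.Sensitive {
		// The real value is unavailable locally; an unknown value still
		// satisfies type checking for any declared variable type.
		return cty.DynamicVal, nil
	}
	if !v.HCL {
		return cty.StringVal(v.Value), nil
	}
	expr, diags := hclsyntax.ParseExpression([]byte(v.Value), v.Key, hcl.Pos{Line: 1, Column: 1})
	if diags.HasErrors() {
		return cty.DynamicVal, diags
	}
	val, moreDiags := expr.Value(nil) // constant expression: no variables or functions in scope
	return val, append(diags, moreDiags...)
}
```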
Unpopulated sensitive values may cause errors downstream though, so we'll
also produce a warning for each of them to let the user know that those
variables are not available for local-only operations. It's a warning
rather than an error so that operations that don't rely on known values
for those variables can potentially complete successfully.
This can potentially produce errors in situations that would've been
silently ignored before: if a remote variable is marked as being HCL
syntax but is not valid HCL then it will now fail parsing at this early
stage, whereas previously it would've just passed through as a string
and failed only if the operation tried to interpret it as a non-string.
However, in situations like these the remote operations like
"terraform plan" would already have been failing with an equivalent
error message anyway, so it's unlikely that any existing workspace that
is being used for routine operations would have such a broken
configuration.
Some commands don't use variables at all or use them in a way that doesn't
require them to all be fully valid and consistent. For those, we don't
want to fetch variable values from the remote system and try to validate
them because that's wasteful and likely to cause unnecessary error
messages.
Furthermore, the variables endpoint in Terraform Cloud and Enterprise only
works for personal access tokens, so it's important that we don't assume
we can _always_ use it. If we do, then we'll see problems when commands
are run inside Terraform Cloud and Enterprise remote execution contexts,
where the variables map always comes back as empty.
The remote backend uses backend.ParseVariableValues locally only to decide
if the user seems to be trying to use -var or -var-file options locally,
since those are not supported for the remote backend.
Other than detecting those, we don't actually have any need to use the
results of backend.ParseVariableValues, and so it's better for us to
ignore any errors it produces itself and prefer to just send a
potentially-invalid request to the remote system and let the remote system
be responsible for validating it.
This then avoids issues caused by the fact that when remote operations are
in use the local system does not have all of the required context: it
can't see which environment variables will be set in the remote execution
context nor which variables the remote system will set using its own
generated -var-file based on the workspace stored variables.
Terraform Core expects all variables to be set, but for some ancillary
commands it's fine for them to just be set to placeholders because the
variable values themselves are not key to the command's functionality
as long as the terraform.Context is still self-consistent.
For such commands, rather than prompting for interactive input for
required variables we'll just stub them out as unknowns to reflect that
they are placeholders for values that a user would normally need to
provide.
This achieves a similar effect to how these commands behaved before, but
without the tendency to produce a slightly invalid terraform.Context that
would fail in strange ways when asked to run certain operations.
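A minimal sketch of the stubbing, assuming the terraform package's input-value types (the helper name is hypothetical):

```go
import (
	"github.com/hashicorp/terraform/configs"
	"github.com/hashicorp/terraform/terraform"
	"github.com/zclconf/go-cty/cty"
)

// stubAllVariables gives every declared variable an unknown placeholder,
// keeping the terraform.Context self-consistent without real values.
func stubAllVariables(decls map[string]*configs.Variable) terraform.InputValues {
	vals := make(terraform.InputValues, len(decls))
	for name := range decls {
		vals[name] = &terraform.InputValue{
			Value:      cty.UnknownVal(cty.DynamicPseudoType),
			SourceType: terraform.ValueFromInput,
		}
	}
	return vals
}
```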
During the 0.12 work we intended to move all of the variable value
collection logic into the UI layer (command package and backend packages)
and present them all together as a unified data structure to Terraform
Core. However, we didn't quite succeed because the interactive prompts
for unset required variables were still being handled _after_ calling
into Terraform Core.
Here we complete that earlier work by moving the interactive prompts for
variables out into the UI layer too, thus allowing us to handle final
validation of the variables all together in one place and do so in the UI
layer where we have the most context still available about where all of
these values are coming from.
This allows us to fix a problem where previously disabling input with
-input=false on the command line could cause Terraform Core to receive an
incomplete set of variable values, and fail with a bad error message.
As a consequence of this refactoring, the scope of terraform.Context.Input
is now reduced to only gathering provider configuration arguments. Ideally
that too would move into the UI layer somehow in a future commit, but
that's a problem for another day.
Previously we were using the experimental HCL 2 repository, but now we'll
shift over to the v2 import path within the main HCL repository as part of
actually releasing HCL 2.0 as stable.
This is a mechanical search/replace to the new import paths. It also
switches to the v2.0.0 release of HCL, which includes some new code that
Terraform didn't previously have but should not change any behavior that
matters for Terraform's purposes.
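For example, code that previously imported from the experimental repository now uses the v2 path:

```go
// before: the experimental HCL2 repository
import (
	"github.com/hashicorp/hcl2/hcl"
	"github.com/hashicorp/hcl2/hcl/hclsyntax"
)

// after: the v2 import path within the main HCL repository
import (
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)
```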
For the moment the experimental HCL2 repository is still an indirect
dependency via terraform-config-inspect, so it remains in our go.sum and
vendor directories for the moment. Because terraform-config-inspect uses
a much smaller subset of the HCL2 functionality, this does still manage
to prune the vendor directory a little. A subsequent release of
terraform-config-inspect should allow us to completely remove that old
repository in a future commit.
* backend/remote-state/s3/backend_state.go: Prior to this commit, the terraform s3 backend did
not paginate calls to s3 when finding workspaces, which resulted in workspaces 'disappearing'
once they were switched away from, even though the state file still exists. This is due to the
ListBucket operation defaulting MaxItems to 1000, so terraform s3 backends that contained
more than 1000 workspaces did not function as expected. This commit rectifies the situation by
paginating calls to s3 when finding workspaces.
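A sketch of the paginated call, assuming the AWS SDK for Go v1 (the helper name is hypothetical):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// listWorkspaceKeys keeps requesting pages until the final one, instead
// of stopping at the single-call default of 1000 keys.
func listWorkspaceKeys(svc *s3.S3, bucket, prefix string) ([]string, error) {
	var keys []string
	err := svc.ListObjectsPages(&s3.ListObjectsInput{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix),
	}, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		for _, obj := range page.Contents {
			keys = append(keys, *obj.Key)
		}
		return true // continue until lastPage
	})
	return keys, err
}
```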
Signed-off-by: Collin J. Doering <collin@rekahsoft.ca>
Properly wait for cost estimation to finish running before outputting
the results. Waits 500 milliseconds between checks, rather than backing
off exponentially, because we are not in a run queue. At the point we're
waiting, we expect cost estimation to be run in a timely manner.
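A sketch of the flat polling loop; the go-tfe service and status names here are assumptions, not the exact backend code:

```go
import (
	"context"
	"time"

	tfe "github.com/hashicorp/go-tfe"
)

// waitForCostEstimate polls at a fixed interval until the estimate is done.
func waitForCostEstimate(ctx context.Context, client *tfe.Client, id string) (*tfe.CostEstimate, error) {
	for {
		ce, err := client.CostEstimates.Read(ctx, id)
		if err != nil {
			return nil, err
		}
		switch ce.Status {
		case tfe.CostEstimateFinished, tfe.CostEstimateErrored:
			return ce, nil
		default:
			// Not in a run queue here, so a fixed delay is reasonable.
			time.Sleep(500 * time.Millisecond)
		}
	}
}
```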
The acceptance tests for etcdv3, oss and manta were not validating
required env variables, choosing to assume that if one was running
acceptance tests they had already configured the credentials.
It was not always clear if this was a bug in the tests or the provider,
so I opted to make the tests fail faster when required attributes were
unset (or "").
The documentation for the -target option warns that it's intended for
exceptional circumstances only and not for routine use, but that's not a
very prominent location for that warning and so some users miss it.
Here we make the warning more prominent by including it directly in the
Terraform output when -target is in use. We first warn during planning
that the plan might be incomplete, and then warn again after apply
concludes and direct the user to run "terraform plan" to make sure that
there are no further changes outstanding. The latter message is intended
to reinforce that -target should only be a one-off operation and that you
should always run without it soon after to ensure that the workspace is
left in a consistent, converged state.
Previously, terraform was returning a potentially-misleading error
message in response to anything other than a 404 from the
b.client.Workspaces.Read operation. This PR simplifies Terraform's error
message with the intent of encouraging those who encounter it to focus
on the error message returned from the tfe client.
The added test is odd, a bit hacky, and possibly overkill.
When a TFC workspace is configured without a VCS root, and with a
working directory, and a user is running `terraform init` from that same
directory, TFC uploads the entire configuration directory, not only the
user's cwd. This is not obvious to the user, so we are adding a descriptive
message explaining what is being uploaded, and why.
* backend/enhanced: start with absolute config path
We recently started normalizing the config path before all "command"
operations, which was necessary for consistency but had unexpected
consequences for remote backend operations, specifically when a VCS root
and a working directory are configured.
This PR de-normalizes the path back to an absolute path.
* Check the error and add a test
It turned out all required logic was already present, so I just needed to add a test for this specific use case.
Support for cross-domain authentication has been added and mapping
environment variables to the correct domain settings has been
fixed.
In addition, support for clouds.yaml files has been added.
This unusual situation isn't supposed to arise in normal use, but it can
come up in practice in some edge-case scenarios where Terraform fails in
a severe way during a create_before_destroy.
Some earlier versions of Terraform also had bugs in their handling of
deposed objects, so this may also arise if upgrading from one of those
older versions with some leftover deposed objects in the state.
When changes are made and we failed to upload the state, we should not
try to unlock the workspace. Leaving the workspace locked is a good
indication something went wrong and also prevents other changes from
being applied before the newest state is properly uploaded.
Additionally we now output the lock ID when a lock or force-unlock
action failed.
When failing to write the state, the local backend writes the state to a local file called `errored.tfstate`. Previously it would do so by creating a new state file which would use a new serial and lineage. By exporting the existing state file and directly assigning the new state, the serial and lineage are preserved.
Users who in previous versions relied on our lack of checking for
whether variables are declared may have seen an overwhelming number of
warnings when running Terraform v0.12.
Here we cap that number at three specific warnings and then one general
warning, so we can still give a specific source location for the first
couple (for users who have genuinely made a typo) but summarize away a
large number for those who are seeing this because they've not yet
migrated to using environment variables.
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
Previously we checked can-update in order to determine if a user had the
required permissions to apply a run, but that wasn't sufficient. So we
added a new permission, can-queue-apply, that we now use instead.
The handling of slashes was broken around listing workspaces in
workspace_key_prefix. While it worked in most places by splitting an
extra time around the spurious slashes, it failed in the case that the
prefix ended with a slash of its own.
A test was temporarily added to verify that the backend works with the
unusual keys, but rather than risking silent breakage around prefixes
with trailing slashes, we also add validation to prevent users from
entering keys with trailing slashes at all.
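A sketch of the added validation, assuming helper/schema's validate-function signature (the message wording is illustrative):

```go
"workspace_key_prefix": &schema.Schema{
	Type:     schema.TypeString,
	Optional: true,
	ValidateFunc: func(v interface{}, k string) ([]string, []error) {
		prefix := v.(string)
		if strings.HasPrefix(prefix, "/") || strings.HasSuffix(prefix, "/") {
			return nil, []error{fmt.Errorf("%s must not start or end with '/'", k)}
		}
		return nil, nil
	},
},
```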
The init error was output deep in the backend by detecting a
special ResourceProviderError and formatted directly to the CLI.
We now create some Diagnostics closer to where the problem is detected and
pass them back through the normal diagnostic flow. While the output
isn't as nice yet, this restores the helpful error message and makes the
code easier to maintain. Better formatting can be handled later.
The API surface area is much smaller when we use the remote backend for remote state only.
So in order to try and prevent any backwards incompatibilities when TF runs inside of TFE, we’ve split up the discovery services into `state.v2` (which can be used for remote state only configurations, so when running in TFE) and `tfe.v2.1` (which can be used for all remote configurations).
This changes the contract for `PlanResourceChange` so that the provider is now responsible
for populating all default values during plan, including inserting any unknown values for
defaults it will fill in at apply time.
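A minimal sketch of a provider honoring the new contract (the helper name is hypothetical; types are from the configschema and cty packages):

```go
// planWithDefaults plans computed attributes the provider can't know yet
// as unknown values, to be resolved at apply time.
func planWithDefaults(schema *configschema.Block, proposed cty.Value) cty.Value {
	vals := proposed.AsValueMap()
	for name, attrS := range schema.Attributes {
		if attrS.Computed && vals[name].IsNull() {
			vals[name] = cty.UnknownVal(attrS.Type) // filled in at apply time
		}
	}
	return cty.ObjectVal(vals)
}
```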
We've changed the contract for PlanResourceChange to now require the
provider to populate any default values (including unknowns) it wants to
set for computed arguments, so our mock provider here now needs to be a
little more complex to deal with that.
This fixes several of the tests in this package. A minor change to
TestLocal_applyEmptyDirDestroy was required to make it properly configure
the mock provider so PlanResourceChange can access the schema.
In Terraform 0.11 and earlier we just silently ignored undeclared
variables in -var-file and the automatically-loaded .tfvars files. This
was a bad user experience for anyone who made a typo in a variable name
and got no feedback about it, so we made this an error for 0.12.
However, several users are now relying on the silent-ignore behavior for
automation scenarios where they pass the same .tfvars file to all
configurations in their organization and expect Terraform to ignore any
settings that are not relevant to a specific configuration. We never
intentionally supported that, but we don't want to immediately break that
workflow during 0.12 upgrade.
As a compromise, then, we'll make this a warning for v0.12.0 that contains
a deprecation notice suggesting to move to using environment variables
for this "cross-configuration variables" use-case. We don't produce errors
for undeclared variables in environment variables, even though that
potentially causes the same UX annoyance as ignoring them in vars files,
because environment variables are assumed to live in the user's session
and thus it would be very inconvenient to have to unset such variables
when moving between directories. Their "ambientness" makes them a better
fit for these automatically-assigned general variable values that may or
may not be used by a particular configuration.
This can revert to being an error in a future major release, after users
have had the opportunity to migrate their automation solutions over to
use environment variables.
We don't seem to have any tests covering this specific situation right
now. That isn't ideal, but this change is so straightforward that it would
be relatively expensive to build new targeted test cases for it and so
I instead just hand-tested that it is indeed now producing a warning where
we were previously producing an error. Hopefully if there is any more
substantial work done on this codepath in future that will be our prompt
to add some unit tests for this.
The AWS Go SDK automatically provides a default request retryer with exponential backoff. It is invoked by setting `MaxRetries`: leaving it `nil` defaults to 3, while the terraform-aws-provider `config.Client()` sets `MaxRetries` to 0 unless explicitly configured above 0. Previously we were not setting this configuration ourselves and therefore not invoking the default request retryer.
The default retryer already handles HTTP error codes above 500, including S3's InternalError response, so the extraneous handling can be removed. This will also start automatically retrying many additional cases, such as temporary networking issues or other retryable AWS service responses.
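A sketch of the relevant knob in the AWS SDK for Go v1 (the wrapper function is hypothetical):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newSession shows the behavior described above: a nil MaxRetries falls
// back to the SDK default of 3, while 0 disables the default retryer.
func newSession(maxRetries int) (*session.Session, error) {
	return session.NewSession(&aws.Config{
		MaxRetries: aws.Int(maxRetries),
	})
}
```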
Changes:
* s3/backend: Add `max_retries` argument
* s3/backend: Enhance S3 NoSuchBucket error to include additional information
* Upgrading to 2.0.0 of github.com/hashicorp/go-azure-helpers
* Support for authenticating using Azure CLI
* backend/azurerm: support for authenticating using the Azure CLI
This PR improves the error handling so we can provide better feedback about any service discovery errors that occurred.
Additionally it adds logic to test for specific versions when discovering a service using `service.vN`. This will enable more informational errors which can indicate any version incompatibilities.
This change enables a few related use cases:
* AWS has partitions outside Commercial, GovCloud (US), and China, which are the only endpoints automatically handled by the AWS Go SDK. DynamoDB locking and credential verification can not currently be enabled in those regions.
* Allows usage of any DynamoDB-compatible API for state locking
* Allows usage of any IAM/STS-compatible API for credential verification
Use the entitlements to a) determine if the organization exists, and b) as a means to select which backend to use (the local backend with remote state, or the remote backend).
Variable values are marshalled with an explicit type of
cty.DynamicPseudoType, but were being decoded using `ImpliedType` to
try and guess the type. This was causing errors because `ImpliedType`
does not expect to find a late-bound value.
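A sketch of the fix using the go-cty JSON package (the function wrapper is hypothetical):

```go
import (
	"github.com/zclconf/go-cty/cty"
	ctyjson "github.com/zclconf/go-cty/cty/json"
)

// decodeVariable decodes with the same explicit type the value was
// marshalled with, rather than guessing via ImpliedType.
func decodeVariable(raw []byte) (cty.Value, error) {
	// Before (broken): ctyjson.ImpliedType(raw) guessed a concrete type,
	// which fails for late-bound (dynamic) values.
	return ctyjson.Unmarshal(raw, cty.DynamicPseudoType)
}
```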
If an instance object in state has an earlier schema version number then
it is likely that the schema we're holding won't be able to decode the
raw data that is stored. Instead, we must ask the provider to upgrade it
for us first, which might also include translating it from flatmap form
if it was last updated with a Terraform version earlier than v0.12.
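A hedged sketch of that upgrade call, using the providers package request and response types (the wrapper function is hypothetical):

```go
// upgradeIfNeeded asks the provider to upgrade a stored object before we
// try to decode it with the current schema.
func upgradeIfNeeded(p providers.Interface, typeName string, storedVersion uint64, attrsJSON []byte) (cty.Value, error) {
	resp := p.UpgradeResourceState(providers.UpgradeResourceStateRequest{
		TypeName:     typeName,
		Version:      int64(storedVersion), // the int64/uint64 "seam" noted below
		RawStateJSON: attrsJSON,
	})
	if resp.Diagnostics.HasErrors() {
		return cty.NilVal, resp.Diagnostics.Err()
	}
	return resp.UpgradedState, nil
}
```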
This ends up being a "seam" between our use of int64 for schema versions
in the providers package and uint64 everywhere else. We intend to
standardize on int64 everywhere eventually, but for now this remains
consistent with existing usage in each layer to keep the type conversion
noise contained here and avoid mass-updates to other Terraform components
at this time.
This also includes a minor change to the test helpers for the
backend/local package, which were inexplicably setting a SchemaVersion of
1 on the basic test state but setting the mock schema version to zero,
creating an invalid situation where the state would need to be downgraded.
Add support for the new `force-unlock` API and at the same time improve
performance a bit by reducing the amount of API calls made when using
the remote backend for state storage only.
Previously we were fetching these from the provider but then immediately
discarding the version numbers because the schema API had nowhere to put
them.
To avoid a late-breaking change to the internal structure of
terraform.ProviderSchema (which is constructed directly all over the
tests) we're retaining the resource type schemas in a new map alongside
the existing one with the same keys, rather than just switching to
using the providers.Schema struct directly there.
The methods that return resource type schemas now return two values,
intentionally creating a little API friction here so each new caller can
be reminded to think about whether they need to do something with the
schema version, though it can be ignored by many callers.
Since this was a breaking change to the Schemas API anyway, this also
fixes another API wart where there was a separate method for fetching
managed vs. data resource types and thus every caller ended up having a
switch statement on "mode". Now we just accept mode as an argument and
do the switch statement within the single SchemaForResourceType method.
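A sketch of a call site under the reworked API (names assumed from the terraform and addrs packages):

```go
schema, version := schemas.SchemaForResourceType(addrs.ManagedResourceMode, "aws_instance")
if schema == nil {
	// unknown resource type for this provider
}
_ = version // many callers can deliberately ignore the version
```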
* backend/azurerm: removing the `arm_` prefix from keys
* removing the deprecated fields test because the deprecation makes it fail
* authentication: support for custom resource manager endpoints
* Adding debug prefixes to the log statements
* adding acceptance tests for msi auth
* including the resource group name in the tests
* backend/azurerm: support for authenticating using a SAS Token
* resolving merge conflicts
* moving the defer to prior to the error
* backend/azurerm: support for authenticating via msi
* adding acceptance tests for msi auth
* including the resource group name in the tests
* support for using the test client via msi
* vendor updates
- updating to v21.3.0 of github.com/Azure/azure-sdk-for-go
- updating to v10.15.4 of github.com/Azure/go-autorest
- vendoring github.com/hashicorp/go-azure-helpers @ 0.1.1
* backend/azurerm: refactoring to use the new auth package
- refactoring the backend to use a shared client via the new auth package
- adding tests covering both Service Principal and Access Key auth
- support for authenticating using a proxy
- rewriting the backend documentation to include examples of both authentication types
* switching to use the built-in logging function
* documenting it's also possible to retrieve the access key from an env var
In order to support free organizations, we need a way to load the `remote` backend and then, depending on the offering/plan in use, enable or disable remote operations.
In other words, we should be able to dynamically fall back to the `local` backend if needed, after first configuring the `remote` backend.
To make this work we need to change the way this was done previously when the env var `TF_FORCE_LOCAL_BACKEND` was set. The clear difference, of course, is that the env var is available on startup, while the offering/plan in use is only known after we are able to connect to TFE.
The changes to how we handle setting the state path on the local backend
broke the heuristic we were using here for detecting migration from one
local backend to another with the same state path, which would by default
end up deleting the state altogether after migration.
We now use the StatePaths method to do this, which takes into account
both the default values and any settings that have been set.
Additionally this addresses a flaw in the old method which could
potentially have deleted all non-default workspace state files if the
"path" setting were changed without also changing the "workspace_dir"
setting. This new approach is conservative because it will preserve all
of the files if any one overlaps.
This was failing because we now handle the settings for the local backend
a little differently as a result of decoding it with the HCL2 machinery.
Specifically, the backend.State* fields are now assumed to be what is
given in configuration, and any CLI overrides are maintained separately
in OverrideState* fields so that they can be imposed "just in time" in
StatePaths.
This is particularly important because OverrideStatePath (when set) is
used regardless of workspace name, while StatePath is a suitable value
only for the "default" workspace, with others needing to be constructed
from StateWorkspaceDir instead.
Newer versions of the retryablehttp package use a context, so we need to
add that in our custom `CheckRetry` function.
In addition I removed the `return true, nil` to continue retrying in
case of an error, and instead directly call the `DefaultRetryPolicy`.
This is because the `DefaultRetryPolicy` will now also take the context
into consideration.
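A sketch of the updated hook (assuming the retryablehttp package's `CheckRetry` signature):

```go
retryClient := retryablehttp.NewClient()
retryClient.CheckRetry = func(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if ctx.Err() != nil {
		return false, ctx.Err() // stop retrying once the context is canceled
	}
	// Delegate to DefaultRetryPolicy instead of returning (true, nil),
	// since it now also takes the context into consideration.
	return retryablehttp.DefaultRetryPolicy(ctx, resp, err)
}
```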
This new source type should be used for variables loaded from .tfvars files that were explicitly passed as command line arguments (e.g. -var-file=foo.tfvars)
This work was done against APIs that were already changed in the branch
before work began, and so it doesn't apply to the v0.12 development work.
To allow v0.12 to merge down to master, we'll revert this work out for now
and then re-introduce equivalent functionality in later commits that works
against the new APIs.
There are several steps here and a number of them can include reaching out
to remote servers or executing local processes, so it's helpful to have
some trace logs to better narrow down causes of errors and hangs during
this step.
In earlier refactoring we skipped implementing prior state safety checks,
propagating the target addresses from plan, and verifying that all of
the providers are exactly the same from the plan being created.
This change reinstates those checks, including a new error message for
the "stale plan" situation.
If we don't do this, we can't produce any output when applying a saved
plan file.
Here we also introduce a check to the local backend's ReportResult
function so that it won't panic if CLI init is skipped, although that
will no longer happen in the apply-from-file case due to the change
described in the previous paragraph.
We can't generate a valid plan file without a backend configuration to
write into it, but it's the responsibility of the caller (the command
package) to manage the backend configuration mechanism, so we require it
to tell us what to write here.
This feels a little strange because the backend in principle knows its
own config, but in practice the backend only knows the _processed_ version
of the config, not the raw configuration value that was used to configure
it.
converted the existing testPlanState() from terraform.State to
states.State to fix various plan tests.
reverted the "bandaid" in plans/planfile/tfplan.go - at this moment the
backend tests do not include backend configuration, and so the planfile
package can write the plan file but not read it back in. That will be
revisited in a separate track of work.
I have no confidence in the change to plans/planfile/tfplan.go. The
tests were passing an empty backend config, which planfile was able to
write to a file but not read from the same file. This change let me move
past that and it did not break any tests in the planfile package, but I
am concerned that it introduces undesired behavior.
The state manager refactoring in an earlier commit was reflected in the
implementations of these backends, but not in their tests. This gets us
back to a state where the backend tests will compile, and gets _most_ of
them passing again, with a few exceptions that will be addressed in a
subsequent commit.
Addresses an odd state where the priorV of an object to be changed is
known but null.
While this situation should not happen, it seemed prudent to ensure that
core is resilient to providers sending incorrect values (which might
also occur with manually edited state).
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.
If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.
As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
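A sketch of the plan-time decision that replaces the old single Replace action (constant names follow the plans package):

```go
// chooseReplaceAction picks between the two explicit replace variants.
func chooseReplaceAction(createBeforeDestroy bool) plans.Action {
	if createBeforeDestroy {
		return plans.CreateThenDelete
	}
	return plans.DeleteThenCreate
}
```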
We're not yet showing outputs in the rendered diff, so it doesn't make
sense to count them for the purpose of deciding which change action
symbols to include in the legend.
This is a light adaptation of our earlier prototype of structural diff
rendering, as a starting point for what we'll actually ship. This is not
consistent with the latest mocks, so will need some additional work before
it is ready, but integrating this allows us to at least see the plan
contents while fixing up remaining issues elsewhere.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
This is a rather-messy, complex change to get the "command" package
building again against the new backend API that was updated for
the new configuration loader.
A lot of this is mechanical rewriting to the new API, but
meta_config.go and meta_backend.go in particular saw some major
changes to interface with the new loader APIs and to deal with
the change in order of steps in the backend API.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
If we get a diagnostic message that references a source range, and if the
source code for the referenced file is available, we'll show a snippet of
the source code with the source range highlighted.
At the moment we have no cache of source code, so in practice this
codepath can never be visited. Callers to format.Diagnostic will be
gradually updated in subsequent commits.
In some cases this is needed to keep the UX clean and to make sure any remote exit codes are passed through to the local process.
The most obvious example for this is when using the “remote” backend. This backend runs Terraform remotely and streams the output back to the local terminal.
When an error occurs during the remote execution, all the needed error information will already be in the streamed output. So if we then return an error ourselves, users will get the same errors twice.
By allowing the backend to specify the correct exit code, the UX remains the same while preserving the correct exit codes.
This is a bit of a hack to support the `-no-color` flag while we don’t have an option to set run variables.
That is also the reason why the original method is commented out instead of deleted. This will be reverted when TFE starts supporting run variables.
If the policy passes, only show that instead of the full check output to prevent cluttering the output. So a passing policy will only show:
-----------------------------------------------
Organization policy check: passed
-----------------------------------------------
This commit adds:
- support for `-lock-timeout`
- custom error message when a 404 is received
- canceling a pending run when TF is Ctrl-C’ed
- discard a run when the apply is not approved
The pagination info of a list call that returns an empty list contains:
```go
CurrentPage: 1
TotalPages: 0
```
So checking if we have seen all pages using `CurrentPage == TotalPages` will not work and will result in an endless loop.
The tests are updated so they will fail (timeout after 1m) if this is handled incorrectly.
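A sketch of a loop that terminates correctly even for empty results, assuming the go-tfe client's pagination fields (the helper name is hypothetical):

```go
// listWorkspaces pages through all workspaces in an organization.
func listWorkspaces(ctx context.Context, client *tfe.Client, organization string) ([]string, error) {
	var names []string
	options := tfe.WorkspaceListOptions{}
	for {
		wl, err := client.Workspaces.List(ctx, organization, options)
		if err != nil {
			return nil, err
		}
		for _, w := range wl.Items {
			names = append(names, w.Name)
		}
		// An empty result has CurrentPage: 1 and TotalPages: 0, so an
		// equality check would loop forever; >= terminates correctly.
		if wl.CurrentPage >= wl.TotalPages {
			return names, nil
		}
		options.PageNumber = wl.NextPage
	}
}
```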
To prevent making unnecessary heavy calls to the backend, we should use a search query to limit the result.
But even if we use a search query, we should still use the pagination details to make sure we retrieved all items.
In TFE you can configure a workspace to use a custom working directory. When determining which directory that needs to be uploaded to TFE, this working directory should be taken into account to make sure we are uploading the correct root directory for the workspace.
Certain backends (currently only the `remote` backend) do not support using both the default and named workspaces at the same time.
To make the migration easier for users that currently use both types of workspaces, this commit adds logic to ask the user for a new workspace name during the migration process.