A more native integration for Terraform Cloud and its CLI-driven run workflow.
Instead of a backend, users declare a special block in the top-level terraform settings
block to configure Terraform Cloud, then run terraform init.
Full documentation will follow in later commits.
When the 'select the exact version if possible' behavior was added, the
version check below it was never updated to take the newly updated
version into account, resulting in a failed version check even though the
remote workspace had been updated to the necessary version.
E2E tests including cost estimation should indeed be added, but the
default case should be disabled; lots of cycles lost to pointless cost
estimates on null and random resources.
The delete + assign at the end of `Update` and `UpdateByID` are meant to handle
renaming a workspace — (remove old name), (insert new name).
However, `UpdateByID` was doing (remove new name), (insert new name) and leaving
the old name in place. This commit changes it to match `Update` by grabbing the
original name off the workspace object _before_ potentially renaming it.
Alas, there's not a very good way to test the message we're supposed to print to
the console in this situation; we just don't appear to have a mock terminal that
the test can read from. But we can at least test that the function returns
without erroring under the exact conditions where it was erroring before.
Note that the behaviors of mc.Workspaces.Update and UpdateByID were already
starting to drift, so I consolidated their actual attribute update logic into a
helper function before they drifted much further.
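A rough sketch of that consolidated helper (go-tfe's option types are real; the mock's type, map, and method names here are hypothetical):

```go
import tfe "github.com/hashicorp/go-tfe"

// mockWorkspaces stands in for the cloud package's mock client in this sketch.
type mockWorkspaces struct {
	workspaceNames map[string]*tfe.Workspace // keyed by workspace name
}

// updateWorkspaceAttributes is the shared attribute-update logic used by both
// Update and UpdateByID in this sketch.
func (m *mockWorkspaces) updateWorkspaceAttributes(w *tfe.Workspace, options tfe.WorkspaceUpdateOptions) {
	originalName := w.Name // capture the name *before* any rename

	if options.Name != nil {
		w.Name = *options.Name
	}
	if options.TerraformVersion != nil {
		w.TerraformVersion = *options.TerraformVersion
	}
	// ... other attributes ...

	if w.Name != originalName {
		delete(m.workspaceNames, originalName) // remove old name
		m.workspaceNames[w.Name] = w           // insert new name
	}
}
```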
Previously, if the remote TFC/TFE instance didn't happen to have a tool_version
record whose name exactly matched the value of `tfversion.String()`, Terraform
would be completely blocked from using the `terraform workspace new` command
(when configured with the tags strategy): the API would give a 422 to the
whole create request.
This commit changes the StateMgr() function to do the work in two passes; first
create the workspace (which should work fine regardless), THEN update the
Terraform version and print a warning to the terminal if it fails (which 99% of
the time is a benign failure with little impact on your future CLI usage).
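Sketched against go-tfe, the two passes look something like the following (the surrounding function and the warning output are simplified and hypothetical):

```go
import (
	"context"
	"fmt"
	"os"

	tfe "github.com/hashicorp/go-tfe"
)

func createWorkspaceThenSetVersion(ctx context.Context, client *tfe.Client, org, name, localVersion string) (*tfe.Workspace, error) {
	// First pass: create the workspace, which should succeed regardless of
	// which tool_version records the TFC/TFE instance has.
	workspace, err := client.Workspaces.Create(ctx, org, tfe.WorkspaceCreateOptions{
		Name: tfe.String(name),
	})
	if err != nil {
		return nil, err
	}

	// Second pass: try to pin the workspace to the local Terraform version.
	// A failure here is almost always benign, so warn rather than abort.
	_, err = client.Workspaces.UpdateByID(ctx, workspace.ID, tfe.WorkspaceUpdateOptions{
		TerraformVersion: tfe.String(localVersion),
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "Warning: failed to set Terraform version on workspace %q: %s\n", name, err)
	}
	return workspace, nil
}
```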
There are actually a few different ways to get to this message.
1. Blank state: no previous terraform applied. Start with a cloud block.
2. Implicit local: start with no backend specified. This actually goes
   through the same code execution path as the first scenario.
3. Explicit local: start with a backend local block that has been
   applied, then change from the local backend to a cloud block. This
   will recognize the existing state, and is a different path through the
   code in the meta backend.
This commit handles the last case. The messaging has also been tweaked, and an
end-to-end test is included as well.
Explicit version strings are actually also version constraints! And the special
comparisons we were doing to allow a range of compatible versions can also be
expressed as version constraints.
Bonus: also simplify the way we handle version check errors, by composing the
messages inline and only extracting the repetitive parts into a function.
The cloud backend (and remote before it) previously expected a TFC workspace's
`terraform-version` attribute to be either the magic string `"latest"` or an
explicit semver value. But a workspace might have a version constraint instead
(like `~> 1.1.0`), in which case the version check would blow up.
This commit checks whether `terraform-version` is a valid version constraint
before erroring out, and if so, returns success if the local version meets the
constraint.
Because it's not practical to deeply introspect the slice of version space
defined by a constraint, this check is slightly less robust than the version
comparisons below it:
- It can give a false OK on open-ended constraints like `>= 1.1.0`. Say you're
running 1.3.0, it changed the state format, and the TFE instance admin has
not yet added any 1.3.x Terraform versions; your workspace will now break.
- It will give a false not-OK when using different minor versions within a range
that we know to be compatible, e.g. remote constraint of `~> 0.15.0` and local
version of 1.1.0.
- This would be totally useless with the pre-0.14 versions of Terraform, where
patch releases could change state format... but we're not going back in time
to add this feature to them anyway.
Still, in the most common likely case (`~> x.y.z`), it'll complain at you (with
an error you can choose to override) if you're not using the same minor version,
and that seems proportionate, useful, and expected.
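The constraint handling described above maps onto hashicorp/go-version roughly like this (a simplified sketch, not the backend's actual check):

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// remoteVersionSatisfied reports whether the local Terraform version satisfies
// the workspace's terraform-version value when that value is a constraint.
func remoteVersionSatisfied(remoteConstraint, localVersion string) (bool, error) {
	constraints, err := version.NewConstraint(remoteConstraint)
	if err != nil {
		return false, fmt.Errorf("invalid version constraint %q: %s", remoteConstraint, err)
	}
	local, err := version.NewVersion(localVersion)
	if err != nil {
		return false, err
	}
	return constraints.Check(local), nil
}

func main() {
	ok, _ := remoteVersionSatisfied("~> 1.1.0", "1.1.5")
	fmt.Println(ok) // true: same minor version

	ok, _ = remoteVersionSatisfied("~> 0.15.0", "1.1.0")
	fmt.Println(ok) // false: the false not-OK case described above
}
```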
When a user runs `terraform refresh` we give them an error message that
tells them to run `terraform apply -refresh-state`. We could just run
that command for them, though. That is what this PR does.
1. ParseDeclaredValues: parses unparsed variables into terraform.InputValues
2. ProbeUndeclaredVariableValues: compares variable declarations with unparsed values to warn/error about undeclared variables
* determining source or destination to cloud
* handling single to single state migrations to cloud,
using a name strategy or a tags strategy
* Add end-to-end tests for state migration.
These changes remove all of the preexisting version checking for
individual features, wiping the slate clean with an overall minimum
requirement of a future TFP-API-Version 2.5, which at the time of this
writing is expected to be TFE v202112-1.
It also actually provides that expected TFE version as an actionable
error message, rather than generically saying that it isn't supported or
using the somewhat opaque API version header.
The 'tfe' service was having new versions appended to it to denote each new
'feature' implemented by a new 'service'. This quickly proved not to be
scalable, as adding a discovery document entry for every feature is
untenable.
The new mechanism added instead was checking the TFP-API-Version header on
requests for a version.
So we'll remove the separation here between the different tfe service
'versions' and the separate 'state' service and Just Use TFE, along with the
TFP-API-Version header for all feature versioning.
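A rough sketch of the kind of check this enables, gating on the header's value rather than on per-feature discovery entries (the constant and function names here are illustrative, not the backend's actual ones):

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// The minimum TFP-API-Version this integration requires (2.5, per the text above).
const minimumAPIVersion = "2.5"

// supportsCloudIntegration reports whether the remote API version meets the
// overall minimum, rather than checking individual feature 'services'.
func supportsCloudIntegration(apiVersionHeader string) (bool, error) {
	remote, err := version.NewVersion(apiVersionHeader)
	if err != nil {
		return false, fmt.Errorf("invalid TFP-API-Version %q: %s", apiVersionHeader, err)
	}
	minimum := version.Must(version.NewVersion(minimumAPIVersion))
	return !remote.LessThan(minimum), nil
}

func main() {
	ok, _ := supportsCloudIntegration("2.6")
	fmt.Println(ok) // true
}
```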
The previous conservative guarantee that we would not make backwards
incompatible changes to the state file format until at least Terraform
1.1 can now be extended. Terraform 0.14 through 1.1 will be able to
interoperably use state files, so we can update the remote backend
version compatibility check accordingly.
This is a port of https://github.com/hashicorp/terraform/pull/29645
This changes the 'name' strategy to always align the locally configured
workspace name and the remote Terraform Cloud workspace, rather than
implicitly using the 'default' unnamed workspace.
What this essentially means is that the Cloud integration does not fully
support workspaces when configured for a single TFC workspace (as was
the case with the 'remote' backend), but *does* use the
backend.Workspaces() interface to allow for normal local behaviors like
terraform.workspace to resolve to the correct name. It does this by
always setting the local workspace name when the 'name' strategy is
used, as a part of initialization.
Part of the diff here is exporting all the previously unexported types
for mapping workspaces. The command package (and init in particular)
needs to be able to handle setting the local workspace in this
particular scenario.
Implementing this test was quite a rabbit hole: in order to satisfy
backendTestBackendStates(), the workspaces returned from
backend.Workspaces() must match exactly, and the shortcut taken to test
pagination in 3cc58813f0 created an impossible circumstance that was
papered over by the fact that prefix filtering is done client-side, not by
the API as it should be. Tagging does not rely on client-side filtering,
and expects that the request made to the TFC API returns exactly those
workspaces with the given tags.
These changes include a better way to test pagination, wherein we actually
create over a page's worth of valid workspaces in the mock client and
implement a simplified pagination behavior to match how the TFC API
actually works.
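The simplified pagination behavior amounts to slicing the full workspace list the way the real API would; a sketch (the helper itself is hypothetical, the tfe.Pagination fields are go-tfe's):

```go
import tfe "github.com/hashicorp/go-tfe"

// paginateWorkspaces slices the mock's full result set using 1-based page
// numbers, mirroring how the TFC API reports pagination metadata.
func paginateWorkspaces(all []*tfe.Workspace, pageNumber, pageSize int) ([]*tfe.Workspace, *tfe.Pagination) {
	if pageNumber < 1 {
		pageNumber = 1
	}
	if pageSize < 1 {
		pageSize = 20 // default page size
	}

	start := (pageNumber - 1) * pageSize
	if start > len(all) {
		start = len(all)
	}
	end := start + pageSize
	if end > len(all) {
		end = len(all)
	}

	totalPages := (len(all) + pageSize - 1) / pageSize
	return all[start:end], &tfe.Pagination{
		CurrentPage: pageNumber,
		TotalPages:  totalPages,
		TotalCount:  len(all),
	}
}
```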
A mostly cosmetic change: the fields 'workspace' and 'prefix' don't really
describe well to a caller what they are, so change these to use a
workspaceMapping struct to convey that they implement workspace mapping
strategies from CLI -> TFC.
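Roughly the shape of the change (the struct fields and strategy names here are illustrative):

```go
// workspaceMapping bundles the CLI-side inputs that select how local
// workspaces map onto TFC workspaces.
type workspaceMapping struct {
	name   string
	prefix string
}

type workspaceStrategy string

const (
	workspaceNameStrategy    workspaceStrategy = "name"
	workspacePrefixStrategy  workspaceStrategy = "prefix"
	workspaceInvalidStrategy workspaceStrategy = "invalid"
)

// strategy makes the mapping strategy explicit at call sites instead of
// having callers inspect two loose fields.
func (wm workspaceMapping) strategy() workspaceStrategy {
	switch {
	case wm.name != "" && wm.prefix == "":
		return workspaceNameStrategy
	case wm.name == "" && wm.prefix != "":
		return workspacePrefixStrategy
	default:
		return workspaceInvalidStrategy
	}
}
```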
These changes include additions to fulfill the interface for the client
mock, plus moving all that logic (which needn't be duplicated across
both the remote and cloud packages) over to the cloud package under a
dedicated mock client file.
These changes allow cloud blocks to be overridden by backend blocks and
vice versa; the logic follows the current backend behavior of a block
overriding a preceding block in full, with no merges.
This restriction is temporary. Overrides should be allowed, but have the
added complexity of needing to also override a 'backend' block, so this
work is being deferred for now.
With the alternative block introduced in 7bf9b2c7b, this removes the
ability to explicitly declare the 'cloud' backend. The literal backend
interface is an implementation detail and no longer a user-level
concept when using Terraform Cloud.
This is a replacement declaration for using Terraform Cloud as a remote
backend, leaving the literal backend as an implementation detail and not
a user-level concept.
The cloud package intends to implement a new integration for
Terraform Cloud/Enterprise. The purpose of this integration is to better
support TFC users; it will shed some overly generic UX and architecture,
make behavior changes that would otherwise be backwards incompatible in the
remote backend, and pay down technical debt - all of which are vestiges
from before Terraform Cloud existed.
This initial commit is largely a porting of the existing 'remote'
backend, which will serve as an underlying implementation detail and not
be a typical user-level backend. This is because re-implementing the
literal backend interface is orthogonal to the purpose of this integration,
and it can always be migrated away from later.
As this backend is considered an implementation detail, it will not be
registered as a declarable backend. Within these changes it is registered
anyway, for ease of initial development and a clean diff.
When running `terraform init` against a backend with multiple
workspaces, none of which are the currently indicated local workspace,
Terraform prompts the user to choose a workspace from the list. In
automation, using the `-input=false` argument should disable asking for
input, but previously would hang instead.
When an explicit backend is configured with a configuration which has
not yet been initialized, running `terraform init` performs a state
migration to fetch the remotely stored state in order to operate on it.
Like the previous bug introduced by the recent provider diagnostics
change, this code path was not correctly configured to enable init mode
for the backend, which resulted in a fatal error during init when the
cache dir is deleted.
Setting the `Init` backend option allows this code path to continue
without error when first initializing the backend for state migration.
The new e2e test fails without this change.
When migrating state to an existing Terraform Cloud workspace using the
remote backend, we check the remote version is compatible with the local
one by default.
This commit fixes two bugs in this code:
- If using the "name" strategy for the remote backend, the list of
destination workspaces is empty. This resulted in no version checking
of the remote workspace, and we fell back to the string equality
check.
- The user-specified CLI flag `-ignore-remote-version` was not being
applied for the state migration version checking.
The init command needs to initialize a backend, in order to access
state, in turn to derive provider requirements from state. The backend
initialization step requires building provider factories, which
previously would fail if a lockfile was present without a corresponding
local provider cache.
This commit ensures that in this situation only, errors with the
provider factories are temporarily ignored. This allows us to continue
to initialize the backend, fetch providers, and then report any errors
as necessary.
We test that a deleted provider cache results in an error when running
terraform plan, but previously did not test that running init (as
instructed) would resolve the issue. This (failing) e2e test adds that
step.
We introduced this experiment to gather feedback, and the feedback we saw
led to us deciding to do another round of design work before we move
forward with something to meet this use-case.
In addition to being experimental, this has only been included in alpha
releases so far, and so on both counts it is not protected by the
Terraform v1.0 Compatibility Promises.
The -lock and -lock-timeout flags were removed prior to the release of
1.0 as they were thought to have no effect. This is not true in the case
of state migrations when changing backends. This commit restores these
flags, and adds test coverage for locking during backend state
migration.
Also update the help output describing other boolean flags, showing the
argument as the user would type it rather than the default behavior.
There is a race between the MockSource and ShutdownCh which sometimes
causes this test to fail. Add a HangingSource implementation of Source
which hangs until the context is cancelled, so that there is always time
for a user-initiated shutdown to trigger the cancellation code path
under test.
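The idea of such a source, reduced to a sketch (the interface shape is illustrative, not the real Source interface):

```go
import "context"

// HangingSource never returns a value on its own; it blocks until the test's
// context is cancelled, so the user-initiated shutdown path always wins.
type HangingSource struct{}

func (s *HangingSource) Fetch(ctx context.Context) ([]byte, error) {
	<-ctx.Done() // hang here until the shutdown cancels the context
	return nil, ctx.Err()
}
```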
We don't use this library anywhere else in Terraform, and this backend was
using it only for trivial helpers that are easy to express inline anyway.
The new direct code is also type-checkable, whereas these helper functions
seem to be written using reflection.
This gives us one fewer dependency to worry about and makes the test code
for this backend follow a similar assertions style as the rest of this
codebase.
Ensure that we still check for a stale plan even when it was created
with no previous state.
Create separate errors for incorrect lineage vs incorrect serial.
To prevent confusion when applying a first plan multiple times, only
report it as a stale plan rather than different lineage.
Previously we would reject attempts to delete a workspace if its state
contained any resources at all, even if none of the resources had any
resource instance objects associated with them.
Nowadays there isn't any situation where the normal Terraform workflow
will leave behind resource husks, and so this isn't as problematic as it
might've been in the v0.12 era, but nonetheless what we actually care
about for this check is whether there might be any remote objects that
this state is tracking, and for that it's more precise to look for
non-nil resource instance objects, rather than whole resources.
This also includes some adjustments to our error messaging to give more
information about the problem and to use terminology more consistent with
how we currently talk about this situation in our documentation and
elsewhere in the UI.
We were also using the old State.HasResources method as part of some of
our tests. I considered preserving it to avoid changing the behavior of
those tests, but the new check seemed close enough to the intent of those
tests that it wasn't worth maintaining this method that wouldn't be used
in any main code anymore. I've therefore updated those tests to use
the new HasResourceInstanceObjects method instead.
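The more precise check, in essence (types simplified into stand-ins rather than the real states package):

```go
// Simplified stand-ins for the relevant parts of the state model:
type instanceState struct {
	Current interface{}            // current object, if any
	Deposed map[string]interface{} // deposed objects, if any
}
type resourceState struct{ Instances map[string]*instanceState }
type moduleState struct{ Resources map[string]*resourceState }
type state struct{ Modules map[string]*moduleState }

// hasResourceInstanceObjects reports whether the state is tracking any remote
// objects: a non-nil current object or any deposed object counts, while empty
// resource "husks" do not.
func (s *state) hasResourceInstanceObjects() bool {
	for _, mod := range s.Modules {
		for _, rs := range mod.Resources {
			for _, is := range rs.Instances {
				if is.Current != nil || len(is.Deposed) > 0 {
					return true
				}
			}
		}
	}
	return false
}
```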
When a test uses multiple instances of the same provider, we may need to
have separate objects to prevent overwriting of the MockProvider state.
Create a completely new MockProvider in each factory function call
rather than re-using the original provider value.
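The difference, sketched with an illustrative mock type rather than the real MockProvider:

```go
type mockProvider struct {
	ConfigureCalled bool
	// ... other recorded state ...
}

// Before: every factory call returned the same shared value, so one provider
// instance's recorded calls leaked into another's state.
func sharedFactory(shared *mockProvider) func() (*mockProvider, error) {
	return func() (*mockProvider, error) { return shared, nil }
}

// After: a completely new mock per factory call.
func freshFactory() func() (*mockProvider, error) {
	return func() (*mockProvider, error) { return &mockProvider{}, nil }
}
```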
Running the tool this way ensures that we'll always run the version
selected by our go.mod file, rather than whatever happened to be available
in $GOPATH/bin on the system where we're running this.
This change caused some contexts to now be using a newer version of
staticcheck with additional checks, and so this commit also includes some
changes to quiet the new warnings without any change in overall behavior.
A snapshotDir tracks its current position as part of its state, so we need
to use it via pointer rather than value so that Readdirnames can actually
update that position, or else we'll just get stuck at position zero.
In practice this wasn't hurting anything because we only call Readdir once
on our snapshots, to read the whole directory at once. Still, it's nice to
fix to avoid a gotcha for future maintenance.
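The underlying Go gotcha, reduced to a sketch outside the real configload code:

```go
import "io"

type snapshotDir struct {
	entries []string
	pos     int // current read position; must be shared between calls
}

// With a value receiver, pos would advance on a copy and every call would
// start again from zero; the pointer receiver updates the caller's state.
func (d *snapshotDir) Readdirnames(n int) ([]string, error) {
	if d.pos >= len(d.entries) {
		return nil, io.EOF
	}
	end := len(d.entries)
	if n > 0 && d.pos+n < end {
		end = d.pos + n
	}
	names := d.entries[d.pos:end]
	d.pos = end
	return names, nil
}
```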
Make the state match the fixture config. The old test was not
technically invalid, but because it caused multiple instances of the
provider to be created, they were backed by the same MockProvider value,
resulting in the `*Called` fields interfering.
The destroy plan should not require a configured provider (the complete
configuration is not evaluated, so they cannot be configured).
Deposed instances were being refreshed during the destroy plan, because
this instance type is only ever destroyed and shares the same
implementation between plan and walkPlanDestroy. Skip refreshing during
walkPlanDestroy.
Have the MockProvider ensure that Configure is always called before any
methods that may require a configured provider.
Ensure the MockProvider *Called fields are zeroed out when re-using the
provider instance.
We have various mechanisms that aim to ensure that the installed provider
plugins are consistent with the lock file and that the lock file is
consistent with the provider requirements, and we do have existing unit
tests for them, but all of those cases mock or fake out at least part of
the process, and in the past that's caused us to miss usability
regressions, where we still catch the error but do so at the wrong layer
and thus generate error messages lacking useful additional context.
Here we'll add some new end-to-end tests to supplement the existing unit
tests, making sure things work as expected when we assemble the system
together as we would in a release. These tests cover a number of different
ways in which the plugin selections can grow inconsistent.
These new tests all run only when we're in a context where we're allowed
to access the network, because they exercise the real plugin installer
codepath. We could technically build this to use a local filesystem mirror
or other such override to avoid that, but the point here is to make sure
we see the expected behavior in the main case, and so it's worth the
small additional cost of downloading the null provider from the real
registry.
In the original incarnation of Meta.providerFactories, its result fed into
a Meta.contextOpts whose signature didn't allow it to return an error
directly, and so we had compromised by making the provider factory
functions themselves return errors once called.
Subsequent work made Meta.contextOpts need to return an error anyway, but
at the time we neglected to update our handling of the providerFactories
result, having it still defer the error handling until we finally
instantiate a provider.
Although that did ultimately get the expected result anyway, the error
ended up being reported from deep in the guts of a Terraform Core graph
walk, in whichever concurrently-visited graph node happened to try to
instantiate the plugin first. This meant that the exact phrasing of the
error message would vary between runs, and the reporting codepath didn't
have enough context to give an actionable suggestion on how to proceed.
In this commit we make Meta.contextOpts pass through directly any error
that Meta.providerFactories produces, and then make Meta.providerFactories
produce a special error type so that Meta.Backend can ultimately return
a user-friendly diagnostic message containing a specific suggestion to
run "terraform init", along with a short explanation of what a provider
plugin is.
The reliance here on an implied contract between two functions that are
not directly connected in the callstack is non-ideal, and so hopefully
we'll revisit this further in future work on the overall architecture of
the CLI layer. To try to make this robust in the meantime though, I wrote
it to use the errors.As function to potentially unwrap a wrapped version
of our special error type, in case one of the intervening layers is
changed at some point to wrap the downstream error before returning it.
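The implied contract between the two functions looks roughly like this (the error type and message wording are illustrative, not the command package's actual ones):

```go
import (
	"errors"
	"fmt"
)

// errProviderNotInstalled is the kind of special error type providerFactories
// could produce for Meta.Backend to recognize later.
type errProviderNotInstalled struct {
	Provider string
}

func (e errProviderNotInstalled) Error() string {
	return fmt.Sprintf("provider %s is not installed", e.Provider)
}

// diagnoseFactoryError turns the low-level error into an actionable message.
func diagnoseFactoryError(err error) string {
	var notInstalled errProviderNotInstalled
	// errors.As unwraps any wrapping added by intervening layers, keeping the
	// contract robust if a layer starts wrapping the error before returning it.
	if errors.As(err, &notInstalled) {
		return fmt.Sprintf(
			"%s. Provider plugins are external programs that Terraform uses to manage resources; run \"terraform init\" to install them.",
			notInstalled.Error(),
		)
	}
	return err.Error()
}
```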
The codepath for AllAttributesNull was not correct for any nested object
types with collections, and should create single null values for the
correct NestingMode rather than a single object with null attributes.
Since there is no reason to descend into nested object types to create
null values, we can drop the AllAttributesNull function altogether and
create null values as needed during ProposedNew.
The corresponding AllBlockAttributesNull was only called internally in 1
location, and simply delegated to schema.EmptyValue. We can reduce the
package surface area by dropping that function too and calling
EmptyValue directly.
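For a nested collection, the distinction is between one null value of the full nested type and a known collection whose objects merely have null attributes; in go-cty terms:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	nestedType := cty.List(cty.Object(map[string]cty.Type{"name": cty.String}))

	// A single null value of the whole nested collection type.
	nullList := cty.NullVal(nestedType)

	// A known list containing one object whose attributes are each null.
	listOfNullAttrs := cty.ListVal([]cty.Value{
		cty.ObjectVal(map[string]cty.Value{"name": cty.NullVal(cty.String)}),
	})

	fmt.Println(nullList.IsNull(), listOfNullAttrs.IsNull()) // true false
}
```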
In historical versions of Terraform the responsibility to check this was
inside the terraform.NewContext function, along with various other
assorted concerns that made that function particularly complicated.
More recently, we reduced the responsibility of the "terraform" package
only to instantiating particular named plugins, assuming that its caller
is responsible for selecting appropriate versions of any providers that
_are_ external. However, until this commit we were just assuming that
"terraform init" had correctly selected appropriate plugins and recorded
them in the lock file, and so nothing was dealing with the problem of
ensuring that there haven't been any changes to the lock file or config
since the most recent "terraform init" which would cause us to need to
re-evaluate those decisions.
Part of the game here is to slightly extend the role of the dependency
locks object to also carry information about a subset of provider
addresses whose lock entries we're intentionally disregarding as part of
the various little edge-case features we have for overriding providers:
dev_overrides, "unmanaged providers", and the testing overrides in our
own unit tests. This is an in-memory-only annotation, never included in
the serialized plan files on disk.
I had originally intended to create a new package to encapsulate all of
this plugin-selection logic, including both the version constraint
checking here and also the handling of the provider factory functions, but
as an interim step I've just made version constraint consistency checks
the responsibility of the backend/local package, which means that we'll
always catch problems as part of preparing for local operations, while
not imposing these additional checks on commands that _don't_ run local
operations, such as "terraform apply" when in remote operations mode.
We recently removed the legacy way we used to track the SHA256 hashes of
individual provider executables as part of a plans.Plan, because these
days we want to track the checksums of entire provider packages rather
than just the executable.
In order to achieve that new goal, we can save a copy of the dependency
lock information inside the plan file. This follows our existing precedent
of using exactly the same serialization formats we'd normally use for
this information, and thus we can reuse the existing models and
serializers and be confident we won't lose any detail in the round-trip.
As of this commit there's not yet anything actually making use of this
mechanism. In a subsequent commit we'll teach the main callers that write
and read plan files to include and expect (respectively) dependency
information, verifying that the available providers still match by the
time we're applying the plan.
Previously the planfile.Create function had probably already accumulated
too many positional arguments, and I'm intending to add another one in
a subsequent commit and so this is preparation to make the callsites more
readable (subjectively) and make it clearer how we can extend this
function's arguments to include further components in a plan file.
There's no difference in observable functionality here. This is just
passing the same set of arguments in a slightly different way.
Historically the responsibility for making sure that all of the available
providers are of suitable versions and match the appropriate checksums has
been split rather inexplicably over multiple different layers, with some
of the checks happening as late as creating a terraform.Context.
We're gradually iterating towards making that all be handled in one place,
but in this step we're just cleaning up some old remnants from the
main "terraform" package, which is now no longer responsible for any
version or checksum verification and instead just assumes it's been
provided with suitable factory functions by its caller.
We do still have a pre-check here to make sure that we at least have a
factory function for each plugin the configuration seems to depend on,
because if we don't do that up front then it ends up getting caught
instead deep inside the Terraform runtime, often inside a concurrent
graph walk and thus it's not deterministic which codepath will happen to
catch it on a particular run.
As of this commit, this actually does leave some holes in our checks: the
command package is using the dependency lock file to make sure we have
exactly the provider packages we expect (exact versions and checksums),
which is the most crucial part, but we don't yet have any spot where
we make sure that the lock file is consistent with the current
configuration, and we are no longer preserving the provider checksums as
part of a saved plan.
Both of those will come in subsequent commits. While it's unusual to have
a series of commits that briefly subtracts functionality and then adds
back in equivalent functionality later, the lock file checking is the only
part that's crucial for security reasons, with everything else mainly just
being to give better feedback when folks seem to be using Terraform
incorrectly. The other bits are therefore mostly cosmetic and okay to be
absent briefly as we work towards a better design that is clearer about
where that responsibility belongs.
Only use depends_on ancestors for transitive dependencies when we're not
pointed directly at a resource. We can't be much more precise here,
since in order to maintain our guarantee that data sources will wait for
explicit dependencies, if those dependencies happen to be a module,
output, or variable, we have to find some upstream managed resource in
order to check for a planned change.
We must ensure that the terraform required_version is checked as early
as possible, so that new configuration constructs don't cause init to
fail without indicating the version is incompatible.
The loadConfig call before the earlyconfig parsing seems to be unneeded,
and we can delay that to de-tangle it from installing the modules which
may have their own constraints.
TODO: it seems that loadConfig should be able to handle returning the
version constraints in the same manner as loadSingleModule.
Our current implementation of destroy planning includes secretly running a
normal plan first, in order to get its effect of refreshing the state.
Previously our warning about colliding moves would betray that
implementation detail because we'd return it from both of our planning
operations here and thus show the message twice. That would also have
happened in theory for any other warnings emitted by both plan operations,
but it's the move collision warning that made it immediately visible.
We'll now only return warnings from the initial plan if we're also
returning errors from that plan, and thus the warnings of both plans can
never mix together into the same diags and thus we'll avoid duplicating
any warnings.
This does mean that we'd lose any warnings which might hypothetically
emerge only from the hidden normal plan and not from the subsequent
destroy plan, but we'll accept that as an okay tradeoff here because those
warnings are likely to not be super relevant to the destroy case anyway,
or else we'd emit them from the destroy-plan walk too.
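The rule reduces to a sketch like this (the helper is hypothetical; tfdiags.Diagnostics is the real diagnostics type, though the import path may differ by version):

```go
import "github.com/hashicorp/terraform/internal/tfdiags"

// mergeDestroyPlanDiags keeps the hidden normal plan's diagnostics (warnings
// included) only when that plan also failed; otherwise its warnings are
// dropped so they can't duplicate the destroy plan's own.
func mergeDestroyPlanDiags(refreshDiags, destroyDiags tfdiags.Diagnostics) tfdiags.Diagnostics {
	var diags tfdiags.Diagnostics
	if refreshDiags.HasErrors() {
		return diags.Append(refreshDiags)
	}
	return diags.Append(destroyDiags)
}
```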