When migrating from an explicit local backend to Terraform Cloud, we ask
whether the user wants to migrate the state. If there is no state to migrate,
we should not ask whether they want to migrate the emptiness.
When going from a local backend to Terraform Cloud, if the user skips
`terraform init` and runs `terraform apply` directly, this change gives them
clearer instructions.
When Terraform detects that a user has no workspaces that map to their current configuration, it will prompt the user to create a new workspace and enter a name for it. If the user ignores the prompt and exits it, the legacy backend (terraform.tfstate) will be left in an awkward state:
1. The saved backend config will show a diff for the JSON attributes "serial", "tags" and "hash"
2. `terraform workspace list` will show an empty list
3. `terraform apply` will run successfully using the previous workspace from the previous config, not the one from the current saved backend config
4. The cloud config does not reflect the current working directory
Solution: If the user exits the prompt, the saved backend config should not be updated, because they did not select a new workspace. They are back at the beginning, where they are forced to re-run `terraform init` before proceeding with new changes.
Previously we ended up losing all of the error message detail produced by
the registry address parser, because we treated any registry address
failure as cause to parse the address as a go-getter-style remote address
instead.
That led to terrible feedback in the situation where the user _was_
trying to write a module address but it was invalid in some way.
Although we can't really tighten this up in the default case due to our
compatibility promises, it's never been valid to use the "version" argument
with anything other than a registry address. As a compromise, we'll use the
presence of "version" as a heuristic for user intent to parse the source
address as a registry address, which lets us return a registry-address-specific
error message in that case and give more direct feedback about what was wrong.
This unfortunately won't help someone trying to install from the registry
_without_ a version constraint, but I didn't want to let perfect be the
enemy of the good here, particularly since we recommend using version
constraints with registry modules anyway; indeed, that's one of the main
benefits of using a registry rather than a remote source directly.
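To illustrate the heuristic (the module name and source below are only an
example), a call like this declares `version`, so its `source` is now parsed
strictly as a registry address:
```
module "vpc" {
  # Because "version" is present, this source is parsed as a registry
  # address; if it were malformed, the registry parser's error would
  # now be shown instead of a generic go-getter error.
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"
}
```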
Object values returned from providers have their attributes marked as
sensitive based on the provider schema. This was not fully implemented
for nested attribute types, which is corrected in this commit.
Resource instances removed from the configuration would previously use
the implied provider address. This is correct for default providers, but
incorrect for those from other namespaces or hosts. The fix here is to
use the stored provider config if it is present.
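For example (provider names are illustrative), a provider declared from a
non-default namespace like the one below has an address that differs from the
implied one (hashicorp/custom), which is why the stored provider config must
be used for removed resource instances:
```
terraform {
  required_providers {
    custom = {
      # The implied address for "custom" would be hashicorp/custom,
      # not this namespace, so the stored config must be used instead.
      source = "mycorp/custom"
    }
  }
}
```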
We cannot programmatically migrate workspaces to Terraform Cloud without
prompts, so `-input=false` should not be allowed in those cases.
There are 4 scenarios where we need input from a user to complete
migrating workspaces to Terraform Cloud.
1.) Migrate from a single local workspace to Terraform Cloud
* Terraform config for a local backend. Implicit local (no backend
specified) is fine.
* `terraform init` and `terraform apply`
* Change the Terraform config to use the cloud block
* `terraform init -input=false`
* You should now see an error message
2.) Migrate from a remote backend with a prefix to Terraform Cloud with
tags
* Create a workspace in Terraform Cloud manually. The name should
include a prefix, like "app-one"
* Have the terraform config use `backend "remote"` with a prefix set to
"app-"
* `terraform init` and `terraform apply`
* Update the Terraform config to use a cloud block with `tags = ["app"]`; there should no longer be a prefix defined in the config (see the example configs after this list)
* `terraform init -input=false`
* You should now see an error message
3.) Migrate from multiple local workspaces to a single Terraform Cloud
workspace
* Create one or many local workspaces
* `terraform init` and `terraform apply` in each
* Change the Terraform config to use the cloud block
* `terraform init -input=false`
* You should now see an error message
4.) Migrate to Terraform Cloud and ask for a workspace name
* Create several local workspaces
* `terraform init` and `terraform apply` in each
* Change the Terraform config to use the cloud block with tags
* `terraform init -input=false`
* You should now see an error message
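For reference, here is a sketch of the configuration change described in
scenario 2 (hostname and organization are placeholders):
```
# Before: remote backend selecting workspaces by prefix
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      prefix = "app-"
    }
  }
}

# After: cloud block selecting workspaces by tags
terraform {
  cloud {
    organization = "example-org"

    workspaces {
      tags = ["app"]
    }
  }
}
```
In each scenario, running `terraform init -input=false` against the new
configuration cannot prompt for the required migration decisions, so it errors
instead.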
* Create a function for the logic that assigns a value to the initReason var after changing backend configuration
Create func determineInitReason() for the logic block that assigns a value to the initReason var after the backend/cloud configuration block changes or after migrating to a different type of backend configuration. Also clarify the 'cloud' configuration block message to say 'Terraform Cloud configuration block has changed' instead of 'Terraform Cloud configuration has changed'.
Some of the wording here needed adjusting to reflect the change that backends
now largely represent state snapshot storage (removing the 'enhanced'
designation), and that a 'backend' is not necessarily always present.
This fixes an issue where a user could not disable initialization of the
'cloud' configuration block (as is possible with -backend=false), and also
adds some syntactic sugar around -backend by adding a mutually exclusive
-cloud alias.
Unchanged elements in nested attributes backed by sets were previously
misrendered as empty objects. This commit removes the additional
brackets and adds a count of unchanged elements.
The specialized Terraform Cloud migration process asks right up top
whether the user wants to migrate state, because there are various other
questions contingent on that answer.
Therefore we ought to just honor their earlier answer when we get to the
point of actually doing the state migration, rather than prompting again.
This is tricky because we're otherwise just reusing a codepath that's
common to both modes. Hopefully we can find a better way to do this in
a later commit, but for the moment our main motivation is minimizing risk
to the very next release.
There are a few command line options for "terraform init" which are only
relevant when working with traditional backends. The Cloud integration
previously mostly ignored them, or sometimes misbehaved slightly because they
created an unreasonable situation.
Now we'll catch these and return explicit errors, in order to be clear
that these options are not needed nor supported in Cloud mode.
This just gives a little extra information to work with when trying to
understand why a test failed. It doesn't change what any of the tests are
actually trying to test.
This aims to encapsulate the somewhat-weird logic we currently use to
distinguish between the various "terraform init" situations involving
Terraform Cloud mode, in the hope of making codepaths that branch based
on this slightly easier to read.
This isn't yet used, but uses of it will follow in subsequent commits.
This pull request focuses on removing the prompt to rename the default
workspace when it is empty. Functionality already exists to not migrate
an empty workspace. This commit adds some clarifying language in the
comment where we do the evaluation to know whether to ask for a new name
or not. I also added an end-to-end test, which I should have added to begin
with.
Given: You have multiple explicit local workspaces, and the `default`
workspace is empty.
When: You migrate the workspaces to Terraform Cloud.
Then: Terraform should _not_ ask for a workspace to migrate the
`default` workspace to in Terraform Cloud.
All run variables remain encoded as strings in the API, but will now be expressed as HCL values so they can be evaluated correctly by the remote Terraform run. Previously, only strings were supported.
Examples:
string: `"quoted literal"` (strings must be quoted)
map: `{ foo = "bar" }`
list: `["foo", "bar"]`
bool: `true`
null: `null`
number: `0.0001`
This requires the API to anticipate that all run variables will be HCL values.
Error diags from c.installModules() no longer cause getModules() to exit early.
Whether installModules completed successfully, errored, or was cancelled, we
try to update the manifest as best we can, preferring incomplete information
to none.
Earlier work to make "terraform init" interruptible made the getproviders
package context-aware in order to allow provider installation to be cancelled.
Here we make a similar change for module installation, which is now also
cancellable with SIGINT. This involves plumbing context through initwd and
getmodules. Functions which can make network requests now include a context
parameter whose cancellation cancels those requests.
Since the module installation code is shared, "terraform get" is now
also interruptible during module installation.
* convert uses of workspaces.operations into workspaces.executionMode
The cloud package currently uses a deprecated API on workspaces to determine a workspace's execution mode.
Deprecated: Operations (boolean)
New hotness: Execution mode (string - "local", "remote", or "agent")
More details: https://www.terraform.io/docs/cloud/api/workspaces.html#request-body
All uses of the Operations field coming from the client (within the cloud package) should be converted to the appropriate ExecutionMode equivalent.
Also, we need to update all references to the Operations field in the tests that exercise workspace behavior.
Co-authored-by: Nick Fagerlund <nick.fagerlund@gmail.com>
For Terraform Cloud users using the 'remote' backend, the existing
'pattern' prompt should work just fine - but because their workspaces
are already present in TFC, the 'migration' here is really just
realigning their local workspaces with Terraform Cloud. Instead of
forcing users to do the mental gymnastics of what it means to migrate
from 'prefix' - and because their remote workspaces probably already exist and
already conform to Terraform Cloud's naming concerns - streamline the
process for them and calculate the necessary pattern to migrate as-is,
without any user intervention necessary.
After migrating to TFC with renamed workspaces, automatically select
what was the previous current workspace on behalf of the user. We don't
need to make the user reselect.
Note these changes do break the internal/cloud/e2e tests; they are in a
sad state that needs adjusting anyway, so I'm not updating them for
these changes at this time.
The Meta.backend_C_r_S_unchanged() method was sadly a bit of a mess.
It seems to have originally been used as a method to be called
when the backend is not changing, with an extra assumption that if the
configured backend's hash doesn't match the one in state, surely the
hash should just be updated as an option might have been moved to
command line flags.
However, this function was used throughout this file as 'the method to
load the initialized (but not necessarily configured) backend',
regardless of whether or not it is the same (unchanged). This is in
addition to Meta.backendFromState(), which is used to load the same
thing except in the main codepath of 'init -backend=false'.
These changes separate the concerns of backend_C_r_S_unchanged() by
1) Fetching the saved backend (savedBackend())
2) Updating the hash value in the backend cache when appropriate (either
by leaving it to the caller to do themselves or by calling
updateSavedBackendHash())
This allows migration codepaths to *not* update the hash value until
after a migration has successfully taken place.
Allow `GetResource` to return correctly typed values during validation,
rather than relying on `cty.DynamicVal` as a placeholder. This allows
other dependent expressions to be evaluated more correctly.
* command/format: fix list nested attr diff rendering
Previously, diffs only rendered correctly if all changed elements
appeared before all unchanged elements. Once an unchanged element was
found, all remaining elements were skipped. This usually led to the
output being an empty list with a weird amount of space between the
brackets.
* command/format: improve list nested attr rendering
This makes several changes that make diffs for lists clearer and more
consistent:
* Separate items with brackets instead of only new lines. This better
matches the input syntax and avoids confusion from the first and
last brackets implying there is a single item.
* Render an action symbol for each element of the list
* Use the correct action symbol for new and deleted items
* Fix the alignment of opening and closing brackets
I also refactored the structure so it is similar to the set and map
cases to minimize duplication of the new prints.
* Fix re-use of blockBodyDiffResult struct
Add the `-parallel N` switch to tell the tests to run up to N tests in parallel:
```
TFE_TOKEN=$TFE_TOKEN TFE_HOSTNAME=$TFE_HOSTNAME TF_ACC=1 go test -v \
-tags=e2e ./internal/cloud/e2e/... -parallel 4
```
Previously, `terraform init` was throwing an error if you configured the cloud
block with `tags` and there weren't any tagged workspaces yet. Confusing and
alienating, since that's a fairly normal situation! Basically TFC was
handling an empty list of workspaces worse than other backends, because it
doesn't support an unnamed default workspace.
This commit catches that condition during `Meta.selectBackend()` and asks the
user to pick a name for their first tagged workspace. If they cancel out, we
still error, but if we know what name they want, we can handle it the same way
as a nonexistent workspace specified in `name` -- just pass it to
`Meta.SetWorkspace()`, and let the workspace get implicitly created when
`InitCommand.Run()` eventually calls `StateMgr()`.
Based on feedback during earlier alpha releases, we've decided to move
forward with the current design for the first phase of config-driven
refactoring.
Therefore here we've marked the experiment as concluded with no changes
to the most recent incarnation of the functionality. The other changes
here are all just updating test fixtures to no longer declare that they
are using experimental features.
When using the Terraform Cloud integration - like the 'remote' backend -
resource count output should be suppressed if those counts are being rendered
remotely. This change generalizes that behavior via the shared
BackendWithRemoteTerraformVersion interface.
The current behavior of module input variables is to allow users to
override a default by assigning `null`, which works contrary to the
behavior of resource attributes, and prevents explicitly accepting a
default when the input must be defined in the configuration.
Adding a new variable attribute called `nullable` allows explicitly defining
whether a variable can be set to null. The current default behavior is that of
`nullable=true`.
Setting `nullable=false` in a variable block indicates that the variable
value can never be null. This either requires a non-null input value, or
a non-null default value. In the case of the latter, we also opt-in to
the new behavior of a `null` input value taking the default rather than
overriding it.
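A minimal sketch of the new behavior (the variable name and default are
illustrative):
```
variable "region" {
  type     = string
  default  = "us-east-1"

  # The value may never be null; assigning null from the caller now
  # takes the default instead of overriding it.
  nullable = false
}
```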
In a future language edition where we make `nullable=false` the default,
setting `nullable=true` will allow the legacy behavior of `null`
overriding a default value. The only future configuration in which this
would be required even if the legacy behavior were not desired is when
setting an optional+nullable value. In that case `default=null` would
also be needed and we could therefore imply `nullable=true` without
requiring it in the configuration.
A more native integration for Terraform Cloud and its CLI-driven run workflow.
Instead of a backend, users declare a special block in the top-level terraform settings
block to configure Terraform Cloud, then run `terraform init`.
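As a sketch (organization and workspace names are placeholders), the new block
looks like this:
```
terraform {
  cloud {
    organization = "example-org"

    workspaces {
      name = "my-app-prod"
    }
  }
}
```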
Full documentation will follow in later commits.
When the 'select the exact version if possible' behavior was added, the
version check below it was never updated to take the newly updated version
into account, resulting in a failed version check even after the remote
workspace had been updated to the necessary version.
E2E tests covering cost estimation should indeed be added, but it should be
disabled in the default case; lots of cycles are lost to pointless cost
estimates on null and random resources.
The delete + assign at the end of `Update` and `UpdateByID` is meant to handle
renaming a workspace: (remove old name), (insert new name).
However, `UpdateByID` was doing (remove new name), (insert new name) and leaving
the old name in place. This commit changes it to match `Update` by grabbing the
original name off the workspace object _before_ potentially renaming it.
Alas, there's not a very good way to test the message we're supposed to print to
the console in this situation; we just don't appear to have a mock terminal that
the test can read from. But we can at least test that the function returns
without erroring under the exact conditions where it was erroring before.
Note that the behaviors of mc.Workspaces.Update and UpdateByID were already
starting to drift, so I consolidated their actual attribute update logic into a
helper function before they drifted much further.
Previously, if the remote TFC/TFE instance didn't happen to have a tool_version
record whose name exactly matched the value of `tfversion.String()`, Terraform
would be completely blocked from using the `terraform workspace new` command
(when configured with the tags strategy), because the API would give a 422 to
the whole create request.
This commit changes the StateMgr() function to do the work in two passes: first
create the workspace (which should work fine regardless), THEN update the
Terraform version, printing a warning to the terminal if that fails (which 99%
of the time is a benign failure with little impact on future CLI usage).
There are actually a few different ways to get to this message.
1. Blank state: no previous terraform applied. Start with a cloud block.
2. Implicit local: start with no backend specified. This actually goes
through the same code execution path as the first scenario.
3. Explicit local: start with a backend local block that has been
applied, then change from the local backend to a cloud block. This
will recognize the state, and is a different path through the code in
the meta backend.
This commit handles the last case. The messaging has also been tweaked.
An end-to-end test is included as well.
Explicit version strings are actually also version constraints! And the special
comparisons we were doing to allow a range of compatible versions can also be
expressed as version constraints.
Bonus: also simplify the way we handle version check errors, by composing the
messages inline and only extracting the repetitive parts into a function.
The cloud backend (and remote before it) previously expected a TFC workspace's
`terraform-version` attribute to be either the magic string `"latest"` or an
explicit semver value. But a workspace might have a version constraint instead
(like `~> 1.1.0`), in which case the version check would blow up.
This commit checks whether `terraform-version` is a valid version constraint
before erroring out, and if so, returns success if the local version meets the
constraint.
Because it's not practical to deeply introspect the slice of version space
defined by a constraint, this check is slightly less robust than the version
comparisons below it:
- It can give a false OK on open-ended constraints like `>= 1.1.0`. Say you're
running 1.3.0, it changed the state format, and the TFE instance admin has
not yet added any 1.3.x Terraform versions; your workspace will now break.
- It will give a false not-OK when using different minor versions within a range
that we know to be compatible, e.g. remote constraint of `~> 0.15.0` and local
version of 1.1.0.
- This would be totally useless with the pre-0.14 versions of Terraform, where
patch releases could change state format... but we're not going back in time
to add this feature to them anyway.
Still, in the most common likely case (`~> x.y.z`), it'll complain at you (with
an error you can choose to override) if you're not using the same minor version,
and that seems proportionate, useful, and expected.
When a user runs `terraform refresh` we give them an error message that
tells them to run `terraform apply -refresh-state`. We could just run
that command for them, though. That is what this PR does.
1. ParseDeclaredValues: parses unparsed variables into terraform.InputValues
2. ProbeUndeclaredVariableValues: compares variable declarations with unparsed values to warn/error about undeclared variables
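For context (file names and contents are illustrative), an undeclared variable
is a value supplied without a matching variable block, for example:
```
# variables.tf: the only declared variable
variable "region" {
  type = string
}

# terraform.tfvars: "instance_type" has no matching declaration,
# so it would be flagged as an undeclared variable value.
region        = "us-east-1"
instance_type = "t3.micro"
```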
* Determine whether the migration source or destination is Terraform Cloud
* Handle single-to-single state migrations to Terraform Cloud, using a name
strategy or a tags strategy
* Add end-to-end tests for state migration.
These changes remove all of the preexisting version checking for
individual features, wiping the slate clean with an overall minimum
requirement of a future TFP-API-Version 2.5, which at the time of this
writing is expected to be TFE v202112-1.
It also actually provides that expected TFE version as an actionable
error message, rather than generically saying that it isn't supported or
using the somewhat opaque API version header.
The 'tfe' service was appended to with various versions to denote a new
'feature' implemented by a new 'service'. This quickly proved to not be
scalable, as adding an entry to the discovery document for every feature is
bad.
The new mechanism added was checking the TFP-API-Version header on
requests for a version, instead.
So we'll remove the separation here between different tfe service
'versions' and the separate 'state' service and Just Use TFE, as well as
the TFP-API-Version header for all feature versioning.