We now have RunningInAutomation as a general concern in views.View, so
we no longer need to specify it for each command-specific constructor
separately.
For this initial change I focused only on changing the exported interface
of the views package and let the command-specific views go on having their
own unexported fields containing a copy of the flag because it made this
change less invasive and I wasn't yet sure whether we
ought to have code within command-specific views directly access the
internals of views.View. However, maybe we'll simplify this further in
a later commit if we conclude that these copies of the flag are
burdensome.
The general version of this gets set directly inside the main package,
which might at some future point allow us to make the command package
itself unaware of this "running in automation" idea and thus reinforce
that it's intended as a presentation-only thing rather than as a
behavioral thing, but we'll save more invasive refactoring for another
day.
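As a rough sketch of where this change leaves us (the field and method
names here are my shorthand, not necessarily the real ones in the
views package):

    package views

    // View is the general view type that command-specific views
    // build on.
    type View struct {
        // runningInAutomation is presentation-only: we use it to
        // suppress suggested commands, never to change behavior.
        runningInAutomation bool
    }

    // SetRunningInAutomation is assumed to be called once, from
    // package main, before constructing command-specific views.
    func (v *View) SetRunningInAutomation(yes bool) {
        v.runningInAutomation = yes
    }

    // PlanView stands in for a command-specific view; for now it
    // keeps its own copy of the flag rather than reaching into the
    // internals of View.
    type PlanView struct {
        view                *View
        runningInAutomation bool
    }

    func NewPlanView(view *View) *PlanView {
        return &PlanView{
            view:                view,
            runningInAutomation: view.runningInAutomation,
        }
    }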
This "running in automation" idea is a best effort thing where we try to
avoid printing out specific suggestions of commands to run in the main
workflow when the user is running Terraform inside a wrapper script or
other automation, because they probably don't want to bypass that
automation.
This just makes that information available to the main views.View type,
so we can then make use of it in the implementation of more specialized
view types that embed views.View.
However, nothing is using it as of this commit. We'll use it in later
commits.
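For illustration only, a specialized view embedding views.View might
eventually consult the flag like this (the helper and message are
hypothetical):

    package views

    import "fmt"

    type View struct {
        runningInAutomation bool
    }

    // ApplyView stands in for a specialized view embedding View.
    type ApplyView struct {
        View
    }

    func (v *ApplyView) suggestNextCommand() {
        if v.runningInAutomation {
            return // a wrapper script is probably in charge
        }
        fmt.Println(`Run "terraform apply" to apply these changes.`)
    }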
This pattern follows as a natural consequence of how for_each is defined,
but I've noticed from community forum Q&A that newcomers often don't
immediately notice the connection between what for_each expects as input
and what a for_each resource produces as a result, so my aim here is to
show a short example of that in the hope of helping folks see the link
here and get ideas on how to employ the technique in other situations.
In practice the current implementation isn't actually using this, and
if we need access to states in future we can find them in either the
plan.PriorState or plan.PrevRunState fields, depending on which stage
we want a state snapshot of.
Previously in refresh-only mode we skipped making any updates to the
working state at all. That's not correct, though: if the state upgrade or
refresh steps detected changes then we need to at least commit _those_ to
the working state, because those can then be detected by downstream
objects like output values.
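In miniature, with stand-in types rather than the real graph node
code, the corrected flow is:

    package terraform

    // Even in refresh-only mode the upgrade/refresh results must be
    // committed to the working state, because downstream objects
    // like output values read from it.
    func planInstance(refresh func() string, commit func(string), skipPlanChanges bool) {
        refreshed := refresh()
        commit(refreshed) // previously skipped in refresh-only mode

        if skipPlanChanges {
            return // refresh-only: stop before planning changes
        }
        // ...normal change planning continues here...
    }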
Our model for plans/planfile has unfortunately grown inconsistent with
changes to our modeling of plans.Plan.
Originally we considered the plan "header" and the planned changes as an
entirely separate artifact from the prior state, but we later realized
that carrying the prior state around with the plan is important to
ensuring we always have enough context to faithfully render a plan to the
user, and so we added the prior state as a field of plans.Plan.
More recently we've also added the "previous run state" to plans.Plan for
similar reasons.
Unfortunately, as a result of that modeling drift, our ReadPlan method
was silently returning an incomplete plans.Plan object, causing
use-cases like "terraform show" to produce slightly different results
because the plan object didn't round-trip completely.
As a short-term tactical fix, here we add state snapshot reading into the
ReadPlan function. This is not an ideal solution because it means that
in the case of applying a plan, where we really do need access to the
state _file_, we'll end up reading the prior state file twice. However,
the goal here is only to heal the modeling quirk with as little change
as possible, because we're not currently at a point where we'd be willing
to risk regressions from a larger refactoring.
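The caller-facing effect is that ReadPlan alone now yields a complete
plan; a minimal sketch, assuming this era's package layout:

    package main

    import (
        "log"

        "github.com/hashicorp/terraform/plans/planfile"
    )

    func main() {
        r, err := planfile.Open("tfplan")
        if err != nil {
            log.Fatal(err)
        }

        plan, err := r.ReadPlan()
        if err != nil {
            log.Fatal(err)
        }

        // These were previously left nil by ReadPlan, breaking the
        // round-trip for commands like "terraform show".
        _ = plan.PriorState
        _ = plan.PrevRunState
    }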
The connection block schema defines the bastion_port argument as a
number, but we were incorrectly trying to convert it from a string. This
commit fixes that by attempting to convert the cty.Number to the int
result type, returning the error on failure.
An alternative approach would be to change the bastion_port argument in
the schema to be a string, matching the port argument. I'm less sure
about the secondary effects of that change, though.
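A minimal sketch of the corrected conversion, using go-cty's gocty
helper:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/gocty"
    )

    func main() {
        // bastion_port arrives as a cty.Number per the connection
        // block schema.
        v := cty.NumberIntVal(2222)

        var port int
        if err := gocty.FromCtyValue(v, &port); err != nil {
            // Roughly the error path described above: report the
            // conversion failure instead of mis-parsing a string.
            fmt.Println("invalid bastion_port:", err)
            return
        }
        fmt.Println("bastion_port:", port) // bastion_port: 2222
    }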
Several changes to lookup to improve how we handle marked values:
- If the entire collection is marked, preserve the marks on any result
(whether successful or fallback)
- If a returned value from the collection is marked, preserve the marks
from only that value, combined with any overall collection marks
- Retain marks on the fallback value when it is returned, combined with
any overall collection marks
- Include marks on the key in the result, as otherwise the result
that ends up being selected could reveal what the sensitive key was
- Retain collection marks when returning an unknown value for a not
wholly-known collection
See also https://github.com/zclconf/go-cty/pull/98
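As a toy illustration of that mark plumbing, using ad-hoc string marks
rather than Terraform's real mark values:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        collection := cty.MapVal(map[string]cty.Value{
            "a": cty.StringVal("apple").Mark("sensitive"),
            "b": cty.StringVal("banana"),
        }).Mark("collection")
        key := cty.StringVal("a").Mark("key")

        // Unmark the inputs, do the lookup, then reapply every mark
        // that could influence or reveal the result.
        col, colMarks := collection.Unmark()
        k, keyMarks := key.Unmark()

        result := col.Index(k).WithMarks(colMarks, keyMarks)
        fmt.Println(result.Marks()) // collection, key, and sensitive
    }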
Similar to cty's implementation, we only need to preserve marks from the
value itself, not any nested values it may contain. This means that
taking the length of an unmarked list with marked elements results in an
unmarked number.
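For example, under this rule the length of an unmarked list whose
elements are marked comes out unmarked:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        l := cty.ListVal([]cty.Value{
            cty.StringVal("a").Mark("sensitive"),
            cty.StringVal("b"),
        })

        // Only the list's own (shallow) marks carry over, and this
        // list itself is unmarked.
        _, listMarks := l.Unmark()
        n := cty.NumberIntVal(int64(l.LengthInt())).WithMarks(listMarks)

        fmt.Println(n.IsMarked()) // false
    }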
If we don't do this then we can create a situation where refresh detects
that an object already doesn't exist but we plan to destroy it anyway,
rather than returning "no changes" as expected.
The "previous run state" is our record of what the previous run of
Terraform considered to be its outcome, but in order to do anything useful
with that we must ensure that the data inside conforms to the current
resource type schemas, which might be different than the schemas that were
current during the previous run if the relevant provider has since been
upgraded.
For that reason, we'll start off with the previous run state set
exactly equal to what was saved in the prior snapshot (modulo any
changes that happened during a state file format upgrade), but then
during our planning operation we'll overwrite individual resource
instance objects with the result of upgrading them. That way, if we
successfully run plan to completion, the previous run state will
always have a schema compatible with the "prior state" (the result of
refreshing) for managed resources, and so the caller can meaningfully
compare the two in order to detect and describe any out-of-band
changes that occurred since the previous run.
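The intended payoff, loosely sketched with hypothetical helpers
(allResourceInstanceAddrs and objectsEqual don't exist under those
names):

    // PrevRunState: the previous run's outcome, schema-upgraded.
    // PriorState:   those same objects after refreshing.
    for _, addr := range allResourceInstanceAddrs(plan.PriorState) {
        prev := plan.PrevRunState.ResourceInstance(addr)
        prior := plan.PriorState.ResourceInstance(addr)
        if !objectsEqual(prev, prior) {
            // changed outside of Terraform since the previous run
        }
    }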
Until now we've not really cared much about the state snapshot produced
by the previous Terraform operation, except to use it as a jumping-off
point for our refresh step.
However, we'd like to be able to report to an end-user whenever Terraform
detects a change that occurred outside of Terraform, because that's often
helpful context for understanding why a plan contains changes that don't
seem to have corresponding changes in the configuration.
As part of reporting that we'll need to keep track of the state as it
was before we did any refreshing work, so we can then compare that against
the state after refreshing. To retain enough data to achieve that, the
existing Plan field State is now two fields: PrevRunState and PriorState.
This also includes a very shallow change in the core package to make it
populate something somewhat-reasonable into this field so that integration
tests can function reasonably. However, this shallow implementation isn't
really sufficient for real-world use of PrevRunState because we'll
actually need to update PrevRunState as part of planning in order to
incorporate the results of any provider-specific state upgrades to make
the PrevRunState objects compatible with the current provider schema, or
else our diffs won't be valid. This deeper awareness of PrevRunState in
Terraform Core will follow in a subsequent commit, prior to anything else
making use of Plan.PrevRunState.
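In outline, the plan model now carries both snapshots (a sketch with
other fields omitted):

    package plans

    import "github.com/hashicorp/terraform/states"

    type Plan struct {
        // PrevRunState is the state as the previous Terraform run
        // left it, upgraded to current schemas but not refreshed.
        PrevRunState *states.State

        // PriorState is the result of upgrading and refreshing,
        // and is the baseline that planned changes are relative to.
        PriorState *states.State
    }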
The set of paths which caused a resource update to require replacement
has been stored in the plan since 0.15.0 (#28201). This commit adds a
simple JSON representation of these paths, allowing consumers of this
format to determine exactly which paths caused the resource to be
replaced.
This representation is intentionally more loosely encoded than the JSON
state serialization of paths used for sensitive attributes. Instead of a
path step being represented by an object with type and value, we use a
more-JavaScripty heterogeneous array of numbers and strings. Any
practical consumer of this format will likely traverse an object tree
using the index operator, which should work more easily with this
format. It also allows easy prefix comparison for consumers which are
tracking paths.
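A hypothetical encoder showing the intended shape of the output:

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // pathToJSON flattens a cty.Path into the loose encoding
    // described above: attribute steps become strings, index steps
    // become numbers or strings.
    func pathToJSON(path cty.Path) ([]byte, error) {
        steps := make([]interface{}, 0, len(path))
        for _, step := range path {
            switch s := step.(type) {
            case cty.GetAttrStep:
                steps = append(steps, s.Name)
            case cty.IndexStep:
                switch s.Key.Type() {
                case cty.Number:
                    f, _ := s.Key.AsBigFloat().Float64()
                    steps = append(steps, f)
                case cty.String:
                    steps = append(steps, s.Key.AsString())
                }
            }
        }
        return json.Marshal(steps)
    }

    func main() {
        path := cty.GetAttrPath("disks").Index(cty.NumberIntVal(0)).GetAttr("size")
        out, _ := pathToJSON(path)
        fmt.Println(string(out)) // ["disks",0,"size"]
    }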
While updating the documentation to include this new field, I noticed
that some others were missing, so added them too.
Passing a provider into a module requires that it be named within the
module. This would previously pass validation, but core would then
fail to resolve the provider, resulting in an unclear "provider not
found" error.
writeNestedAttrDiff and writeAttrDiff were both printing the
"unchanged attribute" message. This removes one of the redundant
prints.
Fixing this led me (in a very roundabout way) to realize that
NestedType attributes were printing a sum total of unchanged
attributes, including those in entirely unchanged elements, while
*not* printing the total of unchanged elements. I added the necessary
logic to count and print the number of unchanged elements for maps
and lists.
If a JSON diagnostic value has a highlight end offset which is before
the highlight start offset, this would previously panic. This commit
adds a normalization step to prevent the crash.
Some diagnostic sources (I'm looking at you, HCL) fail to set the end of
the subject range. This is a bug in those code paths, but we can ensure
that we generate valid JSON diagnostics by checking for it here.
By doing so before the range normalization, we ensure that we generate a
unit width highlight whenever possible, so that at least something
useful is displayed.
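In miniature, the two steps look something like this; whether the real
code swaps or clamps the reversed range is a detail I'm sketching
rather than quoting:

    package main

    import "fmt"

    type pos struct{ Line, Column, Byte int }

    // normalize first gives a missing end position its unit-width
    // default, then repairs an end that falls before the start.
    func normalize(start, end pos) (pos, pos) {
        if end == (pos{}) { // end never set, e.g. by some HCL paths
            end = pos{start.Line, start.Column + 1, start.Byte + 1}
        }
        if end.Byte < start.Byte {
            start, end = end, start
        }
        return start, end
    }

    func main() {
        s, e := normalize(pos{1, 5, 4}, pos{})
        fmt.Println(s, e) // {1 5 4} {1 6 5}
    }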
If the user gives an index-less address for a resource that expects
instance keys then previously we would've emitted one error per declared
instance of the resource, which is overwhelming and not especially
helpful.
Instead, we'll deal with that check prior to expanding resources into
resource instances, and thus we can report a single error which talks
about all of the instances at once.
This does unfortunately come at the expense of splitting the logic for
dealing with the "force replace" addresses into two places, which will
likely make later maintenance harder. In an attempt to mitigate that,
I've included a comment in each place that mentions the other place, which
hopefully future maintainers will keep up-to-date if that situation
changes.
This achieves an effect similar to pre-tainting an object, but does so
within the context of a normal plan and apply, avoiding the need for an
intermediate state where the old object still exists but is marked as
tainted.
The core functionality for this was already present, so this commit is
just the UI-level changes to make that option available for use and to
explain how it contributed to the resulting plan in Terraform's output.
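Concretely, that means runs like this are now possible:

    terraform plan -replace=aws_instance.example
    terraform apply -replace=aws_instance.example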
If a root module declares a required_provider but has no configuration,
add a graph node for the provider as if there were an empty
configuration. This will allow the provider to be referenced by name in
module call provider maps, so that a module can pass a default provider
by name to a submodule.
Normally these nodes are added by the MissingProviderTransformer, but
they need to be in place earlier to resolve any possible "proxy provider
nodes" within modules.
When rendering a plan diff, sensitive resource attributes would
previously omit the "forces replacement" comment, which can lead to
confusion when the only reason for a resource being replaced is a
sensitive attribute.
It's been a long time since we gave this page an overhaul, and with
our ongoing efforts to make plan and apply incorporate all of the
side-effects that might need to be carried out against a
configuration, it seems like a good time for some restructuring in
that vein.
The starting idea here is to formally split the many "terraform plan"
options into a few different categories:
- Planning modes
- Planning options
- Other options
The planning modes and options are the subset that are also accepted by
"terraform apply" when it's running in its default mode of generating a
plan and then prompting for interactive approval of it. This then allows
us to avoid duplicating all of that information on the "terraform apply"
page, and thus allows us to spend more words discussing each of them.
This set of docs is intended as a fresh start into which we'll be able to
more surgically add in the information about -refresh-only and -replace=...
once we have those implemented. Consequently there are some parts of this
which may seem a little overwrought for what it's currently
describing; that's a result of my having prepared this by just
deleting the -refresh-only and -replace=... content from our initial
docs draft and submitting the result, in anticipation of re-adding
the parts I've deleted here in the very near future in other commits.
This is to make it more obvious at all uses of this field that it's not
something to be used for anything other than UI decisions, hopefully
prompting a reader of code elsewhere to refer to the comments to
understand why it has this unusual prefix and thus see what its intended
purpose is.
This only includes the internal mechanisms to make it work, and not any
of the necessary UI changes to "terraform plan" and "terraform apply" to
activate it yet.
The force-replace options are ultimately handled inside the
NodeAbstractResourceInstance.plan method, at the same place we handle the
similar situation of the provider indicating that replacement is needed,
and so the rest of the changes here are just to propagate the settings
through all of the layers in order to reach that point.
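A toy model of that decision point (the real types and names differ):

    package main

    import "fmt"

    type action string

    const (
        update           action = "update"
        deleteThenCreate action = "delete-then-create"
    )

    // chooseAction promotes a planned Update to a replace when the
    // instance address was named via -replace=..., mirroring the
    // provider-requested replacement path.
    func chooseAction(planned action, addr string, forceReplace []string) (action, string) {
        if planned != update {
            return planned, ""
        }
        for _, fr := range forceReplace {
            if fr == addr {
                return deleteThenCreate, "replacing by request"
            }
        }
        return planned, ""
    }

    func main() {
        a, why := chooseAction(update, "aws_instance.a", []string{"aws_instance.a"})
        fmt.Println(a, why) // delete-then-create replacing by request
    }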
This only includes the core mechanisms to make it work. There's not yet
any way to turn this mode on as an end-user, because we have to do some
more work at the UI layer to present this well before we could include it
as an end-user-visible feature in a release.
At the lowest level of abstraction inside the graph nodes themselves, this
effectively mirrors the existing option to disable refreshing with a new
option to disable change-planning, so that either "half" of the process
can be disabled. As far as the nodes are concerned it would be possible
in principle to disable _both_, but the higher-level representation of
these modes prevents that combination from reaching Terraform Core in
practice, because we block using -refresh-only and -refresh=false at the
same time.
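A toy of the guard that keeps both halves from being disabled at once:

    package main

    import (
        "errors"
        "fmt"
    )

    type planOpts struct {
        skipRefresh     bool // -refresh=false
        skipPlanChanges bool // -refresh-only
    }

    func validate(o planOpts) error {
        if o.skipRefresh && o.skipPlanChanges {
            return errors.New("cannot use -refresh-only and -refresh=false together")
        }
        return nil
    }

    func main() {
        fmt.Println(validate(planOpts{skipRefresh: true, skipPlanChanges: true}))
    }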
Previously we were repeating some logic in the UI layer in order to
recover relevant additional context about a change to report to a user.
In order to help keep things consistent, and to have a clearer path for
adding more such things in the future, here we capture this user-facing
idea of an "action reason" within the plan model, and then use that
directly in order to decide how to describe the change to the user.
For the moment the "tainted" situation is the only one that gets a special
message, matching what we had before, but we can expand on this in future
in order to give better feedback about the other replace situations too.
This also preemptively includes the "replacing by request" reason, which
is currently not reachable but will be used in the near future as part of
implementing the -replace=... plan command line option to allow forcing
a particular object to be replaced.
So far we don't have any special reasons for anything other than replacing,
which makes sense because replacing is the only one that is in a sense
a special case of another action (Update), but this could expand to
other kinds of reasons in the future, such as explaining which of a
few different reasons caused a data source read to be deferred until
the apply step.
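In outline, the model is a small enumeration hung off each change; the
names below are my best guess at the shape rather than verbatim:

    package plans

    type ResourceInstanceChangeActionReason rune

    const (
        // No special reason: the action follows from comparing
        // configuration and state alone.
        ResourceInstanceChangeNoReason ResourceInstanceChangeActionReason = 0

        // Replacing because the previous object is marked tainted.
        ResourceInstanceReplaceBecauseTainted ResourceInstanceChangeActionReason = 'T'

        // Replacing because the user requested it via -replace=...;
        // not yet reachable as of this commit.
        ResourceInstanceReplaceByRequest ResourceInstanceChangeActionReason = 'R'
    )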