Commit Graph

114 Commits

Author SHA1 Message Date
Martin Atkins bee7403f3e command/workspace_delete: Allow deleting a workspace with empty husks
Previously we would reject attempts to delete a workspace if its state
contained any resources at all, even if none of the resources had any
resource instance objects associated with them.

Nowadays there isn't any situation where the normal Terraform workflow
will leave behind resource husks, and so this isn't as problematic as it
might've been in the v0.12 era, but nonetheless what we actually care
about for this check is whether there might be any remote objects that
this state is tracking, and for that it's more precise to look for
non-nil resource instance objects, rather than whole resources.

This also includes some adjustments to our error messaging to give more
information about the problem and to use terminology more consistent with
how we currently talk about this situation in our documentation and
elsewhere in the UI.

We were also using the old State.HasResources method as part of some of
our tests. I considered preserving it to avoid changing the behavior of
those tests, but the new check seemed close enough to the intent of those
tests that it wasn't worth maintaining this method that wouldn't be used
in any main code anymore. I've therefore updated those tests to use
the new HasResourceInstanceObjects method instead.
2021-10-13 13:54:11 -07:00
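
In outline, the new check looks something like this; a minimal sketch in Go, where the method name comes from the commit above but the state-walking details are illustrative of the internal states package rather than a verbatim copy:

```go
package states

// HasResourceInstanceObjects reports whether the state is tracking any
// remote objects, by looking for non-nil current or deposed resource
// instance objects rather than merely for resources.
func (s *State) HasResourceInstanceObjects() bool {
	for _, ms := range s.Modules {
		for _, rs := range ms.Resources {
			for _, is := range rs.Instances {
				// A resource "husk" has a resource entry but no objects;
				// only a non-nil object implies something remote to track.
				if is.Current != nil || len(is.Deposed) != 0 {
					return true
				}
			}
		}
	}
	return false
}
```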
James Bardin 87e852c832
Merge pull request #29741 from hashicorp/jbardin/test-fix
fix test fixture with multiple provider instances
2021-10-13 09:53:02 -04:00
James Bardin 73b1263a86 wrap multiple provider creations into a factory fn
When a test uses multiple instances of the same provider, we may need to
have separate objects to prevent overwriting of the MockProvider state.
Create a completely new MockProvider in each factory function call
rather than re-using the original provider value.
2021-10-12 17:47:50 -04:00
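
The fix follows the usual providers.Factory shape; a minimal sketch, where the construct callback is a hypothetical hook for per-test setup:

```go
import "github.com/hashicorp/terraform/internal/providers"

// newMockProviderFactory returns a providers.Factory that builds a brand
// new MockProvider on every call, so two instances of the same provider
// in one test can never share *Called bookkeeping.
func newMockProviderFactory(construct func() *MockProvider) providers.Factory {
	return func() (providers.Interface, error) {
		return construct(), nil // a fresh mock per provider instantiation
	}
}
```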
James Bardin 903ae5edfd fix test fixtures with multiple providers
Allow these to share the same backing MockProvider.
2021-10-12 14:34:34 -04:00
James Bardin 656f03b250 fix test fixture that had the instance in the wrong module
Make the state match the fixture config. The old test was not
technically invalid, but because it caused multiple instances of the
provider to be created, they were backed by the same MockProvider value,
resulting in the `*Called` fields interfering.
2021-10-12 13:53:02 -04:00
James Bardin 3f4b680f1c Check for nil change during apply
Because NodeAbstractResourceInstance.readDiff can return a nil change,
we must check for that in all callers.
2021-10-08 16:46:29 -04:00
James Bardin a036109bc1 add comment about when we call ConfigureProvider 2021-10-08 15:23:36 -04:00
James Bardin 03f71c2f06 fixup tests for MockProvider changes
Resetting the *Called fields and enforcing configuration broke a few
tests.
2021-10-08 08:42:06 -04:00
James Bardin 22b400b8de skip refreshing deposed during destroy plan
The destroy plan should not require a configured provider (the complete
configuration is not evaluated, so providers cannot be configured).

Deposed instances were being refreshed during the destroy plan, because
this instance type is only ever destroyed and shares the same
implementation between plan and walkPlanDestroy. Skip refreshing during
walkPlanDestroy.
2021-10-07 16:51:48 -04:00
James Bardin ad9944e523 test that providers are configured for calls
Have the MockProvider ensure that Configure is always called before any
methods that may require a configured provider.

Ensure the MockProvider *Called fields are zeroed out when re-using the
provider instance.
2021-10-07 16:48:56 -04:00
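
A sketch of the kind of guard this adds, assuming the mock records configuration in a ConfigureProviderCalled field like its other *Called bookkeeping (names illustrative):

```go
import "fmt"

// assertConfigured is called at the top of mock methods that would require
// a configured provider in real use, failing early if ConfigureProvider
// was never invoked on this instance.
func (p *MockProvider) assertConfigured() error {
	p.Lock()
	defer p.Unlock()
	if !p.ConfigureProviderCalled {
		return fmt.Errorf("configuration required before calling this method")
	}
	return nil
}
```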
James Bardin c68422d125
Merge pull request #29682 from hashicorp/jbardin/less-data-depends_on
Refine data depends_on dependency checks
2021-10-04 11:39:21 -04:00
Martin Atkins 6a98e4720c plans/planfile: Create takes most arguments via a struct type
Previously the planfile.Create function had already accumulated arguably
too many positional arguments, and I'm intending to add another one in
a subsequent commit, so this is preparation to make the callsites more
readable (subjectively) and to make it clearer how we can extend this
function's arguments to include further components in a plan file.

There's no difference in observable functionality here. This is just
passing the same set of arguments in a slightly different way.
2021-10-01 14:43:58 -07:00
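
The shape of the change, sketched with illustrative field names (the real CreateArgs fields may differ in detail): grouping the components into a struct lets callsites name what they pass, and lets later commits add components without touching every caller.

```go
import (
	"github.com/hashicorp/terraform/internal/configs/configload"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states/statefile"
)

// CreateArgs bundles the components to write into a new plan file.
type CreateArgs struct {
	// ConfigSnapshot is the configuration the plan was created from.
	ConfigSnapshot *configload.Snapshot

	// PreviousRunStateFile is the state before refreshing, while
	// StateFile is the (refreshed) state the plan acted on.
	PreviousRunStateFile *statefile.File
	StateFile            *statefile.File

	// Plan is the plan itself.
	Plan *plans.Plan
}

// Create writes a new plan file to filename with the given contents.
func Create(filename string, args CreateArgs) error {
	// ...serialize each component into the plan file archive...
	return nil
}
```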
Martin Atkins 8d193ad268 core: Simplify and centralize plugin availability checks
Historically the responsibility for making sure that all of the available
providers are of suitable versions and match the appropriate checksums has
been split rather inexplicably over multiple different layers, with some
of the checks happening as late as creating a terraform.Context.

We're gradually iterating towards making that all be handled in one place,
but in this step we're just cleaning up some old remnants from the
main "terraform" package, which is now no longer responsible for any
version or checksum verification and instead just assumes it's been
provided with suitable factory functions by its caller.

We do still have a pre-check here to make sure that we at least have a
factory function for each plugin the configuration seems to depend on,
because if we don't do that up front then the problem ends up getting
caught deep inside the Terraform runtime, often inside a concurrent
graph walk, and thus it's not deterministic which codepath will happen to
catch it on a particular run.

As of this commit, this actually does leave some holes in our checks: the
command package is using the dependency lock file to make sure we have
exactly the provider packages we expect (exact versions and checksums),
which is the most crucial part, but we don't yet have any spot where
we make sure that the lock file is consistent with the current
configuration, and we are no longer preserving the provider checksums as
part of a saved plan.

Both of those will come in subsequent commits. While it's unusual to have
a series of commits that briefly subtracts functionality and then adds
back in equivalent functionality later, the lock file checking is the only
part that's crucial for security reasons, with everything else mainly just
being to give better feedback when folks seem to be using Terraform
incorrectly. The other bits are therefore mostly cosmetic and okay to be
absent briefly as we work towards a better design that is clearer about
where that responsibility belongs.
2021-10-01 14:43:58 -07:00
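
The up-front pre-check amounts to a map lookup per required provider; a minimal sketch, with a hypothetical function name:

```go
import (
	"fmt"

	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/providers"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// checkProviderFactories verifies we have a factory function for every
// provider the configuration and state refer to, so that a missing plugin
// is reported deterministically instead of surfacing somewhere inside a
// concurrent graph walk.
func checkProviderFactories(factories map[addrs.Provider]providers.Factory, required []addrs.Provider) tfdiags.Diagnostics {
	var diags tfdiags.Diagnostics
	for _, addr := range required {
		if _, ok := factories[addr]; !ok {
			diags = diags.Append(fmt.Errorf(
				"provider %s is required, but no plugin is available for it; run \"terraform init\" to install providers",
				addr,
			))
		}
	}
	return diags
}
```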
James Bardin 618e9cf8ec test for unexpected data reads 2021-09-30 17:13:33 -04:00
James Bardin 016463ea9c don't check all ancestors for data depends_on
Only walk the depends_on ancestors for transitive dependencies when we're
not pointed directly at a resource. We can't be much more precise here,
since in order to maintain our guarantee that data sources will wait for
explicit dependencies, if those dependencies happen to be a module,
output, or variable, we have to find some upstream managed resource in
order to check for a planned change.
2021-09-30 16:43:09 -04:00
Martin Atkins f60d55d6ad core: Emit only one warning for move collisions in destroy-plan mode
Our current implementation of destroy planning includes secretly running a
normal plan first, in order to get its effect of refreshing the state.

Previously our warning about colliding moves would betray that
implementation detail because we'd return it from both of our planning
operations here and thus show the message twice. That would also have
happened in theory for any other warnings emitted by both plan operations,
but it's the move collision warning that made it immediately visible.

We'll now only return warnings from the initial plan if we're also
returning errors from that plan, and thus the warnings of both plans can
never mix together into the same diags and thus we'll avoid duplicating
any warnings.

This does mean that we'd lose any warnings which might hypothetically
emerge only from the hidden normal plan and not from the subsequent
destroy plan, but we'll accept that as an okay tradeoff here because those
warnings are likely to not be super relevant to the destroy case anyway,
or else we'd emit them from the destroy-plan walk too.
2021-09-27 15:46:36 -07:00
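
The rule reduces to a small piece of control flow; a sketch under the assumption that the destroy-plan entry point holds diagnostics from both walks (names hypothetical):

```go
import "github.com/hashicorp/terraform/internal/tfdiags"

// destroyPlanDiags keeps the hidden normal plan's diagnostics (warnings
// included) only when that plan failed outright; otherwise its warnings
// are dropped so they can never mix with, and duplicate, the warnings
// from the destroy-plan walk itself.
func destroyPlanDiags(normalDiags, destroyDiags tfdiags.Diagnostics) tfdiags.Diagnostics {
	if normalDiags.HasErrors() {
		return normalDiags
	}
	return destroyDiags
}
```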
Alisdair McDiarmid f1e9d88ddc
Merge pull request #29640 from hashicorp/alisdair/fix-refresh-only-with-orphans
core: Fix refresh-only interaction with orphans
2021-09-24 09:25:46 -04:00
Martin Atkins d97ef10bb8 core: Don't return other errors if move statements are invalid
Because our validation rules depend on some dynamic results produced by
actually running the plan, we deal with moves in a "backwards" order where
we try to apply them first -- ignoring anything strange we might find --
and then validate the original statements only after planning.

An unfortunate consequence of that approach is that when the move
statements are invalid it's likely that move execution will not fully
complete, and so the generated plan is likely to be incorrect and might
well include errors resulting from the unresolved moves.

To mitigate that, here we let any move validation errors supersede all
other diagnostics that the plan phase might've generated, in the hope that
it'll help the user focus on fixing the incorrect move statements without
creating confusion by reporting errors that only appeared as a quirk of
how Terraform worked around the invalid move statements earlier.
2021-09-23 14:37:08 -07:00
Martin Atkins 1bff623fd9 core: Report a warning if any moves get blocked
In most cases Terraform will be able to automatically fully resolve all
of the pending move statements before creating a plan, but there are some
edge cases where we can end up wanting to move one object to a location
where another object is already declared.

One relatively-obvious example is if someone uses "terraform state mv" in
order to create a set of resource instance bindings that could never have
arisen in normal Terraform use.

A less obvious example arises from the interactions between moves at
different levels of granularity. If we are both moving a module to a new
address and moving a resource into an instance of the new module at the
same time, the old module might well have already had a resource of the
same name and so the resource move will be unresolvable.

In these situations Terraform will move the objects as far as possible,
but because it's never valid for a move "from" address to still be
declared in the configuration Terraform will inevitably always plan to
destroy the objects that didn't find a final home. To give some additional
explanation for that result, here we'll add a warning which describes
what happened.

This is not a particularly actionable warning because we don't really
have enough information to guess what the user intended, but we do at
least suggest that they might be able to use the "terraform state" family
of subcommands to repair the ambiguous situation before planning, if they
want a different result than what Terraform proposed.
2021-09-23 14:37:08 -07:00
Martin Atkins a1a713cf28 core: Report ActionReasons when we plan to delete "orphans"
There are a few different reasons why a resource instance tracked in the
prior state might be considered an "orphan", but previously we reported
them all identically in the planned changes.

In order to help users understand the reason for a surprising planned
delete, we'll now try to specify an additional reason for the planned
deletion, covering all of the main reasons why that could happen.

This commit only introduces the new detail to the plans.Changes result,
though it also incidentally exposes it as part of the JSON plan result
in order to keep that working without returning errors in these new
cases. We'll expose this information in the human-oriented UI output in
a subsequent commit.
2021-09-23 14:37:08 -07:00
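
Sketched in the style of the plans package (the exact set and spelling of constants here is illustrative), the new detail is an action-reason enumeration attached to each planned change:

```go
// ResourceInstanceChangeActionReason explains why a change has the action
// it has, beyond what the action itself already says.
type ResourceInstanceChangeActionReason rune

const (
	// ResourceInstanceChangeNoReason is the zero value: no extra detail.
	ResourceInstanceChangeNoReason ResourceInstanceChangeActionReason = 0

	// Some reasons a prior-state "orphan" may be planned for deletion:
	ResourceInstanceDeleteBecauseNoResourceConfig ResourceInstanceChangeActionReason = 'N' // resource block removed
	ResourceInstanceDeleteBecauseCountIndex       ResourceInstanceChangeActionReason = 'C' // count shrank below this index
	ResourceInstanceDeleteBecauseEachKey          ResourceInstanceChangeActionReason = 'E' // for_each key no longer present
	ResourceInstanceDeleteBecauseNoModule         ResourceInstanceChangeActionReason = 'M' // containing module instance removed
)
```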
Alisdair McDiarmid ceb580ec40 core: Fix refresh-only interaction with orphans
When planning in refresh-only mode, we must not remove orphaned
resources due to changed count or for_each values from the planned
state. This was previously happening because we failed to pass through
the plan's skip-plan-changes flag to the instance orphan node.
2021-09-23 16:38:08 -04:00
Martin Atkins 83f0376673 refactoring: ApplyMoves new return type
When we originally stubbed ApplyMoves we didn't know yet how exactly we'd
be using the result, so we made it a double-indexed map allowing looking
up moves in both directions.

However, in practice we only actually need to look up old addresses by new
addresses, and so this commit first removes the double indexing so that
each move is only represented by one element in the map.

We also need to describe situations where a move was blocked, because in
a future commit we'll generate some warnings in those cases. Therefore
ApplyMoves now returns a MoveResults object which contains both a map of
changes and a map of blocks. The map of blocks isn't used yet as of this
commit, but we'll use it in a later commit to produce warnings within
the "terraform" package.
2021-09-22 09:01:10 -07:00
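
A sketch of the resulting shape, with illustrative types: one map answers "where did this instance come from?" keyed by its new address, and the other records blocked moves for the warnings to come.

```go
import "github.com/hashicorp/terraform/internal/addrs"

// MoveResults describes the outcome of applying all move statements.
type MoveResults struct {
	// Changes records each completed move, keyed by the instance's new
	// (current) address so old addresses can be looked up by new ones.
	Changes map[addrs.UniqueKey]MoveSuccess

	// Blocked records moves that could not complete because another
	// object was already present at the target address.
	Blocked map[addrs.UniqueKey]MoveBlocked
}

type MoveSuccess struct {
	From, To addrs.AbsResourceInstance
}

type MoveBlocked struct {
	Wanted, Actual addrs.AbsMoveable
}
```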
Martin Atkins f0034beb33 core: refactoring.ImpliedMoveStatements replaces NodeCountBoundary
Going back a long time we've had a special magic behavior which tries to
recognize a situation where a module author either added or removed the
"count" argument from a resource that already has instances, and to
silently rename the zeroth or no-key instance so that we don't plan to
destroy and recreate the associated object.

Now we have a more general idea of "move statements", and specifically
the idea of "implied" move statements which replicates the same heuristic
we used to use for this behavior, we can treat this magic renaming rule as
just another "move statement", special only in that Terraform generates it
automatically rather than it being written out explicitly in the
configuration.

In return for wiring that in, we can now remove altogether the
NodeCountBoundary graph node type and its associated graph transformer,
CountBoundaryTransformer. We handle moves as a preprocessing step before
building the plan graph, so we no longer need to include any special nodes
in the graph to deal with that situation.

The test updates here are mainly for the graph builders themselves, to
acknowledge that indeed we're no longer inserting the NodeCountBoundary
vertices. The vertices that NodeCountBoundary previously depended on now
become dependencies of the special "root" vertex, although in many cases
here we don't see that explicitly because of the transitive reduction
algorithm, which notices when there's already an equivalent indirect
dependency chain and removes the redundant edge.

We already have plenty of test coverage for these "count boundary" cases
in the context tests whose names start with TestContext2Plan_count and
TestContext2Apply_resourceCount, all of which continued to pass here
without any modification and so are not visible in the diff. The test
functions particularly relevant to this situation are:
 - TestContext2Plan_countIncreaseFromNotSet
 - TestContext2Plan_countDecreaseToOne
 - TestContext2Plan_countOneIndex
 - TestContext2Apply_countDecreaseToOneCorrupted

The last of those in particular deals with the situation where we have
both a no-key instance _and_ a zero-key instance in the prior state, which
is interesting here because it exercises an intentional interaction
between refactoring.ImpliedMoveStatements and refactoring.ApplyMoves,
where we intentionally generate an implied move statement that produces
a collision and then expect ApplyMoves to deal with it in the same way as
it would deal with all other collisions, and thus ensure we handle both
the explicit and implied collisions in the same way.

This does affect some UI-level tests, because a nice side-effect of this
new treatment of this old feature is that we can now report explicitly
in the UI that we're assigning new addresses to these objects, whereas
before we just said nothing and hoped the user would just guess what had
happened and why they therefore weren't seeing a diff.

The backend/local plan tests actually had a pre-existing bug where they
were using a state with a different instance key than the config called
for but getting away with it because we'd previously silently fixed it up.
That's still fixed up, but now done with an explicit mention in the UI
and so I made the state consistent with the configuration here so that the
tests would be able to recognize _real_ differences where present, as
opposed to the errant difference caused by that inconsistency.
2021-09-20 09:06:22 -07:00
Alisdair McDiarmid 638784b195 cli: Omit move-only drift, except for refresh-only
The set of drifted resources now includes move-only changes, where the
object value is identical but a move has been executed. In normal
operation, we previously displayed these moves twice: once as part of
drift output, and once as part of planned changes.

As of this commit we omit move-only changes from drift display, except
for refresh-only plans. This fixes the redundant output.
2021-09-17 14:47:00 -04:00
Alisdair McDiarmid 61b2d8e3fe core: Plan drift includes move-only changes
Previously, drifted resources included only updates and deletes. To
correctly display the full changes which would result as part of a
refresh-only apply, the drifted resources must also include move-only
changes.
2021-09-17 14:47:00 -04:00
Alisdair McDiarmid d425c26d77
Merge pull request #29589 from hashicorp/alisdair/planfile-drifted-resources
core: Compute resource drift during plan phase, store in plan file
2021-09-17 14:23:04 -04:00
James Bardin 1bd5987a8c
Merge pull request #29573 from hashicorp/renamed-typo
Correct terraform.env deprecation message typo
2021-09-16 17:02:20 -04:00
Alisdair McDiarmid bebf1ad23a core: Compute resource drift after plan walk
Rather than delaying resource drift detection until it is ready to be
presented, here we perform that computation after the plan walk has
completed. The resulting drift is represented like planned resource
changes, using a slice of ResourceInstanceChangeSrc values.
2021-09-16 15:22:37 -04:00
Martin Atkins e6a76d8ba0 core: Fail if a moved resource instance is excluded by -target
Because "moved" blocks produce changes that span across more than one
resource instance address at the same time, we need to take extra care
with them during planning.

The -target option allows for restricting Terraform's attention only to
a subset of resources when planning, as an escape hatch to recover from
bugs and mistakes.

However, we need to avoid any situation where only one "side" of a move
would be considered in a particular plan, because that'd create a new
situation that would be otherwise unreachable and would be difficult to
recover from.

As a compromise then, we'll reject an attempt to create a targeted plan if
the plan involves resolving a pending move and if the source address of
that move is not included in the targets.

Our error message offers the user two possible resolutions: to create an
untargeted plan, thus allowing everything to resolve, or to add additional
-target options to include just the existing resource instances that have
pending moves to resolve.

This compromise recognizes that it is possible -- though hopefully rare --
that a user could potentially both be recovering from a bug or mistake at
the same time as processing a move, if e.g. the bug was fixed by upgrading
a module and the new version includes a new "moved" block. In that edge
case, it might be necessary to just add the one additional address to
the targets rather than removing the targets altogether, if creating a
normal untargeted plan is impossible due to whatever bug they're trying to
recover from.
2021-09-16 08:57:59 -07:00
Chris Arcand 172200808b Correct terraform.env deprecation message typo 2021-09-13 14:21:26 -05:00
James Bardin 8ed9a270e5
Merge pull request #29559 from hashicorp/jbardin/optional-attrs
Prevent `ObjectWithOptionalAttrs` from escaping
2021-09-13 08:58:11 -04:00
James Bardin 7f26531d4f variable types should always be populated 2021-09-13 08:51:32 -04:00
James Bardin 53a73a8ab6 configs: add ConstraintType to config.Variable
In order to handle optional attributes, the Variable type needs to keep
track of the type constraint for decoding and conversion, as well as the
concrete type for creating values and type comparison.

Since the Type field is referenced throughout the codebase, and for
future refactoring if the handling of optional attributes changes
significantly, the constraint is now loaded into an entirely new field
called ConstraintType. This prevents types containing
ObjectWithOptionalAttrs from escaping the decode/conversion codepaths
into the rest of the codebase.
2021-09-13 08:51:32 -04:00
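
A sketch of the two fields described above, trimmed to the relevant parts of configs.Variable:

```go
import "github.com/zclconf/go-cty/cty"

// Variable represents an input variable block (most fields elided).
type Variable struct {
	Name string

	// Type is the concrete type, safe to compare and to use for creating
	// values throughout the rest of the codebase.
	Type cty.Type

	// ConstraintType retains cty's ObjectWithOptionalAttrs information
	// and is used only for decoding and conversion of incoming values.
	ConstraintType cty.Type
}
```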
Martin Atkins 343279110a core: Graph walk loads plugin schemas opportunistically
Previously our graph walker expected to receive a data structure
containing schemas for all of the provider and provisioner plugins used in
the configuration and state. That made sense back when
terraform.NewContext was responsible for loading all of the schemas before
taking any other action, but it no longer has that responsibility.

Instead, we'll now make sure that the "contextPlugins" object reaches all
of the locations where we need schema -- many of which already had access
to that object anyway -- and then load the needed schemas just in time.

The contextPlugins object memoizes schema lookups, so we can safely call
it many times with the same provider address or provisioner type name and
know that it'll still only load each distinct plugin once per Context
object.

As of this commit, the Context.Schemas method is now a public interface
only and not used by logic in the "terraform" package at all. However,
that does leave us in a rather tenuous situation of relying on the fact
that all practical users of terraform.Context end up calling "Schemas" at
some point in order to verify that we have all of the expected versions
of plugins. That's a non-obvious implicit dependency, and so in subsequent
commits we'll gradually move all responsibility for verifying plugin
versions into the caller of terraform.NewContext, which'll heal a
long-standing architectural wart whereby the caller is responsible for
installing and locating the plugin executables but not for verifying that
what's installed is conforming to the current configuration and dependency
lock file.
2021-09-10 14:56:49 -07:00
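
A minimal sketch of that memoization, where convertSchemaResponse is a hypothetical conversion helper: each distinct provider loads at most once per Context, even under a concurrent graph walk.

```go
import (
	"fmt"
	"sync"

	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/providers"
)

type contextPlugins struct {
	providerFactories map[addrs.Provider]providers.Factory

	mu              sync.Mutex
	providerSchemas map[addrs.Provider]*ProviderSchema
}

func newContextPlugins(factories map[addrs.Provider]providers.Factory) *contextPlugins {
	return &contextPlugins{
		providerFactories: factories,
		providerSchemas:   make(map[addrs.Provider]*ProviderSchema),
	}
}

// ProviderSchema returns the schema for the given provider, starting the
// plugin to fetch it only on the first call for each distinct address.
func (cp *contextPlugins) ProviderSchema(addr addrs.Provider) (*ProviderSchema, error) {
	cp.mu.Lock()
	defer cp.mu.Unlock()

	if schema, ok := cp.providerSchemas[addr]; ok {
		return schema, nil // memoized from an earlier call
	}

	factory, ok := cp.providerFactories[addr]
	if !ok {
		return nil, fmt.Errorf("provider %s is not available", addr)
	}
	provider, err := factory()
	if err != nil {
		return nil, err
	}
	defer provider.Close()

	resp := provider.GetProviderSchema()
	schema := convertSchemaResponse(resp) // hypothetical conversion helper
	cp.providerSchemas[addr] = schema
	return schema, nil
}
```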
Martin Atkins 38ec730b0e core: Opportunistic schema loading during graph construction
Previously the graph builders all expected to be given a full manifest
of all of the plugin component schemas that they could need during their
analysis work. That made sense when terraform.NewContext would always
proactively load all of the schemas before doing any other work, but we
now have a load-as-needed strategy for schemas.

We'll now have the graph builders use the contextPlugins object they each
already hold to retrieve individual schemas when needed. This avoids the
need to prepare a redundant data structure to pass alongside the
contextPlugins object, and leans on the memoization behavior inside
contextPlugins to preserve the old behavior of loading each provider's
schema only once.
2021-09-10 14:56:49 -07:00
Martin Atkins a59c2fe1b9 core: EvalContextBuiltin no longer has a "Schemas"
By tolerating ProviderSchema and ProvisionerSchema potentially returning
errors, we can slightly simplify EvalContextBuiltin by having it retrieve
individual schemas when needed directly from the "Plugins" object.

EvalContextBuiltin already needs to be holding a contextPlugins instance
for other reasons anyway, so this allows us to get the same result with
fewer moving parts.
2021-09-10 14:56:49 -07:00
Martin Atkins 2bf1de1f5d core: Context.Schemas in terms of contextPlugins methods
The responsibility for actually instantiating a single plugin and reading
out its schema now belongs to the contextPlugins type, which memoizes the
results by each plugin's unique identifier so that we can avoid retrieving
the same schemas multiple times when working with the same context.

This doesn't change the API of Context.Schemas but it does restore the
spirit of an earlier version of terraform.Context which did all of the
schema loading proactively inside terraform.NewContext. In an earlier
commit we reduced the scope of terraform.NewContext, making schema loading
a separate step, but in the process of doing that removed the effective
memoization of the schema results that terraform.NewContext was providing.

The memoization here will play out in a different way than before, because
we'll be treating each plugin call as separate rather than proactively
loading them all up front, but this is effectively the same because all
of our operation methods on Context call Context.Schemas early in their
work and thus end up forcing all of the necessary schemas to load up
front nonetheless.
2021-09-10 14:56:49 -07:00
Martin Atkins 80b3fcf93e core: Replace contextComponentFactory with contextPlugins
In the v0.12 timeframe we made contextComponentFactory an interface with
the expectation that we'd write mocks of it for tests, but in practice we
ended up just always using the same "basicComponentFactory" implementation
throughout.

In the interests of simplification then, here we replace that interface
and its sole implementation with a new concrete struct type
contextPlugins.

Along with the general benefit that this removes an unneeded indirection,
this also means that we can add additional methods to the struct type
without the usual restriction that interface types prefer to be small.
In particular, in a future commit I'm planning to add methods for loading
provider and provisioner schemas, working with the currently-unused new
fields this commit has included in contextPlugins, as compared to its
predecessor basicComponentFactory.
2021-09-10 14:56:49 -07:00
Martin Atkins dcfa077adf core: contextComponentFactory doesn't need to enumerate components
In earlier Terraform versions we used the set of all available plugins of
each type to make graph-building decisions, but in modern Terraform we
make those decisions based entirely on the configuration.

Consequently, we no longer need the methods which can enumerate all of the
known plugin components of a given type. Instead, we just try to
instantiate each of the plugins that the configuration refers to and then
handle the error when that fails, which typically means that the user
needs to run "terraform init" to install some new plugins.
2021-09-10 14:56:49 -07:00
Martin Atkins d51921f085 core: Provider transformers don't use the set of all available providers
In earlier incarnations of these transformers we used the set of all
available providers for tasks such as generating implied provider
configuration nodes.

However, in modern Terraform we can extract all of the information we need
from the configuration itself, and so these transformers weren't actually
using this set of provider addresses.

These also ended up getting left behind as sets of string rather than sets
of addrs.Provider in our earlier refactoring work, which didn't really
matter because the result wasn't used anywhere anyway.

Rather than updating these to use addrs.Provider instead, I've just
removed the unused arguments entirely in the hope of making it easier to
see what inputs these transformers use to make their decisions.
2021-09-10 14:56:49 -07:00
Martin Atkins feb219a9c4 core: Un-export the LoadSchemas function
The public interface for loading schemas is Context.Schemas, which can
take into account the context's records of which plugin versions and
checksums we're expecting. loadSchemas is an implementation detail of
that, representing the part we run only after we've verified all of the
plugins.
2021-09-10 14:56:49 -07:00
James Bardin 863963e7a6 de-linting 2021-09-01 11:36:21 -04:00
Martin Atkins 7803f69d42 core: Enable TestContext2Plan_movedResourceBasic
This is the first test exercising the basic functionality of config-driven
move. We previously had it skipped because Terraform's previous design
of treating all three of the state artifacts as mutable attributes of
terraform.Context meant that it was too late during planning to deal with
the move operations, and thus this test was failing.

Thanks to the previous commit, which changes the terraform.Context API
such that we can defer creating the three state artifacts until we're
already doing planning, this test now works and shows Terraform correctly
handling a resource that was formerly called "a" and is now called "b",
with a "moved" block recording that renaming.
2021-08-30 13:59:14 -07:00
Martin Atkins 89b05050ec core: Functional-style API for terraform.Context
Previously terraform.Context was built in an unfortunate way where all of
the data was provided up front in terraform.NewContext and then mutated
directly by subsequent operations. That made the data flow hard to follow,
commonly leading to bugs, and also meant that we were forced to take
various actions too early in terraform.NewContext, rather than waiting
until a more appropriate time during an operation.

This (enormous) commit changes terraform.Context so that its fields are
broadly just unchanging data about the execution context (current
workspace name, available plugins, etc) whereas the main data Terraform
works with arrives via individual method arguments and is returned in
return values.

Specifically, this means that terraform.Context no longer "has-a" config,
state, and "planned changes", instead holding on to those only temporarily
during an operation. The caller is responsible for propagating the outcome
of one step into the next step so that the data flow between operations is
actually visible.

However, since that's a change to the main entry points in the "terraform"
package, this commit also touches every file in the codebase which
interacted with those APIs. Most of the noise here is in updating tests
to take the same actions using the new API style, but this also affects
the main-code callers in the backends and in the command package.

My goal here was to refactor without changing observable behavior, but in
practice there are a couple externally-visible behavior variations here
that seemed okay in service of the broader goal:
 - The "terraform graph" command is no longer hooked directly into the
   core graph builders, because that's no longer part of the public API.
   However, I did include a couple new Context functions whose contract
   is to produce a UI-oriented graph, and _for now_ those continue to
   return the physical graph we use for those operations. There's no
   exported API for generating the "validate" and "eval" graphs, because
   neither is particularly interesting in its own right, and so
   "terraform graph" no longer supports those graph types.
 - terraform.NewContext no longer has the responsibility for collecting
   all of the provider schemas up front. Instead, we wait until we need
   them. However, that means that some of our error messages now have a
   slightly different shape due to unwinding through a differently-shaped
   call stack. As of this commit we also end up reloading the schemas
   multiple times in some cases, which is functionally acceptable but
   likely represents a performance regression. I intend to rework this to
   use caching, but I'm saving that for a later commit because this one is
   big enough already.

The proximal reason for this change is to resolve the chicken/egg problem
whereby there was previously no single point where we could apply "moved"
statements to the previous run state before creating a plan. With this
change in place, we can now do that as part of Context.Plan, prior to
forking the input state into the three separate state artifacts we use
during planning.

However, this is at least the third project in a row where the previous
API design led to piling more functionality into terraform.NewContext and
then working around the incorrect order of operations that produces, so
I intend that by paying the cost/risk of this large diff now we can in
turn reduce the cost/risk of future projects that relate to our main
workflow actions.
2021-08-30 13:59:14 -07:00
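
In outline, the API change looks like this; signatures are illustrative of the new style rather than exact:

```go
import (
	"github.com/hashicorp/terraform/internal/configs"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states"
	"github.com/hashicorp/terraform/internal/terraform"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// Old style: config and state passed to NewContext and mutated in place:
//
//	ctx := terraform.NewContext(&terraform.ContextOpts{Config: cfg, State: state, ...})
//	plan, diags := ctx.Plan() // reads and mutates fields on ctx
//
// New style: Context holds only unchanging execution context, and the
// caller threads each step's result into the next step explicitly.
func planAndApply(tfCtx *terraform.Context, cfg *configs.Config, prevRunState *states.State) (*states.State, tfdiags.Diagnostics) {
	plan, diags := tfCtx.Plan(cfg, prevRunState, &terraform.PlanOpts{Mode: plans.NormalMode})
	if diags.HasErrors() {
		return nil, diags
	}
	newState, applyDiags := tfCtx.Apply(plan, cfg)
	return newState, diags.Append(applyDiags)
}
```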
Martin Atkins 4faac6ee43 core: Record move result information in the plan
Here we wire through the "move results" into the graph walk data
structures so that all of the nodes which produce
plans.ResourceInstanceChange values can capture the "PrevRunAddr" for
each resource instance.

This doesn't actually quite work yet, because the logic in Context.Plan
isn't actually correct and so the updated state from
refactoring.ApplyMoves isn't actually visible as the "previous run state".
For that reason, the context test in this commit is currently skipped,
with the intent of re-enabling it once the updated state is properly
propagating into the plan graph walk and thus we can actually react to
the result of the move while choosing actions for those addresses.
2021-08-30 13:59:14 -07:00
Martin Atkins 22b36d1f4c Field for the previous address of each resource instance in the plan
In order to expose the effect of any relevant "moved" statements we dealt
with prior to creating the plan, we'll record with each
ResourceInstanceChange both its current address and the address it was
tracked at for the previous run.

To save consumers of these objects from having to special-case the
situation where there _was_ no previous run (e.g. because this is a Create
change), we'll just pretend the previous run address was the same as the
current address in that case, the same as for an update without any
renaming in effect.

This includes a breaking change to the plan file format, but one that
doesn't require a version number increment because there is no ambiguity
between the two formats and so mismatched parsers will already fail with
an error message.

As of this commit we've just added the new field but not yet populated it
with any useful information: it always just matches Addr. A future commit
will wire this up to the result of applying the moves so that we can
populate it correctly. We also don't yet expose this new information
anywhere in the UI layer.
2021-08-30 13:59:14 -07:00
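
The new field, sketched in the shape of plans.ResourceInstanceChange with the other fields elided:

```go
import "github.com/hashicorp/terraform/internal/addrs"

type ResourceInstanceChange struct {
	// Addr is the absolute address of the resource instance the change
	// applies to, as of the current run.
	Addr addrs.AbsResourceInstance

	// PrevRunAddr is the address this instance was tracked at during the
	// previous run. It always has a value: when nothing moved, or when
	// the change is a Create, it simply matches Addr.
	PrevRunAddr addrs.AbsResourceInstance

	// ...Action, Before, After, etc. elided...
}
```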
Martin Atkins f76a3467dc core: Call into refactoring.ValidateMoves after creating a plan
As of this commit, refactoring.ValidateMoves doesn't actually do anything
yet (always returns nil) but the goal here is to wire in the set of all
declared instances so that refactoring.ValidateMoves will then have all
of the information it needs to encapsulate our validation rules.

The actual implementation of refactoring.ValidateMoves will follow in
subsequent commits.
2021-07-28 13:54:10 -07:00
James Bardin 07ee689eff update comment and extend test 2021-07-20 16:09:46 -04:00
James Bardin 66a5950acb ignored value in this test should not be marked 2021-07-19 16:42:26 -04:00
James Bardin e574f73f61 test with marked, unknown, planned values 2021-07-19 16:42:26 -04:00
James Bardin ea077cbd90 handle marks within ignore_changes
Up until now marks were not considered by `ignore_changes`; that, however,
means changes to sensitivity within a configuration cannot be ignored, even
though they are planned as changes.

Rather than separating the marks and tracking their paths, we can easily
update the processIgnoreChanges routine to handle the marked values
directly. Moving the `processIgnoreChanges` call also cleans up some of
the variable naming, making it more consistent through the body of the
function.
2021-07-19 16:42:26 -04:00
Martin Atkins 3e5bfa7364 refactoring: Stubbing of the logic for handling moves
This is a whole lot of nothing right now, just stubbing out some control
flow that ultimately just leads to TODOs that cause it to do nothing at
all.

My intent here is to get this cross-cutting skeleton in place and thus
make it easier for us to collaborate on adding the meat to it, so that
it's more likely we can work on different parts separately and still get
a result that tessellates.
2021-07-14 17:37:48 -07:00
Martin Atkins ab350289ab addrs: Rename AbsModuleCallOutput to ModuleCallInstanceOutput
The previous name didn't fit with the naming scheme for addrs types:
The "Abs" prefix typically means that it's an addrs.ModuleInstance
combined with whatever type name appears after "Abs", but this is instead
a ModuleCallOutput combined with an InstanceKey, albeit structured the
other way around for convenience, and so the expected name for this would
be the suffix "Instance".

We don't have an "Abs" type corresponding with this one because it would
represent no additional information than AbsOutputValue.
2021-07-01 08:28:02 -07:00
James Bardin c687ebeaf1
Merge pull request #29039 from hashicorp/jbardin/sensitive
New marks.Sensitive type, and audit of sensitive marks usage
2021-06-25 17:11:59 -04:00
James Bardin 55ebb2708c remove IsMarked and ContainsMarked calls
Make sure sensitivity checks are looking for specific marks rather than
any marks at all.
2021-06-25 14:17:06 -04:00
James Bardin d9dfd451ea update to use typed sensitive marks 2021-06-25 12:49:07 -04:00
James Bardin 2c493e38c7 marks package
marks.Sensitive
2021-06-25 12:35:51 -04:00
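
Usage in a minimal sketch (the redaction helper is hypothetical); the point is to test for marks.Sensitive specifically rather than for the presence of any mark:

```go
import (
	"github.com/hashicorp/terraform/internal/lang/marks"
	"github.com/zclconf/go-cty/cty"
)

// redactIfSensitive shows the intended check: HasMark with the typed
// mark, where IsMarked/ContainsMarked would also trip on any unrelated
// mark introduced in the future.
func redactIfSensitive(v cty.Value) string {
	if v.HasMark(marks.Sensitive) {
		return "(sensitive value)"
	}
	return v.GoString()
}

// Marking a value uses the same typed mark: v.Mark(marks.Sensitive).
```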
Kristin Laemmert 096010600d
terraform: use hcl.MergeBodies instead of configs.MergeBodies for pro… (#29000)
* terraform: use hcl.MergeBodies instead of configs.MergeBodies for provider configuration

Previously, Terraform would return an error when the user supplied provider configuration via interactive input if the configuration provided on the command line was missing any required attributes - even if those attributes were already set in config.

That error came from configs.MergeBody, which was designed for overriding valid configuration. It expects that the first ("base") body has all required attributes. However in the case of interactive input for provider configuration, it is perfectly valid if either or both bodies are missing required attributes, as long as the final body has all required attributes. hcl.MergeBodies works very similarly to configs.MergeBodies, with a key difference being that it only checks that all required attributes are present after the two bodies are merged.

I've updated the existing test to use interactive input vars and a schema with all required attributes. This test failed before switching from configs.MergeBodies to hcl.MergeBodies.

* add a command package test that shows that we can still have providers with dynamic configuration + required + interactive input merging

This test failed when buildProviderConfig still used configs.MergeBodies instead of hcl.MergeBodies
2021-06-25 08:48:47 -04:00
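
The key call, in a minimal sketch of the merge described above (the function name is illustrative of buildProviderConfig's role):

```go
import "github.com/hashicorp/hcl/v2"

// mergedProviderConfig combines the written provider configuration with
// the body built from interactive input. hcl.MergeBodies checks required
// attributes only against the merged result, so either body on its own
// may legitimately be incomplete.
func mergedProviderConfig(configBody, inputBody hcl.Body) hcl.Body {
	return hcl.MergeBodies([]hcl.Body{configBody, inputBody})
}
```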
Martin Atkins e5f52e56f8 addrs: Rename DefaultRegistryHost to DefaultProviderRegistryHost
As the comment notes, this hostname is the default for provider source
addresses. We'll shortly be adding some address types to represent module
source addresses too, and so we'll also have DefaultModuleRegistryHost
for that situation.

(They'll actually both contain the same hostname, but that's a
coincidence rather than a requirement.)
2021-06-03 08:50:34 -07:00
James Bardin 99d0266585 return error for invalid resource import
Most legacy provider resources do not implement any import functionality
other than returning an empty object with the given ID, relying on core
to later read that resource and obtain the complete state. Because of
this, we need to check the response from ReadResource for a null value,
and use that as an indication that the import ID was invalid.
2021-05-25 17:13:49 -04:00
James Bardin 95e788f13b deposed instances should not be counted as orphans
Do not add orphan nodes for resources that only contain deposed
instances.
2021-05-20 09:36:45 -04:00
James Bardin aaaeeffa81 a noop change with no state has no private data
There is no change here that would use the private data, and currentState
may be nil.
2021-05-20 09:34:25 -04:00
James Bardin bafa44b958 return diagnostics from provisioners
Do not convert provisioner diagnostics to errors so that users can get
context from provisioner failures.

Return diagnostics from the builtin provisioners that can be annotated
with configuration context and instance addresses.
2021-05-19 11:24:54 -04:00
Martin Atkins 36d0a50427 Move terraform/ to internal/terraform/
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.

If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've make a plan to achieve the same functionality some
other way.
2021-05-17 14:09:07 -07:00