The destroy plan should not require a configured provider (the complete
configuration is not evaluated, so providers cannot be configured).
Deposed instances were being refreshed during the destroy plan, because
this instance type is only ever destroyed and shares the same
implementation between plan and walkPlanDestroy. Skip refreshing during
walkPlanDestroy.
Have the MockProvider ensure that Configure is always called before any
methods that may require a configured provider.
Ensure the MockProvider *Called fields are zeroed out when re-using the
provider instance.
We have various mechanisms that aim to ensure that the installed provider
plugins are consistent with the lock file and that the lock file is
consistent with the provider requirements, and we do have existing unit
tests for them, but all of those cases mock or fake out at least part of
the process and in the past that's caused us to miss usability
regressions, where we still catch the error but do so at the wrong layer
and thus generate error messages lacking useful additional context.
Here we'll add some new end-to-end tests to supplement the existing unit
tests, making sure things work as expected when we assemble the system
together as we would in a release. These tests cover a number of different
ways in which the plugin selections can grow inconsistent.
These new tests all run only when we're in a context where we're allowed
to access the network, because they exercise the real plugin installer
codepath. We could technically build this to use a local filesystem mirror
or other such override to avoid that, but the point here is to make sure
we see the expected behavior in the main case, and so it's worth the
small additional cost of downloading the null provider from the real
registry.
In the original incarnation of Meta.providerFactories we were returning its
result into a Meta.contextOpts whose signature didn't allow it to return an
error directly, and so we had compromised by making the provider factory
functions themselves return errors once called.
Subsequent work made Meta.contextOpts need to return an error anyway, but
at the time we neglected to update our handling of the providerFactories
result, having it still defer the error handling until we finally
instantiate a provider.
Although that did ultimately get the expected result anyway, the error
ended up being reported from deep in the guts of a Terraform Core graph
walk, in whichever concurrently-visited graph node happened to try to
instantiate the plugin first. This meant that the exact phrasing of the
error message would vary between runs and the reporting codepath didn't
have enough context to give an actionable suggestion on how to proceed.
In this commit we make Meta.contextOpts pass through directly any error
that Meta.providerFactories produces, and then make Meta.providerFactories
produce a special error type so that Meta.Backend can ultimately return
a user-friendly diagnostic message containing a specific suggestion to
run "terraform init", along with a short explanation of what a provider
plugin is.
The reliance here on an implied contract between two functions that are
not directly connected in the callstack is non-ideal, and so hopefully
we'll revisit this further in future work on the overall architecture of
the CLI layer. To try to make this robust in the meantime though, I wrote
it to use the errors.As function to potentially unwrap a wrapped version
of our special error type, in case one of the intervening layers is
changed at some point to wrap the downstream error before returning it.
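As a rough illustration of that unwrapping approach (the error type name here
is hypothetical, since the real type in the command package isn't spelled out
in this message), the pattern looks something like this:

```go
package main

import (
	"errors"
	"fmt"
)

// pluginInitError is a hypothetical stand-in for the special error type
// returned by Meta.providerFactories; the real name and fields may differ.
type pluginInitError struct{ missing []string }

func (e *pluginInitError) Error() string {
	return fmt.Sprintf("missing provider plugins: %v", e.missing)
}

// diagnoseContextOptsError sketches what Meta.Backend can do with an error
// passed through from Meta.contextOpts.
func diagnoseContextOptsError(err error) string {
	var initErr *pluginInitError
	// errors.As still matches even if an intervening layer wrapped the error
	// with fmt.Errorf("...: %w", err) before returning it.
	if errors.As(err, &initErr) {
		return `Provider plugins are missing or inconsistent; run "terraform init" to install them.`
	}
	return err.Error()
}

func main() {
	err := fmt.Errorf("preparing context: %w", &pluginInitError{missing: []string{"hashicorp/null"}})
	fmt.Println(diagnoseContextOptsError(err))
}
```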
The codepath for AllAttributesNull was not correct for nested object types
containing collections: it should create a single null value of the correct
NestingMode rather than a single object with null attributes.
Since there is no reason to descend into nested object types to create
null values, we can drop the AllAttributesNull function altogether and
create null values as needed during ProposedNew.
The corresponding AllBlockAttributesNull was only called internally in 1
location, and simply delegated to schema.EmptyValue. We can reduce the
package surface area by dropping that function too and calling
EmptyValue directly.
In historical versions of Terraform the responsibility to check this was
inside the terraform.NewContext function, along with various other
assorted concerns that made that function particularly complicated.
More recently, we reduced the responsibility of the "terraform" package
only to instantiating particular named plugins, assuming that its caller
is responsible for selecting appropriate versions of any providers that
_are_ external. However, until this commit we were just assuming that
"terraform init" had correctly selected appropriate plugins and recorded
them in the lock file, and so nothing was dealing with the problem of
ensuring that there haven't been any changes to the lock file or config
since the most recent "terraform init" which would cause us to need to
re-evaluate those decisions.
Part of the game here is to slightly extend the role of the dependency
locks object to also carry information about a subset of provider
addresses whose lock entries we're intentionally disregarding as part of
the various little edge-case features we have for overriding providers:
dev_overrides, "unmanaged providers", and the testing overrides in our
own unit tests. This is an in-memory-only annotation, never included in
the serialized plan files on disk.
I had originally intended to create a new package to encapsulate all of
this plugin-selection logic, including both the version constraint
checking here and also the handling of the provider factory functions, but
as an interim step I've just made version constraint consistency checks
the responsibility of the backend/local package, which means that we'll
always catch problems as part of preparing for local operations, while
not imposing these additional checks on commands that _don't_ run local
operations, such as "terraform apply" when in remote operations mode.
We recently removed the legacy way we used to track the SHA256 hashes of
individual provider executables as part of a plans.Plan, because these
days we want to track the checksums of entire provider packages rather
than just the executable.
In order to achieve that new goal, we can save a copy of the dependency
lock information inside the plan file. This follows our existing precedent
of using exactly the same serialization formats we'd normally use for
this information, and thus we can reuse the existing models and
serializers and be confident we won't lose any detail in the round-trip.
As of this commit there's not yet anything actually making use of this
mechanism. In a subsequent commit we'll teach the main callers that write
and read plan files to include and expect (respectively) dependency
information, verifying that the available providers still match by the
time we're applying the plan.
Previously the planfile.Create function had already accumulated arguably too
many positional arguments, and I'm intending to add another one in
a subsequent commit and so this is preparation to make the callsites more
readable (subjectively) and make it clearer how we can extend this
function's arguments to include further components in a plan file.
There's no difference in observable functionality here. This is just
passing the same set of arguments in a slightly different way.
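The commit message doesn't spell out the new calling convention, but one way
to read "passing the same set of arguments in a slightly different way" is to
group the components into a single arguments struct. The field names below
are illustrative only:

```go
package planfile // illustrative sketch only, not the real file

// CreateArgs is a hypothetical grouped-arguments shape: callsites read as
// named fields rather than a long positional list, and new components (such
// as the dependency locks mentioned in the previous commit) can be added
// without changing every caller.
type CreateArgs struct {
	ConfigSnapshot       interface{} // snapshot of the configuration files
	PreviousRunStateFile interface{} // state as recorded at the end of the previous run
	StateFile            interface{} // state the plan was built from
	Plan                 interface{} // the plan itself
}

// Create would then take the struct in place of the old positional arguments.
func Create(filename string, args CreateArgs) error {
	// ...write each component into the plan file archive...
	return nil
}
```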
Historically the responsibility for making sure that all of the available
providers are of suitable versions and match the appropriate checksums has
been split rather inexplicably over multiple different layers, with some
of the checks happening as late as creating a terraform.Context.
We're gradually iterating towards making that all be handled in one place,
but in this step we're just cleaning up some old remnants from the
main "terraform" package, which is now no longer responsible for any
version or checksum verification and instead just assumes it's been
provided with suitable factory functions by its caller.
We do still have a pre-check here to make sure that we at least have a
factory function for each plugin the configuration seems to depend on,
because if we don't do that up front then it ends up getting caught
instead deep inside the Terraform runtime, often inside a concurrent
graph walk and thus it's not deterministic which codepath will happen to
catch it on a particular run.
As of this commit, this actually does leave some holes in our checks: the
command package is using the dependency lock file to make sure we have
exactly the provider packages we expect (exact versions and checksums),
which is the most crucial part, but we don't yet have any spot where
we make sure that the lock file is consistent with the current
configuration, and we are no longer preserving the provider checksums as
part of a saved plan.
Both of those will come in subsequent commits. While it's unusual to have
a series of commits that briefly subtracts functionality and then adds
back in equivalent functionality later, the lock file checking is the only
part that's crucial for security reasons, with everything else mainly just
being to give better feedback when folks seem to be using Terraform
incorrectly. The other bits are therefore mostly cosmetic and okay to be
absent briefly as we work towards a better design that is clearer about
where that responsibility belongs.
Only use depends_on ancestors for transitive dependencies when we're not
pointed directly at a resource. We can't be much more precise here,
since in order to maintain our guarantee that data sources will wait for
explicit dependencies, if those dependencies happen to be a module,
output, or variable, we have to find some upstream managed resource in
order to check for a planned change.
We must ensure that the terraform required_version is checked as early
as possible, so that new configuration constructs don't cause init to
fail without indicating the version is incompatible.
The loadConfig call before the earlyconfig parsing seems to be unneeded,
and we can delay that to de-tangle it from installing the modules which
may have their own constraints.
TODO: it seems that loadConfig should be able to handle returning the
version constraints in the same manner as loadSingleModule.
Our current implementation of destroy planning includes secretly running a
normal plan first, in order to get its effect of refreshing the state.
Previously our warning about colliding moves would betray that
implementation detail because we'd return it from both of our planning
operations here and thus show the message twice. That would also have
happened in theory for any other warnings emitted by both plan operations,
but it's the move collision warning that made it immediately visible.
We'll now only return warnings from the initial plan if we're also
returning errors from that plan, and thus the warnings of both plans can
never mix together into the same diags and thus we'll avoid duplicating
any warnings.
This does mean that we'd lose any warnings which might hypothetically
emerge only from the hidden normal plan and not from the subsequent
destroy plan, but we'll accept that as an okay tradeoff here because those
warnings are likely to not be super relevant to the destroy case anyway,
or else we'd emit them from the destroy-plan walk too.
The extra feedback information for why resource instance deletion is
planned is now included in the streaming JSON UI output.
We also add an explicit case for no-op actions to switch statements in
this package to ensure exhaustiveness, for future linting.
Minor version increase to deprecate min_items and max_items in nested
types.
Nested types have MinItems and MaxItems fields that were inherited from
the block implementation, but were never validated by Terraform, and are
not supported by the HCL decoder validations. Mark these fields as
deprecated, indicating that the SDK should handle the required
validation.
The previous conservative guarantee that we would not make backwards
incompatible changes to the state file format until at least Terraform
1.1 can now be extended. Terraform 0.14 through 1.1 will be able to
interoperably use state files, so we can update the remote backend
version compatibility check accordingly.
Because our validation rules depend on some dynamic results produced by
actually running the plan, we deal with moves in a "backwards" order where
we try to apply them first -- ignoring anything strange we might find --
and then validate the original statements only after planning.
An unfortunate consequence of that approach is that when the move
statements are invalid it's likely that move execution will not fully
complete, and so the generated plan is likely to be incorrect and might
well include errors resulting from the unresolved moves.
To mitigate that, here we let any move validation errors supersede all
other diagnostics that the plan phase might've generated, in the hope that
it'll help the user focus on fixing the incorrect move statements without
creating confusion by reporting errors that only appeared as a quirk of
how Terraform worked around the invalid move statements earlier.
In most cases Terraform will be able to automatically fully resolve all
of the pending move statements before creating a plan, but there are some
edge cases where we can end up wanting to move one object to a location
where another object is already declared.
One relatively-obvious example is if someone uses "terraform state mv" in
order to create a set of resource instance bindings that could never have
arisen in normal Terraform use.
A less obvious example arises from the interactions between moves at
different levels of granularity. If we are both moving a module to a new
address and moving a resource into an instance of the new module at the
same time, the old module might well have already had a resource of the
same name and so the resource move will be unresolvable.
In these situations Terraform will move the objects as far as possible,
but because it's never valid for a move "from" address to still be
declared in the configuration Terraform will inevitably always plan to
destroy the objects that didn't find a final home. To give some additional
explanation for that result, here we'll add a warning which describes
what happened.
This is not a particularly actionable warning because we don't really
have enough information to guess what the user intended, but we do at
least prompt that they might be able to use the "terraform state" family
of subcommands to repair the ambiguous situation before planning, if they
want a different result than what Terraform proposed.
The core runtime is now able to specify a reason for some situations when
Terraform plans to delete a resource instance.
This commit makes that information visible in the human-oriented UI. A
previous commit already made the underlying data informing these new hints
visible as part of the machine-oriented (JSON) plan output.
This also removes the bold formatting from the existing "has moved to"
hints, because subjectively it seemed like the result was emphasizing too
many parts of the output and thus somewhat defeating the benefit of the
emphasis in trying to create additional visual hierarchy for sighted users
running Terraform in a terminal. Now only the first line containing the
main action statement will be in bold, and all of the parenthesized
follow-up notes will be unformatted.
There are a few different reasons why a resource instance tracked in the
prior state might be considered an "orphan", but previously we reported
them all identically in the planned changes.
In order to help users understand the reason for a surprising planned
delete, we'll now try to specify an additional reason for the planned
deletion, covering all of the main reasons why that could happen.
This commit only introduces the new detail to the plans.Changes result,
though it also incidentally exposes it as part of the JSON plan result
in order to keep that working without returning errors in these new
cases. We'll expose this information in the human-oriented UI output in
a subsequent commit.
Our previous rule for implicitly moving from IntKey(0) to NoKey would
apply that move even when the current resource configuration uses
for_each, because we were only considering whether "count" were set.
Previously this was relatively harmless because the resource instance in
question would end up planned for deletion anyway: neither an IntKey nor
a NoKey is a valid key for for_each.
Now that we're going to be announcing these moves explicitly in the UI,
it would be confusing to see Terraform report that IntKey moved to NoKey
in a situation where the config changed from count to for_each, so to
address that we'll only generate the implied statement if neither
repetition argument is set.
When planning in refresh-only mode, we must not remove orphaned
resources due to changed count or for_each values from the planned
state. This was previously happening because we failed to pass through
the plan's skip-plan-changes flag to the instance orphan node.
When initializing a backend, if the currently selected workspace does
not exist, the user is prompted to select from the list of workspaces
the backend provides.
Instead, we should automatically select the only workspace available
_if_ that's all that's there.
Along with being a nice bit of polish, this enables future
improvements with Terraform Cloud by potentially removing the implicit
dependency on always using the 'default' workspace when the current
configuration is mapped to a single TFC workspace.
We can also rule out some attribute types as indicating something other
than the legacy SDK.
- Tuple types were not generated at all.
- There were no single object types; the convention was to use a block
list or set of length 1.
- Maps of objects were not possible to generate, since named blocks were
not implemented.
- Nested collections were not supported, but when they were generated they
would have primitive types.
If structural types are being used, we can be assured that the legacy
SDK SchemaConfigModeAttr is not being used, and the fixup is not needed.
This prevents inadvertent mapping of blocks to structural attributes,
and allows us to skip the fixup overhead when possible.
When we originally stubbed ApplyMoves we didn't know yet how exactly we'd
be using the result, so we made it a double-indexed map allowing looking
up moves in both directions.
However, in practice we only actually need to look up old addresses by new
addresses, and so this commit first removes the double indexing so that
each move is only represented by one element in the map.
We also need to describe situations where a move was blocked, because in
a future commit we'll generate some warnings in those cases. Therefore
ApplyMoves now returns a MoveResults object which contains both a map of
changes and a map of blocks. The map of blocks isn't used yet as of this
commit, but we'll use it in a later commit to produce warnings within
the "terraform" package.
The whole point of UniqueKey is to deal with the fact that we have some
distinct address types which have an identical string representation, but
unfortunately that fact caused us to not notice that we'd incorrectly
made AbsResource.UniqueKey return a no-key instance UniqueKey instead of
its own distinct unique key type.
Remove answers from testInputResponse as they are given, and raise an
error during cleanup if any answers remain unused.
This enables tests to ensure that the expected mock answers are actually
used in a test; previously, an entire branch of code including an input
sequence could be omitted and the test(s) would not fail.
The only test that had unused answers in this map is one leftover from
legacy state migrations, a prompt that was removed in
7c93b2e5e6
Add previous address information to the `planned_change` and
`resource_drift` messages for the streaming JSON UI output of plan and
apply operations.
Here we also add a "move" action value to the `change` object of these
messages, to represent a move-only operation.
As part of this work we also simplify this code to use the plan's
DriftedResources values instead of recomputing the drift from state.
Configuration-driven moves are represented in the plan file by setting
the resource's `PrevRunAddr` to a different value than its `Addr`. For
JSON plan output, we here add a new field to resource changes,
`previous_address`, which is present and non-empty only if the resource
is planned to be moved.
Like the CLI UI, refresh-only plans will include move-only changes in
the resource drift JSON output. In normal plan mode, these are elided to
avoid redundancy with planned changes.
Going back a long time we've had a special magic behavior which tries to
recognize a situation where a module author either added or removed the
"count" argument from a resource that already has instances, and to
silently rename the zeroth or no-key instance so that we don't plan to
destroy and recreate the associated object.
Now we have a more general idea of "move statements", and specifically
the idea of "implied" move statements which replicates the same heuristic
we used to use for this behavior, we can treat this magic renaming rule as
just another "move statement", special only in that Terraform generates it
automatically rather than it being written out explicitly in the
configuration.
In return for wiring that in, we can now remove altogether the
NodeCountBoundary graph node type and its associated graph transformer,
CountBoundaryTransformer. We handle moves as a preprocessing step before
building the plan graph, so we no longer need to include any special nodes
in the graph to deal with that situation.
The test updates here are mainly for the graph builders themselves, to
acknowledge that indeed we're no longer inserting the NodeCountBoundary
vertices. The vertices that NodeCountBoundary previously depended on now
become dependencies of the special "root" vertex, although in many cases
here we don't see that explicitly because of the transitive reduction
algorithm, which notices when there's already an equivalent indirect
dependency chain and removes the redundant edge.
We already have plenty of test coverage for these "count boundary" cases
in the context tests whose names start with TestContext2Plan_count and
TestContext2Apply_resourceCount, all of which continued to pass here
without any modification and so are not visible in the diff. The test
functions particularly relevant to this situation are:
- TestContext2Plan_countIncreaseFromNotSet
- TestContext2Plan_countDecreaseToOne
- TestContext2Plan_countOneIndex
- TestContext2Apply_countDecreaseToOneCorrupted
The last of those in particular deals with the situation where we have
both a no-key instance _and_ a zero-key instance in the prior state, which
is interesting here because it exercises an intentional interaction
between refactoring.ImpliedMoveStatements and refactoring.ApplyMoves,
where we intentionally generate an implied move statement that produces
a collision and then expect ApplyMoves to deal with it in the same way as
it would deal with all other collisions, and thus ensure we handle both
the explicit and implied collisions in the same way.
This does affect some UI-level tests, because a nice side-effect of this
new treatment of this old feature is that we can now report explicitly
in the UI that we're assigning new addresses to these objects, whereas
before we just said nothing and hoped the user would just guess what had
happened and why they therefore weren't seeing a diff.
The backend/local plan tests actually had a pre-existing bug where they
were using a state with a different instance key than the config called
for but getting away with it because we'd previously silently fix it up.
That's still fixed up, but now done with an explicit mention in the UI
and so I made the state consistent with the configuration here so that the
tests would be able to recognize _real_ differences where present, as
opposed to the errant difference caused by that inconsistency.
Per our rule that the content of the state can never make a move statement
invalid, our behavior for two objects trying to occupy the same address
will be to just ignore that and let the object already at the address
take priority.
For the moment this is silent from an end-user perspective and appears
only in our internal logs. However, I'm hoping that our future planned
adjustment to the interface of this function will include some way to
allow reporting these collisions in some end-user-visible way, either as
a separate warning per collision or as a single warning that collects
together all of the collisions into a single message somehow.
This situation can arise both because the previous run state already
contained an object at the target address of a move and because more than
one move ends up trying to target the same location. In the latter case,
which one "wins" is decided by our depth-first traversal order, which is
in turn derived from our chaining and nesting rules and is therefore
arbitrary but deterministic.
This new function complements the existing function FindMoveStatements
by potentially generating additional "implied" move statements that aren't
written explicitly in the configuration but that we'll infer by comparing
the configuration and the previous run state.
The goal here is to infer only enough to replicate the effect of the
"count boundary fixup" graph node (terraform.NodeCountBoundary) that we
currently use to deal with this concern of preserving the zero-instance
when switching between "count" and not "count".
This is just dead code for now. A subsequent commit will introduce this
into the "terraform" package while also removing
terraform.NodeCountBoundary, thus achieving the same effect as before but
in a way that'll get reported in the UI as a move, using the same language
that we'd use for an explicit move statement.
This is similar to the existing SelectsModule method, returning true if
the receiver selects either a particular resource as a whole or any of the
instances of that resource.
We don't need this test in the normal case, but we will need it in a
subsequent commit when we'll be possibly generating _implied_ move
statements between instances of resources, but only if there aren't
explicit move statements mentioning those resources already.
The set of drifted resources now includes move-only changes, where the
object value is identical but a move has been executed. In normal
operation, we previously displayed these moves twice: once as part of
drift output, and once as part of planned changes.
As of this commit we omit move-only changes from drift display, except
for refresh-only plans. This fixes the redundant output.
Previously, drifted resources included only updates and deletes. To
correctly display the full changes which would result as part of a
refresh-only apply, the drifted resources must also include move-only
changes.
Colorizing the result of an interpolated string can result in
incorrect output, if the values used to generate the string happen to
include color codes such as `[red]` or `[bold]`. Instead we should
always colorize the format string before calling functions like
`Sprintf`. This commit fixes all instances in this file.
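As a minimal sketch of the rule (where colorize stands in for this package's
Colorize helper, which expands markup such as `[bold]` into escape codes):

```go
package main

import "fmt"

// renderNote demonstrates the safe ordering: colorize the constant format
// string first, then interpolate values into the already-colorized result.
func renderNote(colorize func(string) string, addr string) string {
	// Unsafe: colorize(fmt.Sprintf("[bold]%s has moved[reset]", addr)) would
	// also expand any markup-like text that happens to appear inside addr.
	return fmt.Sprintf(colorize("[bold]%s has moved[reset]"), addr)
}

func main() {
	plain := func(s string) string { return s } // stand-in colorizer for the example
	fmt.Println(renderNote(plain, `aws_instance.example["[red]"]`))
}
```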
Rather than delaying resource drift detection until it is ready to be
presented, here we perform that computation after the plan walk has
completed. The resulting drift is represented like planned resource
changes, using a slice of ResourceInstanceChangeSrc values.
Because "moved" blocks produce changes that span across more than one
resource instance address at the same time, we need to take extra care
with them during planning.
The -target option allows for restricting Terraform's attention only to
a subset of resources when planning, as an escape hatch to recover from
bugs and mistakes.
However, we need to avoid any situation where only one "side" of a move
would be considered in a particular plan, because that'd create a new
situation that would be otherwise unreachable and would be difficult to
recover from.
As a compromise then, we'll reject an attempt to create a targeted plan if
the plan involves resolving a pending move and if the source address of
that move is not included in the targets.
Our error message offers the user two possible resolutions: to create an
untargeted plan, thus allowing everything to resolve, or to add additional
-target options to include just the existing resource instances that have
pending moves to resolve.
This compromise recognizes that it is possible -- though hopefully rare --
that a user could potentially both be recovering from a bug or mistake at
the same time as processing a move, if e.g. the bug was fixed by upgrading
a module and the new version includes a new "moved" block. In that edge
case, it might be necessary to just add the one additional address to
the targets rather than removing the targets altogether, if creating a
normal untargeted plan is impossible due to whatever bug they're trying to
recover from.
When originally filling out these test cases we didn't yet have the logic
in place to detect chained moves and so this test couldn't succeed in
spite of being correct.
We now have chain-detection implemented and so consequently we can also
detect cyclic chains. This commit largely just enables the original test
unchanged, although it does include the text of the final error message
for reporting cyclic move chains which wasn't yet finalized when we were
stubbing out this test case originally.
The CoerceValue code was not updated to handle NestedTypes, and while
none of the new codepaths make use of this method, there are still some
internal uses.
The original intent of this test was to verify that we properly release
the state lock if terraform.NewContext fails. This was in response to a
bug in an earlier version of Terraform where that wasn't true.
In the recent refactoring that made terraform.NewContext no longer
responsible for provider constraint/checksum verification, this test began
testing a failed plan operation instead, which left the error return path
from terraform.NewContext untested.
An invalid parallelism value is the one remaining case where
terraform.NewContext can return an error, so as a localized fix for this
test I've switched it to just intentionally set an invalid parallelism
value. This is still not ideal because it's still testing an
implementation detail, but I've at least left a comment inline to try to
be clearer about what the goal is here so that we can respond in a more
appropriate way if future changes cause this test to fail again.
In the long run I'd like to move this last remaining check out to be the
responsibility of the CLI layer, with terraform.NewContext either just
assuming the value correct or panicking when it isn't, but the handling
of this CLI option is currently rather awkwardly spread across the
command and backend packages so we'll save that refactoring for a later
date.
The presence of this field was confusing because in practice the local
backend doesn't use it for anything and the remote backend was using it
only to return an error if it's set to anything other than the default,
under the assumption that it would always match ContextOpts.Parallelism.
The "command" package is the one actually responsible for handling this
option, and it does so by placing it into the partial ContextOpts which it
passes into the backend when preparing for a local operation. To make that
clearer, here we remove Operation.Parallelism and change the few uses of
it to refer to ContextOpts.Parallelism instead, so that everyone is
reading and writing this value from the same place.
In order to handle optional attributes, the Variable type needs to keep
track of the type constraint for decoding and conversion, as well as the
concrete type for creating values and type comparison.
Since the Type field is referenced throughout the codebase, and for
future refactoring if the handling of optional attributes changes
significantly, the constraint is now loaded into an entirely new field
called ConstraintType. This prevents types containing
ObjectWithOptionalAttrs from escaping the decode/conversion codepaths
into the rest of the codebase.
Objects with optional attributes are only used for the decoding of
HCL, and those types should never be exposed elsewhere within
terraform.
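In rough terms, the relevant part of the Variable struct now looks like this
(simplified; the real type in the configs package carries many more fields):

```go
package configs // simplified sketch, not the real file

import "github.com/zclconf/go-cty/cty"

type Variable struct {
	Name string

	// Type is the concrete type, used for creating values and for type
	// comparison throughout the rest of the codebase.
	Type cty.Type

	// ConstraintType retains ObjectWithOptionalAttrs information and is
	// consulted only on the decode and conversion codepaths.
	ConstraintType cty.Type

	// ...many other fields omitted...
}
```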
Separate the external ImpliedType method from the cty.Type generated
internally for the decoder spec. This unfortunately causes our
ImpliedType method to return a different type than the
hcldec.ImpliedType function, but the former is only used within
terraform for concrete values, while the latter is used to decode
HCL. Renaming the ImpliedType methods could be done to further
differentiate them, but that would cause a fairly large diff in the
codebase which does not seem worth the effort at this time.
Thus far our various interactions with the bits of state we keep
associated with a working directory have all been implemented directly
inside the "command" package -- often in the huge command.Meta type -- and
not managed collectively via a single component.
There's too many little codepaths reading and writing from the working
directory and data directory to refactor it all in one step, but this is
an attempt at a first step towards a future where everything that reads
and writes from the current working directory would do so via an object
that encapsulates the implementation details and offers a high-level API
to read and write all of these session-persistent settings.
The design here continues our gradual path towards using a dependency
injection style where "package main" is solely responsible for directly
interacting with the OS command line, the OS environment, the OS working
directory, the stdio streams, and the CLI configuration, and then
communicating the resulting information to the rest of Terraform by wiring
together objects. It seems likely that eventually we'll have enough wiring
code in package main to justify a more explicit organization of that code,
but for this commit the new "workdir.Dir" object is just wired directly in
place of its predecessors, without any significant change of code
organization at that top layer.
This first commit focuses on the main files and directories we use to
find provider plugins, because a subsequent commit will lightly reorganize
the separation of concerns for plugin launching with a similar goal of
collecting all of the relevant logic together into one spot.
Previously our graph walker expected to receive a data structure
containing schemas for all of the provider and provisioner plugins used in
the configuration and state. That made sense back when
terraform.NewContext was responsible for loading all of the schemas before
taking any other action, but it no longer has that responsibility.
Instead, we'll now make sure that the "contextPlugins" object reaches all
of the locations where we need schema -- many of which already had access
to that object anyway -- and then load the needed schemas just in time.
The contextPlugins object memoizes schema lookups, so we can safely call
it many times with the same provider address or provisioner type name and
know that it'll still only load each distinct plugin once per Context
object.
As of this commit, the Context.Schemas method is now a public interface
only and not used by logic in the "terraform" package at all. However,
that does leave us in a rather tenuous situation of relying on the fact
that all practical users of terraform.Context end up calling "Schemas" at
some point in order to verify that we have all of the expected versions
of plugins. That's a non-obvious implicit dependency, and so in subsequent
commits we'll gradually move all responsibility for verifying plugin
versions into the caller of terraform.NewContext, which'll heal a
long-standing architectural wart whereby the caller is responsible for
installing and locating the plugin executables but not for verifying that
what's installed is conforming to the current configuration and dependency
lock file.
Previously the graph builders all expected to be given a full manifest
of all of the plugin component schemas that they could need during their
analysis work. That made sense when terraform.NewContext would always
proactively load all of the schemas before doing any other work, but we
now have a load-as-needed strategy for schemas.
We'll now have the graph builders use the contextPlugins object they each
already hold to retrieve individual schemas when needed. This avoids the
need to prepare a redundant data structure to pass alongside the
contextPlugins object, and leans on the memoization behavior inside
contextPlugins to preserve the old behavior of loading each provider's
schema only once.
By tolerating ProviderSchema and ProvisionerSchema potentially returning
errors, we can slightly simplify EvalContextBuiltin by having it retrieve
individual schemas when needed directly from the "Plugins" object.
EvalContextBuiltin already needs to be holding a contextPlugins instance
for other reasons anyway, so this allows us to get the same result with
fewer moving parts.
The responsibility for actually instantiating a single plugin and reading
out its schema now belongs to the contextPlugins type, which memoizes the
results by each plugin's unique identifier so that we can avoid retrieving
the same schemas multiple times when working with the same context.
This doesn't change the API of Context.Schemas but it does restore the
spirit of an earlier version of terraform.Context which did all of the
schema loading proactively inside terraform.NewContext. In an earlier
commit we reduced the scope of terraform.NewContext, making schema loading
a separate step, but in the process of doing that removed the effective
memoization of the schema results that terraform.NewContext was providing.
The memoization here will play out in a different way than before, because
we'll be treating each plugin call as separate rather than proactively
loading them all up front, but this is effectively the same because all
of our operation methods on Context call Context.Schemas early in their
work and thus end up forcing all of the necessary schemas to load up
front nonetheless.
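A minimal sketch of that memoization, with simplified types (the real
contextPlugins type keys by addrs.Provider, also covers provisioners, and
returns full schema objects rather than strings):

```go
package terraform // illustrative sketch only, not the real file

import "sync"

// schemaCache loads each plugin's schema at most once per Context.
type schemaCache struct {
	mu      sync.Mutex
	load    func(provider string) (string, error) // stand-in for launching the plugin and reading its schema
	schemas map[string]string                     // stand-in for the real schema type
}

func (c *schemaCache) ProviderSchema(provider string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if s, ok := c.schemas[provider]; ok {
		return s, nil // already loaded once for this Context
	}
	s, err := c.load(provider)
	if err != nil {
		return "", err
	}
	c.schemas[provider] = s
	return s, nil
}
```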
In the v0.12 timeframe we made contextComponentFactory an interface with
the expectation that we'd write mocks of it for tests, but in practice we
ended up just always using the same "basicComponentFactory" implementation
throughout.
In the interests of simplification then, here we replace that interface
and its sole implementation with a new concrete struct type
contextPlugins.
Along with the general benefit that this removes an unneeded indirection,
this also means that we can add additional methods to the struct type
without the usual restriction that interface types prefer to be small.
In particular, in a future commit I'm planning to add methods for loading
provider and provisioner schemas, working with the currently-unused new
fields this commit has included in contextPlugins, as compared to its
predecessor basicComponentFactory.
In earlier Terraform versions we used the set of all available plugins of
each type to make graph-building decisions, but in modern Terraform we
make those decisions based entirely on the configuration.
Consequently, we no longer need the methods which can enumerate all of the
known plugin components of a given type. Instead, we just try to
instantiate each of the plugins that the configuration refers to and then
handle the error when that fails, which typically means that the user
needs to run "terraform init" to install some new plugins.
In earlier incarnations of these transformers we used the set of all
available providers for tasks such as generating implied provider
configuration nodes.
However, in modern Terraform we can extract all of the information we need
from the configuration itself, and so these transformers weren't actually
using this set of provider addresses.
These also ended up getting left behind as sets of string rather than sets
of addrs.Provider in our earlier refactoring work, which didn't really
matter because the result wasn't used anywhere anyway.
Rather than updating these to use addrs.Provider instead, I've just
removed the unused arguments entirely in the hope of making it easier to
see what inputs these transformers use to make their decisions.
The public interface for loading schemas is Context.Schemas, which can
take into account the context's records of which plugin versions and
checksums we're expecting. loadSchemas is an implementation detail of
that, representing the part we run only after we've verified all of the
plugins.
For resources which are planned to move, render the previous run address
as additional information in the plan UI. For the case of a move-only
resource (which otherwise is unchanged), we also render that as a
planned change, but without any corresponding action symbol.
If all changes in the plan are moves without changes, the plan is no
longer considered "empty". In this case, we skip rendering the action
symbols in the UI.
This package level variable can be overridden at link time to allow
temporarily disabling the UI warning when experimental features are
enabled. This makes it easier to understand how the UI will render when the
feature is no longer experimental.
This change is only for those developing Terraform.
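A sketch of the mechanism, with a hypothetical variable name and package path
(the real variable and its location may differ). Only string variables can be
overridden with the linker's -X flag, so the value is interpreted as a string:

```go
package experiments // illustrative package name only

// ShowExperimentWarnings is a hypothetical name for the variable described
// above. It can be overridden at link time while developing Terraform, e.g.:
//
//	go build -ldflags="-X '<package path>.ShowExperimentWarnings=no'"
//
// The UI would treat any value other than "yes" as disabling the warning.
var ShowExperimentWarnings = "yes"
```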
This is the first test exercising the basic functionality of config-driven
move. We previously had it skipped because Terraform's previous design
of treating all three of the state artifacts as mutable attributes of
terraform.Context meant that it was too late during planning to deal with
the move operations, and thus this test was failing.
Thanks to the previous commit, which changes the terraform.Context API
such that we can defer creating the three state artifacts until we're
already doing planning, this test now works and shows Terraform correctly
handling a resource that was formerly called "a" and is now called "b",
with a "moved" block recording that renaming.
Previously terraform.Context was built in an unfortunate way where all of
the data was provided up front in terraform.NewContext and then mutated
directly by subsequent operations. That made the data flow hard to follow,
commonly leading to bugs, and also meant that we were forced to take
various actions too early in terraform.NewContext, rather than waiting
until a more appropriate time during an operation.
This (enormous) commit changes terraform.Context so that its fields are
broadly just unchanging data about the execution context (current
workspace name, available plugins, etc) whereas the main data Terraform
works with arrives via individual method arguments and is returned in
return values.
Specifically, this means that terraform.Context no longer "has-a" config,
state, and "planned changes", instead holding on to those only temporarily
during an operation. The caller is responsible for propagating the outcome
of one step into the next step so that the data flow between operations is
actually visible.
However, since that's a change to the main entry points in the "terraform"
package, this commit also touches every file in the codebase which
interacted with those APIs. Most of the noise here is in updating tests
to take the same actions using the new API style, but this also affects
the main-code callers in the backends and in the command package.
My goal here was to refactor without changing observable behavior, but in
practice there are a couple externally-visible behavior variations here
that seemed okay in service of the broader goal:
- The "terraform graph" command is no longer hooked directly into the
core graph builders, because that's no longer part of the public API.
However, I did include a couple new Context functions whose contract
is to produce a UI-oriented graph, and _for now_ those continue to
return the physical graph we use for those operations. There's no
exported API for generating the "validate" and "eval" graphs, because
neither is particularly interesting in its own right, and so
"terraform graph" no longer supports those graph types.
- terraform.NewContext no longer has the responsibility for collecting
all of the provider schemas up front. Instead, we wait until we need
them. However, that means that some of our error messages now have a
slightly different shape due to unwinding through a differently-shaped
call stack. As of this commit we also end up reloading the schemas
multiple times in some cases, which is functionally acceptable but
likely represents a performance regression. I intend to rework this to
use caching, but I'm saving that for a later commit because this one is
big enough already.
The proximal reason for this change is to resolve the chicken/egg problem
whereby there was previously no single point where we could apply "moved"
statements to the previous run state before creating a plan. With this
change in place, we can now do that as part of Context.Plan, prior to
forking the input state into the three separate state artifacts we use
during planning.
However, this is at least the third project in a row where the previous
API design led to piling more functionality into terraform.NewContext and
then working around the incorrect order of operations that produces, so
I intend that by paying the cost/risk of this large diff now we can in
turn reduce the cost/risk of future projects that relate to our main
workflow actions.
Here we wire through the "move results" into the graph walk data
structures so that all of the nodes which produce
plans.ResourceInstanceChange values can capture the "PrevRunAddr" for
each resource instance.
This doesn't actually quite work yet, because the logic in Context.Plan
isn't actually correct and so the updated state from
refactoring.ApplyMoves isn't actually visible as the "previous run state".
For that reason, the context test in this commit is currently skipped,
with the intent of re-enabling it once the updated state is properly
propagating into the plan graph walk and thus we can actually react to
the result of the move while choosing actions for those addresses.
In order to expose the effect of any relevant "moved" statements we dealt
with prior to creating the plan, we'll record with each
ResourceInstanceChange both its current address and the address it was
tracked at for the previous run.
To save consumers of these objects from having to special-case the
situation where there _was_ no previous run (e.g. because this is a Create
change), we'll just pretend the previous run address was the same as the
current address in that case, the same as for an update without any
renaming in effect.
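In simplified terms the fallback behaves like this (string addresses stand in
for the real addrs types):

```go
package main

import "fmt"

// prevRunAddr returns the address an instance was tracked at during the
// previous run, falling back to the current address when there was no
// previous object (e.g. a Create change), so consumers never need a special
// case for "no previous run".
func prevRunAddr(current string, moves map[string]string) string {
	if prev, ok := moves[current]; ok {
		return prev // a "moved" statement renamed this instance
	}
	return current
}

func main() {
	moves := map[string]string{"aws_instance.b": "aws_instance.a"}
	fmt.Println(prevRunAddr("aws_instance.b", moves))   // aws_instance.a
	fmt.Println(prevRunAddr("aws_instance.new", moves)) // aws_instance.new (a Create)
}
```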
This includes a breaking change to the plan file format, but one that
doesn't require a version number increment because there is no ambiguity
between the two formats and so mismatched parsers will already fail with
an error message.
As of this commit we've just added the new field but not yet populated it
with any useful information: it always just matches Addr. A future commit
will wire this up to the result of applying the moves so that we can
populate it correctly. We also don't yet expose this new information
anywhere in the UI layer.
While blocks were not allowed to be computed by the provider, nested
objects can be. Remove the errors regarding blocks and verify unknown
values are valid.
We have a few different .proto files in this repository that all need to
get recompiled into .pb.go files each time we change them, but we were
previously handling that with some scripts that just assumed that protoc
and the relevant plugins were already installed on the system somewhere,
at the right versions.
In practice we've been constantly flopping between different versions of
these tools due to folks having different versions installed in their
development environments. In particular, the state of the .pb.go files
in the prior commit wasn't reproducible by any single version of the tools
because they've all slightly diverged from one another.
In the interests of being more consistent here and avoiding accidental
inconsistencies, we'll now centralize the protocol buffer compile steps
all into a single tool that knows how to fetch and install the expected
versions of the various tools we need and then run those tools with the
right options to get a stable result.
If we want to upgrade to either a newer protoc or a newer protoc-gen-go
in future then we'll do that in a central location and update all of the
.pb.go files at the same time, so that we're always consistently tracking
the same version of protocol buffers everywhere.
While doing this I attempted to keep as close as possible to the toolchain
we'd most recently used, but since the generated files were not consistent
with one another they've now all changed at least in which tool version
numbers they record, and the planproto stub in particular now also has a
slightly different descriptor serialization but is otherwise offering the
same API.
CanChainFrom needs to be able to handle move statements from different
relative modules, so re-implement it using addrs.anyKey.
Add the anyKey InstanceKey value to the addrs package to simplify module
path comparison. This allows all combinations of module path
representation to be normalized into a ModuleInstance which can be
compared directly, rather than dealing with multiple levels of different
prefix types.
The -force-copy flag to init should automatically migrate state.
Previously this was not applied to one case: when migrating from
a backend with multiple workspaces to another backend supporting
multiple workspaces. I believe this was an oversight so this commit
fixes that.
We're aware of several quirks of this command's current design, which
result from some existing architectural limitations that we can't address
immediately.
However, we do still want to make this command available in its current
capacity as an incremental improvement, so as a compromise we'll document
it as experimental. Our intent here is to exclude it from the
Terraform 1.0 Compatibility Promises so that we can have the space to
continue to improve the design as other parts of the overall Terraform
system gain new capabilities.
We don't currently have any concrete plan for this command to be
stabilized and subject to compatibility promises. That decision will
follow from ongoing discussions with other teams whose systems may need to
change in order to support the final design of "terraform add".
The code adopted from block diffs was not set up to handle null and unknown
values, as those are not allowed for blocks.
We also revert the change to formatting nested object types as single
attributes, because the attribute formatter cannot handle sensitive
values from the schema. This presents some awkward syntax for diffs for
now, but should suffice until the entire formatter can be refactored to
better handle these new nested types.
Go 1.17 includes a breaking change to both net.ParseIP and net.ParseCIDR
functions to reject IPv4 address octets written with leading zeros.
Our use of these functions as part of the various CIDR functions in the
Terraform language doesn't have the same security concerns that the Go
team had in evaluating this change to the standard library, and so we
can't justify an exception to our v1.0 compatibility promises on the same
sort of security grounds that the Go team used to justify their
compatibility exception.
For that reason, we'll now use our own fork of the Go library functions
which has the new check disabled in order to preserve the prior behavior.
We're taking this path, rather than pre-normalizing the IP address before
calling into the standard library, because an additional normalization
layer would be entirely new code and additional complexity, whereas this
fork is relatively minor in terms of code size and avoids any significant
changes to our own calls to these functions.
Thanks to the Kubernetes team for their prior work on carving out a subset
of the "net" package for their similar backward-compatibility concern.
Our "ipaddr" package here is a lightly-modified fork of their fork, with
only the comments changed to talk about Terraform instead of Kubernetes.
This fork is not intended for use in any other future feature
implementations, because they wouldn't be subject to the same
compatibility constraints as our existing functions. We will use these
forked implementations for new callers only if consistency with the
behavior of the existing functions is a key requirement.
This includes adding the new "//go:build" comment form alongside the
legacy "// +build" notation, as produced by gofmt to ensure
consistent behavior between Go versions. The new directives are all
equivalent to what was present before, so there's no change in behavior.
Go 1.17 continues to use the Unicode 13 tables as in Go 1.16, so this
upgrade does not require also upgrading our Unicode-related dependencies.
This upgrade includes the following breaking changes which will also
appear as breaking changes for Terraform users, but that are consistent
with the Terraform v1.0 compatibility promises.
- On macOS, Terraform now requires macOS 10.13 High Sierra or later.
This upgrade also includes the following breaking changes which will
appear as breaking changes for Terraform users that are inconsistent with
our compatibility promises, but have justified exceptions as follows:
- cidrsubnet, cidrhost, and cidrnetmask will now reject IPv4 CIDR
addresses whose decimal components have leading zeros, where previously
they would just silently ignore those leading zeros.
This is a security-motivated exception to our compatibility promises,
because some external systems interpret zero-prefixed octets as octal
numbers rather than decimal, and thus the previous lenient parsing could
lead to a different interpretation of the address between systems, and
thus potentially allow bypassing policy when configuring firewall rules
etc.
This upgrade also includes the following breaking changes which could
_potentially_ appear as breaking changes for Terraform users, but that do
not in practice for the reasons given:
- The Go net/url package no longer allows query strings with pairs
separated by semicolons instead of ampersands. This primarily affects
HTTP servers written in Go, and Terraform includes a special temporary
HTTP server as part of its implementation of OAuth for "terraform login",
but that server only needs to accept URLs created by Terraform itself
and Terraform does not generate any URLs that would be rejected.
Add implementations of CanChainFrom and NestedWithin for
MoveEndpointInModule.
CanChainFrom allows the linking of move statements of the same address,
which means the prior destination address must equal the following
source address. If the destination and source addresses are of different
types, they must be covered by NestedWithin rather than CanChainFrom.
NestedWithin checks if the destination contains the source address. Any
matching types would be covered by CanChainFrom.
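In simplified terms (string addresses in place of the real endpoint types),
the two relationships between a prior statement and a following one look
something like this:

```go
package main

import (
	"fmt"
	"strings"
)

// moveStatement is a simplified stand-in for a pair of MoveEndpointInModule
// addresses.
type moveStatement struct {
	From, To string
}

// canChainFrom: statement b chains from statement a when a's destination is
// exactly b's source, i.e. the same object keeps moving onward.
func canChainFrom(a, b moveStatement) bool {
	return a.To == b.From
}

// nestedWithin: statement b is nested within statement a when b's source is
// contained inside a's destination (for example, a resource move inside a
// module that was itself moved). A prefix check stands in for real address
// containment here.
func nestedWithin(a, b moveStatement) bool {
	return strings.HasPrefix(b.From, a.To+".")
}

func main() {
	a := moveStatement{From: "module.old", To: "module.new"}
	b := moveStatement{From: "module.new", To: "module.newer"}
	c := moveStatement{From: "module.new.aws_instance.x", To: "module.new.aws_instance.y"}
	fmt.Println(canChainFrom(a, b)) // true: b continues where a ended
	fmt.Println(nestedWithin(a, c)) // true: c moves something inside a's destination
}
```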
Extend the outputs JSON log message to support an `action` field (and
make the `type` and `value` fields optional). This allows us to emit a
useful output change summary as part of the plan, bringing the JSON log
output into parity with the text output.
While we do have access to the before/after values in the output
changes, attempting to wedge those into a structured log message is not
appropriate. That level of detail can be extracted from the JSON plan
output from `terraform show -json`.
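A rough sketch of the extended message shape (field names follow the
description above, but the real view types may differ):

```go
package jsonlog // illustrative sketch only

// outputChange mirrors the extended outputs log message: "action" is always
// present, while "type" and "value" are now optional and omitted from the
// plan-time change summaries.
type outputChange struct {
	Action string      `json:"action"`
	Type   string      `json:"type,omitempty"`
	Value  interface{} `json:"value,omitempty"`
}
```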
This is a first pass at implementing refactoring.ValidateMoves, covering
the main validation rules.
This is not yet complete. A couple situations not yet covered are
represented by commented test cases in TestValidateMoves, although that
isn't necessarily comprehensive. We'll do a further pass of filling this
out with any other subtleties before we ship this feature.
As of this commit, refactoring.ValidateMoves doesn't actually do anything
yet (always returns nil) but the goal here is to wire in the set of all
declared instances so that refactoring.ValidateMoves will then have all
of the information it needs to encapsulate our validation rules.
The actual implementation of refactoring.ValidateMoves will follow in
subsequent commits.
In order to precisely implement the validation rules for "moved"
statements we need to be able to test whether particular instances were
declared in the configuration.
The instance expander is the source of record for which instances we
decided while creating a plan, but its API is far more involved than what
our validation rules need, so this new AllInstances method returns a
wrapper object with a more straightforward API that provides read-only
access to just the question of whether particular instances got registered
in the expander already.
This API covers all three of the kinds of objects that move statements can
refer to. It includes module calls and resources, even though they aren't
_themselves_ "instances" in the sense we usually mean, because the module
instance addresses they are contained within _are_ instances and so we
need to take their dynamic instance keys into account when answering these
queries.
All of our MoveDestination methods have the common problem of deciding
whether the receiver is even potentially in the scope of a particular
MoveEndpointInModule, which requires that the receiver belong to an
instance of the module where the move statement was found.
Previously we had this logic inline in all three cases, but now we'll
factor it out into a shared helper function.
At first it seemed like there ought to be more factoring possible for
the AbsResource vs. AbsResourceInstance implementations, since textually
they look very similar. In practice they only look similar because those
two types have a lot of method names in common; the Go compiler sees them
as completely distinct types, and thus we must write the same logic out
twice. I did try some further refactoring to address that but it
made the resulting code significantly more complicated and, by my
judgement, harder to follow. Consequently I decided that a little
duplication was okay and warranted here because this logic is already
quite fiddly to read through and isn't likely to change significantly once
released (due to backward-compatibility promises).
Previously our MoveDestination methods only honored move statements whose
endpoints were module calls, module instances, or resources.
Now we'll additionally handle when the endpoints are individual resource
instances. This situation only applies to
AbsResourceInstance.MoveDestination because no other objects can be
contained inside of a resource instance.
This completes all of the MoveDestination cases for all supported move
statement types and moveable object types.
Previously our MoveDestination methods only honored move statements whose
endpoints were module calls or module instances.
Now we'll additionally handle when the endpoints are whole resource
addresses. This includes both renaming resource blocks and moving resource
blocks into or out of child modules.
This doesn't yet include endpoints that are specific resource _instances_,
which will follow in a subsequent commit. For the moment that situation
will always indicate a non-match.
This is a subset of the MoveDestination behavior for AbsResource and
AbsResourceInstance which deals with source and destination addresses that
refer to module calls or module instances.
They both work by delegating to ModuleInstance.MoveDestination and then
applying the same resource or resource instance address to the
newly-chosen module instance address, thus ensuring that when we move
a module we also move all of the resources inside that module in the same
way.
This doesn't yet include support for moving between specific resource or
resource instance addresses; that'll follow later. This commit should have
enough logic to support moving between module names and module instance
keys, including any module calls or resources nested within.
This method encapsulates the move-processing rules for applying move
statements to ModuleInstance addresses. It honors both module call moves
and module instance moves by trying to find a subsequence of the input
that matches the "from" endpoint and then, if found, replacing it with
the "to" endpoint while preserving the prefix and suffix around the match,
if any.
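A minimal, self-contained sketch of that match-and-splice idea follows. It
uses plain string slices rather than Terraform's actual addrs types, so the
names and representation are illustrative only:

```go
package main

import "fmt"

// moveDestination returns path with the first subsequence equal to from
// replaced by to, preserving any prefix and suffix. The second result
// reports whether a match was found at all.
func moveDestination(path, from, to []string) ([]string, bool) {
	for i := 0; i+len(from) <= len(path); i++ {
		match := true
		for j := range from {
			if path[i+j] != from[j] {
				match = false
				break
			}
		}
		if !match {
			continue
		}
		out := make([]string, 0, len(path)-len(from)+len(to))
		out = append(out, path[:i]...)           // preserved prefix
		out = append(out, to...)                 // replacement
		out = append(out, path[i+len(from):]...) // preserved suffix
		return out, true
	}
	return path, false
}

func main() {
	path := []string{`module.a`, `module.b["x"]`, `module.c`}
	got, ok := moveDestination(path, []string{`module.b["x"]`}, []string{`module.b["y"]`})
	fmt.Println(ok, got) // true [module.a module.b["y"] module.c]
}
```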
* configs/configschema: extend block.AttributeByPath to descend into Objects
This commit adds a recursive Object.AttributeByPath function which will step through Attributes with NestedTypes. If an Attribute without a NestedType is encountered while there is still more to the path, it will return nil.
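To illustrate the recursive descent described above, here is a simplified
sketch using stand-in types; the real configschema types and the path-based
signature differ in detail:

```go
package main

import "fmt"

type attribute struct {
	nested map[string]*attribute // non-nil when the attribute has a NestedType
}

type block struct {
	attributes map[string]*attribute
}

// attributeByPath walks the path, descending through nested object
// attributes. It returns nil if any step is missing, or if there is more
// path left but the current attribute has nothing to descend into.
func (b *block) attributeByPath(path []string) *attribute {
	if len(path) == 0 {
		return nil
	}
	attr, ok := b.attributes[path[0]]
	if !ok {
		return nil
	}
	for _, step := range path[1:] {
		if attr.nested == nil {
			return nil // more path remaining, but no NestedType to descend into
		}
		next, ok := attr.nested[step]
		if !ok {
			return nil
		}
		attr = next
	}
	return attr
}

func main() {
	b := &block{attributes: map[string]*attribute{
		"network": {nested: map[string]*attribute{"cidr": {}}},
	}}
	fmt.Println(b.attributeByPath([]string{"network", "cidr"}) != nil) // true
	fmt.Println(b.attributeByPath([]string{"network", "missing"}))     // <nil>
}
```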
The etcdv3 client has a default request send limit of 2.0 MiB. This change
exposes the configuration option to increase that limit, enabling larger
state when using the etcdv3 backend.
This also requires that the corresponding --max-request-bytes flag be
increased on the server side. The default there is 1.5 MiB.
Fixes https://github.com/hashicorp/terraform/issues/25745
etcd rewrote its import path from coreos/etcd to go.etcd.io/etcd.
This commit changes the import paths accordingly, which also updates the
code version.
This lets us remove the github.com/ugorji/go/codec dependency, which
was pinned to a fairly old version. The net change is a loss of 30,000
lines of code in the vendor directory. (I first noticed this problem
because the outdated go/codec dependency was causing a dependency
failure when I tried to put Terraform and another project in the same
vendor directory.)
Note the version shows up funkily in go.mod, but I verified
visually it's the same commit as the "release-3.4" tag in
github.com/coreos/etcd. The etcd team plans to fix the release version
tagging in v3.5, which should be released soon.
The current usage of internal remote state backends requires that
`StateMgr` be able to return an instance of `statemgr.Full` even if the
state is currently locked.
Up until now, marks were not considered by `ignore_changes`, which means
that changes to sensitivity within a configuration could not be ignored,
even though they are planned as changes.
Rather than separating the marks and tracking their paths, we can easily
update the processIgnoreChanges routine to handle the marked values
directly. Moving the `processIgnoreChanges` call also cleans up some of
the variable naming, making it more consistent through the body of the
function.
* configs/configschema: fix missing "computed" attributes from NestedObject's ImpliedType
listOptionalAttrsFromObject was not including "computed" attributes in the list of optional object attributes. This is now fixed. I've also added some tests and fixed some panics and otherwise bad behavior when bad input is given. One notable change is in ImpliedType, which was panicking on an invalid nesting mode. The comment expressly states that it will return a result even when the schema is inconsistent, so I removed the panic and instead return an empty object.
Update the version constraints for what providers will be downloaded
from the registry, allowing protocol 6 providers to be downloaded from
the registry.
This test would previously fail randomly due to the use of multiple
resource instances. Instance keys are iterated over as a map for
presentation, which has intentionally inconsistent ordering.
To fix this, I changed the test to use different resource addresses for
the three drift cases. I also extracted them to a separate test, and
tweaked the test helper functions to reduce the number of fatal exit
points, to make failed test debugging easier.
This is a whole lot of nothing right now, just stubbing out some control
flow that ultimately just leads to TODOs that cause it to do nothing at
all.
My intent here is to get this cross-cutting skeleton in place and thus
make it easier for us to collaborate on adding the meat to it, so that
it's more likely we can work on different parts separately and still get
a result that tessellates.
We previously built out addrs.UnifyMoveEndpoints with a different
implementation strategy in mind, but that design turns out to not be
viable because it forces us to move to AbsMoveable addresses too soon,
before we've done the analysis required to identify chained and nested
moves.
Instead, UnifyMoveEndpoints will return a new type MoveEndpointInModule
which conceptually represents a matching pattern which either matches or
doesn't match a particular AbsMoveable. It does this by just binding the
unified relative address from the MoveEndpoint to the module where it
was declared, and thus allows us to distinguish between the part of the
module path which applies to any instances of the given modules vs. the
user-specified part which must identify particular module instances.
Since these address types are not directly comparable themselves, we use
an unexported named type around the string representation, whereby the
special type can avoid any ambiguity between string representations of
different types and thus each type only needs to worry about possible
ambiguity of its _own_ string representation.
Many times now we've seen situations where we need to use addresses
as map keys, but not all of our address types are comparable and thus
we tend to end up using string representations as keys instead.
That's problematic because conversion to string uses type information
and some of the address types have string representations that are
ambiguous with one another.
UniqueKey therefore represents an opaque key that is unique for each
functionally-distinct address across all types that implement
UniqueKeyer.
For this initial commit I've implemented UniqueKeyer only for the
Referenceable family of types. These are an easy case because they
were all already comparable (intentionally) anyway. Later commits
can implement UniqueKeyer for other types that are not naturally
comparable, such as any which include a ModuleInstance.
This also includes a new type addrs.Set which wraps a map as a set
of addresses, using the unique keys to ensure that there can be only
one element for each distinct address.
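A minimal sketch of the pattern being described, an opaque key type wrapping
a type-prefixed string representation plus a map-backed set keyed by it,
follows. The names are illustrative and not Terraform's actual addrs API:

```go
package main

import "fmt"

type uniqueKey struct {
	k string // includes a type-specific prefix to avoid cross-type ambiguity
}

type uniqueKeyer interface {
	UniqueKey() uniqueKey
}

type resourceAddr struct{ Type, Name string }

func (r resourceAddr) UniqueKey() uniqueKey {
	return uniqueKey{k: "resource|" + r.Type + "." + r.Name}
}

// set holds at most one element per functionally-distinct address.
type set map[uniqueKey]uniqueKeyer

func (s set) Add(v uniqueKeyer) { s[v.UniqueKey()] = v }
func (s set) Has(v uniqueKeyer) bool {
	_, ok := s[v.UniqueKey()]
	return ok
}

func main() {
	s := set{}
	s.Add(resourceAddr{"null_resource", "a"})
	s.Add(resourceAddr{"null_resource", "a"}) // duplicate collapses
	fmt.Println(len(s), s.Has(resourceAddr{"null_resource", "a"})) // 1 true
}
```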
Some users want to use Consul namespaces with the Consul backend, but the
version of the Consul API client we use is too old and doesn't support
them. In preparation for this change, this patch just updates the client
and replaces testutil.NewTestServerConfig() with
testutil.NewTestServerConfigT() in the tests.
* states: add MoveAbsResource and MoveAbsResourceInstance state functions and corresponding syncState wrapper functions.
* states: add MoveModuleInstance and MaybeMoveModuleInstance
* addrs: adding a new function, ModuleInstance.IsDeclaredByCall, which returns true if the receiver is an instance of the given AbsModuleCall.
The logic behind this code took me a while to understand, so I wrote
down what I understand to be the reasoning behind how it works. The
trickiest part is rendering changing objects as updates. I think the
other pieces are fairly common to LCS sequence diff rendering, so I
didn't explain those in detail.
An earlier commit added logic to decode "moved" blocks and do static
validation of them. Here we now include that result also in modules
produced from those files, which we can then use in Terraform Core to
actually implement the moves.
This also places the feature behind an active experiment keyword called
config_driven_move. For now activating this doesn't actually achieve
anything except let you include moved blocks that Terraform will summarily
ignore, but we'll expand the scope of this in later commits to eventually
reach the point where it's really usable.
A common source of churn when we're running experiments is that a module
that would otherwise be valid ends up generating a warning merely because
the experiment is active. That means we end up needing to shuffle the
test files around if the feature ultimately graduates to stable.
To reduce that churn in simple cases, we'll make an exception to disregard
the "Experiment is active" warning for any experiment that a module has
intentionally opted into, because those warnings are always expected and
not a cause for concern.
It's still possible to test those warnings explicitly using the
testdata/warning-files directory, if needed.
Although addrs.Target can in principle capture the information we need to
represent move endpoints, it's semantically confusing because
addrs.Targetable uses addrs.Abs... types which are typically for absolute
addresses, but we were using them for relative addresses here.
We now have specialized address types for representing moves and probably
other things which have similar requirements later on. These types
largely communicate the same information in the end, but aim to do so in
a way that's explicit about which addresses are relative and which are
absolute, to make it less likely that we'd inadvertently misuse these
addresses.
These three types represent the three different address representations we
need to represent different stages of analysis for "moved" blocks in the
configuration.
The goal here is to encapsulate all of the static address wrangling inside
these types so that users of these types elsewhere would have to work
pretty hard to use them incorrectly.
In particular, the MoveableEndpoint type intentionally fully encapsulates
the weird relative addresses we use in configuration so that code
elsewhere in Terraform can never end up holding an address of a type that
suggests absolute when it's actually relative. That situation only occurs
in the internals of MoveableEndpoint where we use not-really-absolute
AbsMoveable address types to represent the not-yet-resolved relative
addresses.
This only takes care of the static address wrangling. There's lots of
other rules for what makes a "moved" block valid which will need to be
checked elsewhere because they require more context than just the content
of the address itself.
Our documentation for ModuleCall originally asserted that we didn't need
AbsModuleCall because ModuleInstance captured the same information, but
when we added count and for_each for modules we introduced
ModuleCallInstance to represent a reference to an instance of a local
module call, and now _that_ is the type whose absolute equivalent is
ModuleInstance.
We previously had no absolute representation of the call itself, without
any particular instance. That's what AbsModuleCall now represents,
allowing us to be explicit about when we're talking about the module block
vs. instances it declares, which is the same distinction represented by
AbsResource vs. AbsResourceInstance.
Just like with AbsResource and AbsResourceInstance though, there is
syntactic ambiguity between a no-key call instance and a whole module call,
and so some codepaths might accept both to start and then use other
context to dynamically choose a particular interpretation, in which case
this distinction becomes meaningful in representing the result of that
decision.
The previous name didn't fit with the naming scheme for addrs types:
The "Abs" prefix typically means that it's an addrs.ModuleInstance
combined with whatever type name appears after "Abs", but this is instead
a ModuleCallOutput combined with an InstanceKey, albeit structured the
other way around for convenience, and so the expected name for this would
be the suffix "Instance".
We don't have an "Abs" type corresponding with this one because it would
represent no additional information than AbsOutputValue.
* command/jsonstate: remove redundant remarking of resource instance
ResourceInstanceObjectSrc.Decode already handles marking values with any marks stored in ri.Current.AttrSensitivePaths, so re-applying those marks is not necessary.
We've gotten reports of panics coming from this line of code, though I have yet to reproduce the panic in a test.
* Implement test to reproduce panic on #29042
Co-authored-by: David Alger <davidmalger@gmail.com>
Because our snippet generator is trying to select whole lines to include
in the snippet, it has some edge cases for odd situations where the
relevant source range starts or ends directly at a newline, which were
previously causing this logic to return out-of-bounds offsets into the
code snippet string.
Although arguably it'd be better for the original diagnostics to report
more reasonable source ranges, it's better for us to report a
slightly-inaccurate snippet than to crash altogether, and so we'll extend
our existing range checks to check both bounds of the string and thus
avoid downstreams having to deal with out-of-bounds indices.
For completeness here I also added some similar logic to the
human-oriented diagnostic formatter, which consumes the result of the
JSON diagnostic builder. That's not really needed with the additional
checks in the JSON diagnostic builder, but it's nice to reinforce that
this code can't panic (in this way, at least) even if its input isn't
valid.
* terraform: use hcl.MergeBodies instead of configs.MergeBodies for provider configuration
Previously, Terraform would return an error when the user supplied provider configuration via interactive input, if the configuration provided on the command line was missing any required attributes - even if those attributes were already set in config.
That error came from configs.MergeBody, which was designed for overriding valid configuration. It expects that the first ("base") body has all required attributes. However in the case of interactive input for provider configuration, it is perfectly valid if either or both bodies are missing required attributes, as long as the final body has all required attributes. hcl.MergeBodies works very similarly to configs.MergeBodies, with a key difference being that it only checks that all required attributes are present after the two bodies are merged.
I've updated the existing test to use interactive input vars and a schema with all required attributes. This test failed before switching from configs.MergeBodies to hcl.MergeBodies.
* add a command package test that shows that we can still have providers with dynamic configuration + required + interactive input merging
This test failed when buildProviderConfig still used configs.MergeBodies instead of hcl.MergeBodies
This PR adds decoding for the upcoming "moved" blocks in configuration. This code is gated behind an experiment called EverythingIsAPlan, but the experiment is not registered as an active experiment, so it will never run (there is a test in place which will fail if the experiment is ever registered).
This also adds a new function to the Targetable interface, AddrType, to simplify comparing two addrs.Targetable.
There is some validation missing still: this does not (yet) descend into resources to see if the actual resource types are the same (I've put this off in part because we will eventually need the provider schema to verify aliased resources, so I suspect this validation will have to happen later on).
Previously, if any resources were found to have drifted, the JSON plan
output would include a drift entry for every resource in state. This
commit aligns the JSON plan output with the CLI UI, and only includes
those resources where the old value does not equal the new value---i.e.
drift has been detected.
Also fixes a bug where the "address" field was missing from the drift
output, and adds some test coverage.
* command: new command, terraform add, generates resource templates
terraform add ADDRESS generates a resource configuration template with all required (and optionally optional) attributes set to null. This can optionally also pre-populate nonsensitive attributes with values from an existing resource of the same type in state (sensitive values will be populated with null and a comment indicating sensitivity).
* website: terraform add documentation
* Quoting filesystem path in scp command argument
* Adding proper shell quoting for scp commands
* Running go fmt
* Using a library for quoting shell commands
* Don't export quoteShell function
* jsonplan and jsonstate: include sensitive_values in state representations
A sensitive_values field has been added to the resource in state and planned values which is a map of all sensitive attributes with the values set to true.
It wasn't entirely clear to me if the values in state would suffice, or if we also need to consult the schema - I believe that this is sufficient for state files written since v0.15, and if that's incorrect or insufficient, I'll add in the provider schema check as well.
I also updated the documentation, and, since we've considered this before, bumped the FormatVersions for both jsonstate and jsonplan.
An unknown block represents a dynamic configuration block with an
unknown for_each value. We were not catching the case where a provider
modified this value unexpectedly, which would crash with blocks of type
NestingList where the config value has no length for comparison.
Historically, we've used TFC's default run messages as a sort of dumping
ground for metadata about the run. We've recently decided to mostly stop
doing that, in favor of:
- Only specifying the run's source in the default message.
- Letting TFC itself handle the default messages.
Today, the remote backend explicitly sets a run message, overriding
any default that TFC might set. This commit removes that explicit message
so we can allow TFC to sort it out.
This shouldn't have any bad effect on TFE out in the wild, because it has
known how to set a default message for remote backend runs since late 2018.
* tools: remove terraform-bundle.
terraform-bundle is no longer supported in the main branch of terraform. Users can build terraform-bundle from terraform tagged v0.15 and older.
* add a README pointing users to the v0.15 branch
Previously we had a separation between ModuleSourceRemote and
ModulePackage as a way to represent within the type system that there's an
important difference between a module source address and a package address,
because module packages often contain multiple modules and so a
ModuleSourceRemote combines a ModulePackage with a subdirectory to
represent one specific module.
This commit applies that same strategy to ModuleSourceRegistry, creating
a new type ModuleRegistryPackage to represent the different sort of
package that we use for registry modules. Again, the main goal here is
to try to reflect the conceptual modelling more directly in the type
system so that we can more easily verify that uses of these different
address types are correct.
To make use of that, I've also lightly reworked initwd's module installer
to use addrs.ModuleRegistryPackage directly, instead of a string
representation thereof. This was in response to some earlier commits where
I found myself accidentally mixing up package addresses and source
addresses in the installRegistryModule method; with this new organization
those bugs would've been caught at compile time, rather than only at
unit and integration testing time.
While in the area anyway, I also took this opportunity to fix some
historical confusing names of fields in initwd.ModuleInstaller, to be
clearer that they are only for registry packages and not for all module
source address types.
We have some tests in this package that install real modules from the real
registry at registry.terraform.io. Those tests were written at an earlier
time when the registry's behavior was to return the URL of a .tar.gz
archive generated automatically by GitHub, which included an extra level
of subdirectory that would then be reflected in the paths to the local
copies of these modules.
GitHub started rate limiting those tar archives in a way that Terraform's
module installer couldn't authenticate to, and so the registry switched
to returning direct git repository URLs instead, which don't have that
extra subdirectory and so the local paths on disk now end up being a
little different, because the actual module directories are at a different
subdirectory of the package.
Now that we (in the previous commit) refactored how we deal with module
sources to do the parsing at config loading time rather than at module
installation time, we can expose a method to centralize the determination
for whether a particular module call (and its resulting Config object)
enters a new external package.
We don't use this for anything yet, but in later commits we will use this
for some cross-module features that are available only for modules
belonging to the same package, because we assume that modules grouped
together in a package can change together and thus it's okay to permit a
little more coupling of internal details in that case, which would not
be appropriate between modules that are versioned separately.
It's been a long while since we gave close attention to the codepaths for
module source address parsing and external module package installation.
Due to their age, these codepaths often diverged from our modern practices
such as representing address types in the addrs package, and encapsulating
package installation details only in a particular location.
In particular, this refactor makes source address parsing a separate step
from module installation, which therefore makes the result of that parsing
available to other Terraform subsystems which work with the configuration
representation objects.
This also presented the opportunity to better encapsulate our use of
go-getter into a new package "getmodules" (echoing "getproviders"), which
is intended to be the only part of Terraform that directly interacts with
go-getter.
This is largely just a refactor of the existing functionality into a new
code organization, but there is one notable change in behavior here: the
source address parsing now happens during configuration loading rather
than module installation, which may cause errors about invalid addresses
to be returned in different situations than before. That counts as
backward compatible because we only promise to remain compatible with
configurations that are _valid_, which means that they can be initialized,
planned, and applied without any errors. This doesn't introduce any new
error cases, and instead just makes a pre-existing error case be detected
earlier.
Our module registry client is still using its own special module address
type from registry/regsrc for now, with a small shim from the new
addrs.ModuleSourceRegistry type. Hopefully in a later commit we'll also
rework the registry client to work with the new address type, but this
commit is already big enough as it is.
We've previously had the syntax and representation of module source
addresses pretty sprawled around the codebase and intermingled with other
systems such as the module installer.
I've created a factored-out implementation here with the intention of
enabling some later refactoring to centralize the address parsing as part
of configuration decoding, and thus in turn allow the parsing result to
be seen by other parts of Terraform that interact with configuration
objects, so that they can more robustly handle differences between local
and remote modules, and can present module addresses consistently in the
UI.
This new package aims to encapsulate all of our interactions with
go-getter to fetch remote module packages, to ensure that the rest of
Terraform will only use the small subset of go-getter functionality that
our modern module installer uses.
In older versions of Terraform, go-getter was the entire implementation
of module installation, but along the way we found that several aspects of
its design are a poor fit for Terraform's needs, and so now we're using it
as just an implementation detail of Terraform's handling of remote module
packages only, hiding it behind this wrapper API which exposes only
the services that our module installer needs.
This new package isn't actually used yet, but in a later commit we will
change all of the other callers to go-getter to only work indirectly
through this package, so that this will be the only package that actually
imports the go-getter packages.
As the comment notes, this hostname is the default for provider source
addresses. We'll shortly be adding some address types to represent module
source addresses too, and so we'll also have DefaultModuleRegistryHost
for that situation.
(They'll actually both contain the same hostname, but that's a
coincidence rather than a requirement.)
When performing state migration to a remote backend target, Terraform
may fail due to mismatched remote and local Terraform versions. Here we
add the `-ignore-remote-version` flag to allow users to ignore this
version check when necessary.
When migrating multiple local workspaces to a remote backend target
using the `prefix` argument, we need to perform the version check
against all existing workspaces returned by the `Workspaces` method.
Failing to do so will result in a version check error.
Our module installer has a somewhat-informal idea of a "module package",
which is some external thing we can go fetch in order to add one or more
modules to the current configuration. Our documentation doesn't talk much
about it because most users seem to have found the distinction between
external and local modules pretty intuitive without us throwing a lot of
funny terminology at them, but there are some situations where the
distinction between a module and a module package are material to the
end-user.
One such situation is when using an absolute rather than relative
filesystem path: we treat that as an external package in order to make the
resulting working directory theoretically "portable" (although users can
do various other things to defeat that), and so Terraform will copy the
directory into .terraform/modules in the same way as it would download and
extract a remote archive package or clone a git repository.
A consequence of this, though, is that any relative paths called from
inside a module loaded from an absolute path will fail if they try to
traverse upward into the parent directory, because at runtime we're
actually running from a copy of the directory that's been taken out of
its original context.
A similar sort of situation can occur in a truly remote module package if
the author accidentally writes a "../" source path that traverses up out
of the package root, and so this commit introduces a special error message
for both situations that tries to be a bit clearer about there being a
package boundary and use that to explain why installation failed.
We would ideally have made escaping local references like that illegal in
the first place, but sadly we did not and so when we rebuilt the module
installer for Terraform v0.12 we ended up keeping the previous behavior of
just trying it and letting it succeed if there happened to somehow be a
matching directory at the given path, in order to remain compatible with
situations that had worked by coincidence rather than intention. For that
same reason, I've implemented this as a replacement error message we will
return only if local module installation was going to fail anyway, and
thus it only modifies the error message for some existing error situations
rather than introducing new error situations.
This also includes some light updates to the documentation to say a little
more about how Terraform treats absolute paths, though aiming not to get
too much into the weeds about module packages since it's something that
most users can get away with never knowing.
When returning from the context method, a deferred function call checked
for error diagnostics in the `diags` variable, and unlocked the state if
any exist. This means that we need to be extra careful to mutate that
variable when returning errors, rather than returning a different set of
diags in the same position.
Previously this would result in an invalid plan file causing a lock to
become stuck.
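A stripped-down illustration of that hazard follows; it is a general Go
sketch rather than the actual backend code, with names invented for the
example:

```go
package main

import (
	"errors"
	"fmt"
)

var locked bool

func contextLikeOperation(fail bool) error {
	var diags error
	locked = true
	defer func() {
		// The cleanup only inspects the local `diags` variable, so any error
		// returned through a different variable slips past it.
		if diags != nil {
			locked = false
		}
	}()

	if fail {
		moreDiags := errors.New("plan file invalid")
		// BUG: returning moreDiags directly leaves `diags` nil, so the lock
		// taken above is never released. The fix is to assign first:
		//   diags = moreDiags
		//   return diags
		return moreDiags
	}
	return nil // success: the caller is expected to unlock later
}

func main() {
	_ = contextLikeOperation(true)
	fmt.Println("lock stuck after error:", locked) // true, demonstrating the hazard
}
```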
Most legacy provider resources do not implement any import functionality
other than returning an empty object with the given ID, relying on core
to later read that resource and obtain the complete state. Because of
this, we need to check the response from ReadResource for a null value,
and use that as an indication the import id was invalid.
This was dead code, and there is no clear way to retrieve this
information, as we currently only derive the drift information as part
of the rendering process.
The schemas for the provider and the resources didn't match, so the changes
were not going to be rendered at all.
Add a test which contains a deposed resource.
* getproviders ParsePlatform: add check for invalid platform strings with too many parts
The existing logic would not catch things like a platform string containing multiple underscores. I've added an explicit check for exactly 2 parts and some basic tests to prove it.
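A rough sketch of the stricter check, assuming the same "os_arch" string
form; the real ParsePlatform performs additional validation beyond this:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePlatform requires exactly two underscore-separated parts rather than
// slicing whatever Split happens to return.
func parsePlatform(str string) (os, arch string, err error) {
	parts := strings.Split(str, "_")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("%q must be an OS and architecture separated by a single underscore, like linux_amd64", str)
	}
	return parts[0], parts[1], nil
}

func main() {
	fmt.Println(parsePlatform("linux_amd64"))  // linux amd64 <nil>
	fmt.Println(parsePlatform("linux_amd_64")) // error: too many parts
}
```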
* command/providers-lock: add tests
This commit adds some simple tests for the providers lock command. While adding this test I noticed that there was a mis-copied error message, so I replaced that with a more specific message. I also added .terraform.lock.hcl to our gitignore for hopefully obvious reasons.
getproviders.ParsePlatform: use parts in place of slice range, since it's available
* command: Providers mirror tests
The providers mirror command is already well tested in e2e tests, so this includes only the most absolutely basic test case.
Do not convert provisioner diagnostics to errors so that users can get
context from provisioner failures.
Return diagnostics from the builtin provisioners that can be annotated
with configuration context and instance addresses.
When an attribute value changes in sensitivity, we previously rendered
this in the diff with a `~` update action and a note about the
consequence of the sensitivity change. Since we also suppress the
attribute value, this made it impossible to know if the underlying value
was changing, too, which has significant consequences on the meaning of
the plan.
This commit adds an equality check of the old/new underlying values. If
these are unchanged, we add a note to the sensitivity warning to clarify
that only sensitivity is changing.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
Once a plugin process is started, go-plugin will redirect the stdout and
stderr stream through a grpc service and provide those streams to the
client. This is rarely used, as it is prone to failing with races
because those same file descriptors are needed for the initial handshake
and logging setup, but data may be accidentally sent to these
nonetheless.
The usual culprits are stray `fmt.Print` usage where logging was
intended, or the configuration of a logger after the os.Stderr file
descriptor was replaced by go-plugin. These situations are very hard for
provider developers to debug since the data is discarded entirely.
While there may be improvements to be made in the go-plugin package to
configure this behavior, in the meantime we can add a simple monitoring
io.Writer to the streams which will surface the data as warnings in the
logs instead of writing it to `io.Discard`.
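The core of that idea can be sketched as a tiny io.Writer; the type and
field names here are illustrative, and wiring it into the streams that
go-plugin hands back to the client is left out:

```go
package main

import (
	"io"
	"log"
	"strings"
)

// logWarningWriter surfaces any stray plugin output as log warnings instead
// of discarding it.
type logWarningWriter struct {
	name string
}

func (w logWarningWriter) Write(p []byte) (int, error) {
	log.Printf("[WARN] unexpected data on plugin %s: %q", w.name, strings.TrimSpace(string(p)))
	return len(p), nil
}

func main() {
	var stderr io.Writer = logWarningWriter{name: "stderr"}
	// Anything a plugin accidentally prints ends up in the logs rather than
	// disappearing into io.Discard.
	io.WriteString(stderr, "fmt.Println debugging left in a provider\n")
}
```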
This is not currently a supported interface, but we plan to later release
tool(s) that consume parts of it in a more dependable way, separately from
Terraform CLI itself.
* Add helper suggestion when failed registry err
When someone has a failed registry error on init, remind them that
they should have required_providers in every module
* Give suggestion for a provider based on reqs
Suggest another provider on a registry error, from the list of
requirements we have on init. This skips the legacy lookup
process if there is a similar provider existing in requirements.
Fixes #27506
Add a new flag `-lockfile=readonly` to `terraform init`.
It would be useful to allow us to suppress dependency lockfile changes
explicitly.
The type of the `-lockfile` flag is string rather than bool, leaving
room for future extensions to other behavior variants.
The readonly mode suppresses lockfile changes, but should verify
checksums against the information already recorded. It should conflict
with the `-upgrade` flag.
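A sketch of that flag handling with the standard flag package follows; the
real init command uses Terraform's own flag plumbing, so this is only an
illustration of the string-valued mode and the `-upgrade` conflict check:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	fs := flag.NewFlagSet("init", flag.ExitOnError)
	lockfileMode := fs.String("lockfile", "", `dependency lockfile mode ("readonly" is currently the only extra mode)`)
	upgrade := fs.Bool("upgrade", false, "install the latest allowed provider versions")
	fs.Parse(os.Args[1:])

	switch *lockfileMode {
	case "", "readonly":
		// valid modes; the string type leaves room for future variants
	default:
		fmt.Fprintf(os.Stderr, "Invalid -lockfile mode %q\n", *lockfileMode)
		os.Exit(1)
	}
	if *lockfileMode == "readonly" && *upgrade {
		fmt.Fprintln(os.Stderr, "The -lockfile=readonly and -upgrade options are incompatible")
		os.Exit(1)
	}
	fmt.Println("lockfile mode:", *lockfileMode)
}
```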
Note: In the original use-case described in #27506, I would like to
suppress adding zh hashes, but the test code here suppresses adding h1
hashes because that's easier to test.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* providers.Interface: rename ValidateDataSourceConfig to
ValidateDataResourceConfig
This PR came about after renaming ValidateResourceTypeConfig to
ValidateResourceConfig: I now understand that we'd called it the former
instead of the latter to indicate that the function wasn't necessarily
operating on a resource that actually exists. A possibly-more-accurate
renaming of both functions might then be ValidateManagedResourceConfig
and ValidateDataResourceConfig.
The next commit will update the protocol (v6 only) as well; these are in
separate commits for reviewers and will get squashed together before
merging.
* extend renaming to protov6
This is just a prototype to gather some feedback in our ongoing research
on integration testing of Terraform modules. The hope is that by having a
command integrated into Terraform itself it'll be easier for interested
module authors to give it a try, and also easier for us to iterate quickly
based on feedback without having to coordinate across multiple codebases.
Everything about this is subject to change even in future patch releases.
Since it's a CLI command rather than a configuration language feature it's
not using the language experiments mechanism, but generates a warning
similar to the one language experiments generate in order to be clear that
backward compatibility is not guaranteed.
As part of ongoing research into Terraform testing we'd like to use an
experimental feature to validate our current understanding that expressing
tests as part of the Terraform language, as opposed to in some other
language run alongside, is a good and viable way to write practical
module integration tests.
This initial experimental incarnation of that idea is implemented as a
provider, just because that's an easier extension point for research
purposes than a first-class language feature would be. Whether this would
ultimately emerge as a provider similar to this or as custom language
constructs will be a matter for future research, if this first
experiment confirms that tests written in the Terraform language are the
best direction to take.
The previous incarnation of this experiment was an externally-developed
provider apparentlymart/testing, listed on the Terraform Registry. That
helped with showing that there are some useful tests that we can write
in the Terraform language, but integrating such a provider into Terraform
will allow us to make use of it in the also-experimental "terraform test"
command, which will follow in subsequent commits, to see how this might
fit into a development workflow.
* Add support for plugin protocol v6
This PR turns on support for plugin protocol v6. A provider can
advertise itself as supporting protocol version 6 and terraform will
use the correct client.
Todo:
The "unmanaged" providers functionality does not support protocol
version, so at the moment terraform will continue to assume that
"unmanaged" providers are on protocol v5. This will require some
upstream work on go-plugin (I believe).
I would like to convert the builtin providers to use protocol v6 in a
future PR; however it is not necessary until we remove support for
protocol v5.
* add e2e test for using both plugin protocol versions
- copied grpcwrap and made a version that returns protocol v6 provider
- copied the test provider, provider-simple, and made a version that's
using protocol v6 with the above function
- added an e2etest
Remove the README that had old user-facing information, replacing
it with a doc.go that describes the package and points to the
plugin SDK for external consumers.
* providers.Interface: huge renamification
This commit renames a handful of functions in the providers.Interface to
match changes made in protocol v6. The following commit implements this
change across the rest of the codebase; I put this in a separate commit
for ease of reviewing and will squash these together when merging.
One noteworthy detail: protocol v6 removes the config from the
ValidateProviderConfigResponse, since it's never been used. I chose to
leave that in place in the interface until we deprecate support for
protocol v5 entirely.
Note that none of these changes impact current providers using protocol
v5; the protocol is unchanged. Only the translation layer between the
proto and terraform have changed.
It's pretty common to want to apply the various fmt.Fprint... functions
to our two output streams, and so to make that much less noisy at the
callsite here we have a small number of very thin wrappers around the
underlying fmt package functionality.
Although we're aiming to not have too much abstraction in this "terminal"
package, this seems justified in that it is only a very thin wrapper
around functionality that most Go programmers are already familiar with,
and so the risk of this causing any surprises is low and the improvement
to readability of callers seems worth it.
This is to allow convenient testing of functions that are designed to work
directly with *terminal.Streams or the individual stream objects inside.
Because the InputStream and OutputStream APIs expose directly an *os.File,
this does some extra work to set up OS-level pipes so we can capture the
output into local buffers to make test assertions against. The idea here
is to keep the tricky stuff we need for testing confined to the test
codepaths, so that the "real" codepaths don't end up needing to work
around abstractions that are otherwise unnecessary.
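The pipe-based capture technique can be sketched in a few lines; this is a
generic example rather than the actual test helper, which also reads the
pipe concurrently so that large output can't fill the pipe buffer:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}

	// The code under test only ever sees a real *os.File.
	fmt.Fprintln(w, "hello from the code under test")
	w.Close() // closing the write end lets the read below terminate

	captured, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	fmt.Printf("captured: %q\n", string(captured))
}
```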
This is the first commit for plugin protocol v6. This is currently
unused (dead) code; future commits will add the necessary conversion
packages, extend configschema, and modify the providers.Interface.
The new plugin protocol includes the following changes:
- A new field has been added to Attribute: NestedType. This will be the
key new feature in plugin protocol v6
- Several messages were renamed for consistency with the verb-noun
pattern seen in _most_ messages.
- The prepared_config has been removed from PrepareProviderConfig
(renamed ValidateProviderConfig), as it has never been used.
- The provisioner service has been removed entirely. This has no impact
on built-in provisioners. 3rd party provisioners are not supported by
the SDK and are not included in this protocol at all.
This is a helper package that creates a very thin abstraction over
terminal setup, with the main goal being to deal with all of the extra
setup we need to do in order to get a UTF-8-supporting virtual terminal
on a Windows system.
Previously we were expecting that the *hcl.File would always be non-nil,
even in error cases. That isn't always true, so now we'll be more robust
about it and explicitly return an empty locks object in that case, along
with the error diagnostics.
In particular this avoids a panic in a strange situation where the user
created a directory where the lock file would normally go. There's no
meaning to such a directory, so it would always be a mistake and so now
we'll return an error message about it, rather than panicking as before.
The error message for the situation where the lock file is a directory is
currently not very specific, but since it's HCL responsible for generating
that message we can't really fix that at this layer. Perhaps in future
we can change HCL to have a specialized error message for that particular
error situation, but for the sake of this commit the goal is only to
stop the panic and return a normal error message.
If a user forgets to specify the source address for a provider, Terraform
will assume they meant a provider in the registry.terraform.io/hashicorp/
namespace. If that ultimately doesn't exist, we'll now try to see if
there's some other provider source address recorded in the registry's
legacy provider lookup table, and suggest it if so.
The error message here is a terse one addressed primarily to folks who are
already somewhat familiar with provider source addresses and how to
specify them. Terraform v0.13 had a more elaborate version of this error
message which directed the user to try the v0.13 automatic upgrade tool,
but we no longer have that available in v0.14 and later so the user must
make the fix themselves.
The temporary directory on some systems (most notably macOS) contains
symlinks, which would not be recorded by the installer. In order to make
these paths comparable in the tests we need to eval the symlinks in
the paths before giving them to the installer.
When logging is turned on, panicwrap will still see provider crashes and
falsely report them as core crashes, hiding the formatted provider
error. We can trick panicwrap by slightly obfuscating the error line.
When rendering a set of version constraints to a string, we normalize
partially-constrained versions. This means converting a version
like 2.68.* to 2.68.0.
Prior to this commit, this normalization was done after deduplication.
This could result in a version constraints string with duplicate
entries, if multiple partially-constrained versions are equivalent. This
commit fixes this by normalizing before deduplicating and sorting.
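A simplified illustration of why the ordering matters follows; the real
code operates on getproviders version constraint values rather than
strings, so normalize here is only a stand-in:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize stands in for the real constraint normalization, e.g. rendering
// the partially-constrained "2.68.*" as "2.68.0".
func normalize(c string) string {
	return strings.ReplaceAll(c, ".*", ".0")
}

// render deduplicates on the normalized form, so constraints that only
// become equivalent after normalization still collapse to one entry.
func render(constraints []string) string {
	seen := make(map[string]bool)
	var out []string
	for _, c := range constraints {
		n := normalize(c) // normalize BEFORE deduplicating
		if seen[n] {
			continue
		}
		seen[n] = true
		out = append(out, n)
	}
	return strings.Join(out, ", ")
}

func main() {
	// Deduplicating first would keep both "2.68.*" and "2.68.0" and then
	// render the duplicate string "2.68.0, 2.68.0".
	fmt.Println(render([]string{"2.68.*", "2.68.0"})) // "2.68.0"
}
```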
Previously we were only verifying locked hashes for local archive zip
files, but if we have non-ziphash hashes available then we can and should
also verify that a local directory matches at least one of them.
This does mean that folks using filesystem mirrors but yet also running
Terraform across multiple platforms will need to take some extra care to
ensure the hashes pass on all relevant platforms, which could mean using
"terraform providers lock" to pre-seed their lock files with hashes across
all platforms, or could mean using the "packed" directory layout for the
filesystem mirror so that Terraform will end up in the install-from-archive
codepath instead of this install-from-directory codepath, and can thus
verify ziphash too.
(There's no additional documentation about the above here because there's
already general information about this in the lock file documentation
due to some similar -- though not identical -- situations with network
mirrors.)
We previously had some tests for some happy paths and a few specific
failures into an empty directory with no existing locks, but we didn't
have tests for the installer respecting existing lock file entries.
This is a start on a more exhaustive set of tests for the installer,
aiming to visit as many of the possible codepaths as we can reasonably
test using this mocking strategy. (Some other codepaths require different
underlying source implementations, etc, so we'll have to visit those in
other tests separately.)
This won't be a typical usage pattern for normal code, but will be useful
for tests that need to work with locks as input so that they don't need to
write out a temporary file on disk just to read it back in immediately.
An earlier commit made this remove duplicates, which set the precedent
that this function is trying to canonically represent the _meaning_ of
the version constraints rather than exactly how they were expressed in
the configuration.
Continuing in that vein, now we'll also apply a consistent (though perhaps
often rather arbitrary) ordering to the terms, so that it doesn't change
due to irrelevant details like declarations being written in a different
order in the configuration.
The ordering here is intended to be reasonably intuitive for simple cases,
but constraint strings with many different constraints are hard to
interpret no matter how we order them so the main goal is consistency,
so those watching how the constraints change over time (e.g. in logs of
Terraform output, or in the dependency log file) will see fewer noisy
changes that don't actually mean anything.
Create a logger that will record any apparent crash output for later
processing.
If the cli command returns with a non-zero exit status, check for any
recorded crashes and add those to the output.
Now that hclog can independently set levels on related loggers, we can
separate the log levels for different subsystems in terraform.
This adds the new environment variables, `TF_LOG_CORE` and
`TF_LOG_PROVIDER`, which each take the same set of log level arguments,
and only applies to logs from that subsystem. This means that setting
`TF_LOG_CORE=level` will not show logs from providers, and
`TF_LOG_PROVIDER=level` will not show logs from core. The behavior of
`TF_LOG` alone does not change.
While it is not necessarily needed since the default is to disable logs,
there is also a new level argument of `off`, which reflects the
associated level in hclog.
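This relies on hclog's IndependentLevels option so that related loggers
can hold their own levels. A minimal sketch of the wiring, as I understand
hclog's API, rather than the exact Terraform code:

    // Sketch: derive per-subsystem log levels from the environment, falling
    // back to TF_LOG, using hclog sub-loggers with independent levels.
    package main

    import (
        "os"

        "github.com/hashicorp/go-hclog"
    )

    func levelFor(envs ...string) hclog.Level {
        for _, name := range envs {
            if v := os.Getenv(name); v != "" {
                return hclog.LevelFromString(v) // "trace".."error", plus "off"
            }
        }
        return hclog.Off // default: logs disabled
    }

    func main() {
        root := hclog.New(&hclog.LoggerOptions{
            Name:              "terraform",
            Level:             levelFor("TF_LOG"),
            IndependentLevels: true,
        })

        core := root.Named("core")
        core.SetLevel(levelFor("TF_LOG_CORE", "TF_LOG"))

        provider := root.Named("provider")
        provider.SetLevel(levelFor("TF_LOG_PROVIDER", "TF_LOG"))

        core.Trace("only visible when TF_LOG_CORE (or TF_LOG) allows it")
        provider.Trace("only visible when TF_LOG_PROVIDER (or TF_LOG) allows it")
    }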
A set of version constraints can contain duplicates. This can happen if
multiple identical constraints are specified throughout a configuration.
When rendering the set, it is confusing to display redundant
constraints. This commit changes the string renderer to only show the
first instance of a given constraint, and adds unit tests for this
function to cover this change.
This also fixes a bug with the locks file generation: previously, a
configuration with redundant constraints would result in this error on
second init:
    Error: Invalid provider version constraints

      on .terraform.lock.hcl line 6:
      (source code not available)

    The recorded version constraints for provider
    registry.terraform.io/hashicorp/random must be written in normalized form:
    "3.0.0".
Use a separate log sink to always capture trace logs for the panicwrap
handler to write out in a crash log.
This requires creating a log file in the outer process and passing that
path to the child process to log to.
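The shape of that is roughly as below; the TF_CRASH_LOG_PATH variable name
and the file handling are made up for illustration, not the real
implementation.

    // Sketch only: the outer (panicwrap) process creates a log file and
    // passes its path to the re-executed child, which writes its trace logs
    // there; on a non-zero exit the outer process can read it back.
    package main

    import (
        "fmt"
        "io/ioutil"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if path := os.Getenv("TF_CRASH_LOG_PATH"); path != "" {
            // Re-executed child: send trace logs to the file the parent created.
            if f, err := os.OpenFile(path, os.O_WRONLY|os.O_APPEND, 0644); err == nil {
                log.SetOutput(f)
            }
            log.Println("[TRACE] child doing its work")
            return
        }

        f, err := ioutil.TempFile("", "terraform-crash-*.log")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        f.Close()

        child := exec.Command(os.Args[0])
        child.Env = append(os.Environ(), "TF_CRASH_LOG_PATH="+f.Name())
        child.Stdout, child.Stderr = os.Stdout, os.Stderr

        if err := child.Run(); err != nil {
            // Child failed: include its trace log in the crash output.
            if contents, readErr := ioutil.ReadFile(f.Name()); readErr == nil {
                fmt.Fprintf(os.Stderr, "crash log:\n%s\n", contents)
            }
        }
        os.Remove(f.Name())
    }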
Previously this codepath was generating a confusing message in the absence
of any symlinks, because filepath.EvalSymlinks returns a successful result
if the target isn't a symlink.
Now we'll emit the log line only if filepath.EvalSymlinks returns a
result that's different in a way that isn't purely syntactic (which
filepath.Clean would "fix").
The new message is a little more generic because technically we've not
actually ensured that a difference here was caused by a symlink and so
we shouldn't over-promise and generate something potentially misleading.
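In other words, the check is roughly along these lines (a sketch, not the
exact code or the exact log message):

    // Sketch: only log about path resolution when EvalSymlinks produced
    // something more than a purely-syntactic cleanup of the original path.
    package main

    import (
        "log"
        "path/filepath"
    )

    func logIfResolvedDiffers(original string) {
        resolved, err := filepath.EvalSymlinks(original)
        if err != nil {
            return // sketch: real code would handle this
        }
        // filepath.Clean differences (extra slashes, "./", "..", etc.) are
        // purely syntactic, so only a difference beyond that is worth noting.
        if resolved != filepath.Clean(original) {
            log.Printf("[TRACE] original path %q resolves to %q", original, resolved)
        }
    }

    func main() {
        logIfResolvedDiffers("./some/dir/../dir")
    }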
ioutil.TempFile has a special case where an empty string for its dir
argument is interpreted as a request to automatically look up the system
temporary directory, which is commonly /tmp .
We don't want that behavior here because we're specifically trying to
create the temporary file in the same directory as the file we're hoping
to replace. If the file gets created in /tmp then it might be on a
different device and thus the later atomic rename won't work.
Instead, we'll add our own special case to explicitly use "." when the
given filename is in the current working directory. That overrides the
special automatic behavior of ioutil.TempFile and thus forces the
behavior we need.
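Concretely, the workaround amounts to something like the following sketch
of the pattern:

    // Sketch: create the temporary file in the same directory as the target
    // so the later rename stays on the same filesystem. An empty dir from
    // filepath.Split must become "." or ioutil.TempFile would fall back to
    // the system temporary directory (often /tmp).
    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
        "path/filepath"
    )

    func tempFileNextTo(filename string) (*os.File, error) {
        dir, base := filepath.Split(filename)
        if dir == "" {
            // filename is in the current working directory; without this,
            // ioutil.TempFile("", ...) would use os.TempDir instead.
            dir = "."
        }
        return ioutil.TempFile(dir, base+".tmp-")
    }

    func main() {
        f, err := tempFileNextTo(".terraform.lock.hcl")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("temporary file:", f.Name())
        f.Close()
        os.Remove(f.Name())
    }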
This hadn't previously mattered for earlier callers of this code because
they were creating files in subdirectories, but this codepath was failing
for the dependency lock file due to it always being created directly
in the current working directory.
Unfortunately, since this is a picky implementation detail, I couldn't find
a good way to write a unit test for it without considerable refactoring.
Instead, I verified manually that the temporary filename wasn't in /tmp on
my Linux system, and hope that the inline comment will explain this
situation well enough to avoid an accidental regression in future
maintenance.
If a configuration requires a partial provider version (with some parts
unspecified), Terraform considers this as a constrained-to-zero version.
For example, a version constraint of 1.2 will result in an attempt to
install version 1.2.0, even if 1.2.1 is available.
When writing the dependency locks file, we previously would write 1.2.*,
as this is the in-memory representation of 1.2. This would then cause an
error on re-reading the locks file, as this is not a valid constraint
format.
Instead, we now explicitly convert the constraint to its zero-filled
representation before writing the locks file. This ensures that it
correctly round-trips.
Because this change is made in getproviders.VersionConstraintsString, it
also affects the output of the providers sub-command.
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
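The shape of this is roughly as in the sketch below (not the exact logging
package code): build the writer once from an hclog logger and point the
standard library's log package at it.

    // Sketch: route all standard-library log output through one hclog-backed
    // writer, set up once in a logging package rather than in every test.
    package logging

    import (
        "io"
        "log"
        "sync"

        "github.com/hashicorp/go-hclog"
    )

    var (
        logWriterOnce sync.Once
        logWriter     io.Writer
    )

    // LogWriter returns the single writer used for std library logging,
    // creating it (and wiring up the log package) on first use.
    func LogWriter() io.Writer {
        logWriterOnce.Do(func() {
            logger := hclog.New(&hclog.LoggerOptions{Name: "terraform"})
            logWriter = logger.StandardWriter(&hclog.StandardLoggerOptions{
                InferLevels: true, // interpret "[TRACE] ..." style prefixes
            })
            log.SetOutput(logWriter)
            log.SetFlags(0) // hclog adds its own timestamps
        })
        return logWriter
    }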
In this case, "atomic" means that there will be no situation where the
file contains only part of the newContent data, and therefore other
software monitoring the file for changes (using a mechanism like inotify)
won't encounter a truncated file.
It does _not_ mean that there can't be existing filehandles open against
the old version of the file. On Windows systems the write will fail in
that case, but on Unix systems the write will typically succeed but leave
the existing filehandles still pointing at the old version of the file.
They'll need to reopen the file in order to see the new content.
This originated in the cliconfig code to write out credentials files. The
Windows implementation of this in particular was quite onerous to get
right because it needs a very specific sequence of operations to avoid
running into exclusive file locks, and so by factoring this out with
only cosmetic modification we can avoid repeating all of that engineering
effort for other atomic file writing use-cases.
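On Unix systems this boils down to the familiar write-then-rename sequence
sketched below; the Windows path needs the more involved sequence of
operations mentioned above, and this sketch is an illustration rather than
the factored-out helper itself.

    // Sketch (Unix semantics): write the new content to a temporary file in
    // the same directory, then atomically rename it over the target, so
    // readers see either the old file or the complete new one, never a
    // truncated mixture.
    package main

    import (
        "io/ioutil"
        "os"
        "path/filepath"
    )

    func writeFileAtomic(filename string, newContent []byte, mode os.FileMode) error {
        dir, base := filepath.Split(filename)
        if dir == "" {
            dir = "."
        }
        tmp, err := ioutil.TempFile(dir, base+".tmp-")
        if err != nil {
            return err
        }
        tmpName := tmp.Name()
        defer os.Remove(tmpName) // no-op once the rename has succeeded

        if _, err := tmp.Write(newContent); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        if err := os.Chmod(tmpName, mode); err != nil {
            return err
        }
        return os.Rename(tmpName, filename)
    }

    func main() {
        if err := writeFileAtomic("example.txt", []byte("hello\n"), 0644); err != nil {
            panic(err)
        }
    }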