Historically the responsibility for making sure that all of the available
providers are of suitable versions and match the appropriate checksums has
been split rather inexplicably over multiple different layers, with some
of the checks happening as late as creating a terraform.Context.
We're gradually iterating towards handling all of that in one place,
but in this step we're just cleaning up some old remnants from the
main "terraform" package, which is now no longer responsible for any
version or checksum verification and instead just assumes that its caller
has provided suitable factory functions.
We do still have a pre-check here to make sure that we at least have a
factory function for each plugin the configuration seems to depend on,
because if we don't do that up front then the problem instead gets caught
deep inside the Terraform runtime, often during a concurrent graph walk,
and so it's not deterministic which codepath will happen to catch it on a
particular run.
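To make that pre-check concrete, the sketch below shows the general shape of
such a validation. The names checkProviderFactories, providerFactories, and
requiredProviders are placeholders for illustration, not the real identifiers
in the terraform package.

    import (
        "fmt"
        "strings"
    )

    // Sketch only: checks that the caller supplied a factory function for
    // every provider the configuration depends on, so that a missing
    // provider is reported deterministically and up front rather than from
    // somewhere arbitrary inside the concurrent graph walk.
    func checkProviderFactories(
        providerFactories map[string]func() (interface{}, error),
        requiredProviders []string,
    ) error {
        var missing []string
        for _, addr := range requiredProviders {
            if _, ok := providerFactories[addr]; !ok {
                missing = append(missing, addr)
            }
        }
        if len(missing) > 0 {
            return fmt.Errorf("missing provider factories for: %s", strings.Join(missing, ", "))
        }
        return nil
    }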
As of this commit, this actually does leave some holes in our checks: the
command package is using the dependency lock file to make sure we have
exactly the provider packages we expect (exact versions and checksums),
which is the most crucial part, but we don't yet have any spot where
we make sure that the lock file is consistent with the current
configuration, and we are no longer preserving the provider checksums as
part of a saved plan.
Both of those will come in subsequent commits. While it's unusual to have
a series of commits that briefly subtracts functionality and then adds
back in equivalent functionality later, the lock file checking is the only
part that's crucial for security reasons, with everything else mainly just
being to give better feedback when folks seem to be using Terraform
incorrectly. The other bits are therefore mostly cosmetic and okay to be
absent briefly as we work towards a better design that is clearer about
where that responsibility belongs.
There are a few different reasons why a resource instance tracked in the
prior state might be considered an "orphan", but previously we reported
them all identically in the planned changes.
In order to help users understand the reason for a surprising planned
delete, we'll now try to specify an additional reason for the planned
deletion, covering all of the main reasons why that could happen.
This commit only introduces the new detail to the plans.Changes result,
though it also incidentally exposes it as part of the JSON plan result
in order to keep that working without returning errors in these new
cases. We'll expose this information in the human-oriented UI output in
a subsequent commit.
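As a rough illustration of the shape of this new detail, the change record
gains an extra enumeration describing why a delete was planned. The names
below are illustrative rather than the real identifiers in the plans package.

    // Illustrative sketch: a reason value carried alongside each planned
    // change, so the UI can later explain a surprising delete.
    type ActionReason rune

    const (
        ReasonNone                ActionReason = 0   // ordinary change; no extra explanation
        ReasonDeleteNoConfig      ActionReason = 'N' // resource block no longer in configuration
        ReasonDeleteNoModule      ActionReason = 'M' // containing module call no longer present
        ReasonDeleteExcessCount   ActionReason = 'C' // index beyond the current count value
        ReasonDeleteExcessEachKey ActionReason = 'E' // key not present in the current for_each
    )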
Rather than delaying resource drift detection until it is ready to be
presented, here we perform that computation after the plan walk has
completed. The resulting drift is represented like planned resource
changes, using a slice of ResourceInstanceChangeSrc values.
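A simplified sketch of that post-walk computation, with hypothetical helper
types standing in for the real ResourceInstanceChangeSrc representation:

    import "github.com/zclconf/go-cty/cty"

    // Sketch only: once the plan walk is finished, compare each prior state
    // object against its refreshed counterpart and record any difference as
    // drift, in the same general shape as a planned change.
    type driftEntry struct {
        Addr          string
        Before, After cty.Value
    }

    func computeDrift(prior, refreshed map[string]cty.Value) []driftEntry {
        var drift []driftEntry
        for addr, before := range prior {
            after, ok := refreshed[addr]
            if !ok || !after.RawEquals(before) {
                drift = append(drift, driftEntry{Addr: addr, Before: before, After: after})
            }
        }
        return drift
    }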
For resources which are planned to move, render the previous run address
as additional information in the plan UI. For the case of a move-only
resource (which otherwise is unchanged), we also render that as a
planned change, but without any corresponding action symbol.
If all changes in the plan are moves without changes, the plan is no
longer considered "empty". In this case, we skip rendering the action
symbols in the UI.
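A minimal sketch of those two rendering rules; the exact wording and symbols
in the real UI may differ.

    import (
        "fmt"
        "io"
    )

    // Sketch only: render the previous run address when it differs from the
    // current one, and omit the action symbol for a move-only change.
    func renderChange(w io.Writer, addr, prevRunAddr string, hasDiff bool) {
        symbol := "  " // a move without any other change gets no action symbol
        if hasDiff {
            symbol = "~ "
        }
        fmt.Fprintf(w, "%s%s\n", symbol, addr)
        if prevRunAddr != addr {
            fmt.Fprintf(w, "    (moved from %s)\n", prevRunAddr)
        }
    }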
In order to expose the effect of any relevant "moved" statements we dealt
with prior to creating the plan, we'll record with each
ResourceInstanceChange both its current address and the address it was
tracked at for the previous run.
To save consumers of these objects from having to special-case the
situation where there _was_ no previous run (e.g. because this is a Create
change), we'll just pretend the previous run address was the same as the
current address in that case, the same as for an update without any
renaming in effect.
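In shape, the change record now carries something like the following. Field
names and types are simplified here; the real change type uses full address
values rather than strings.

    type ResourceInstanceChange struct {
        Addr        string // address the instance is tracked at now
        PrevRunAddr string // address it was tracked at in the previous run
        // ...other fields omitted...
    }

    // Sketch of the defaulting rule: with no previous run (e.g. a Create) or
    // no move in effect, the previous run address is just the current one.
    func newChange(addr, prevRunAddr string) ResourceInstanceChange {
        if prevRunAddr == "" {
            prevRunAddr = addr
        }
        return ResourceInstanceChange{Addr: addr, PrevRunAddr: prevRunAddr}
    }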
This includes a breaking change to the plan file format, but one that
doesn't require a version number increment because there is no ambiguity
between the two formats and so mismatched parsers will already fail with
an error message.
As of this commit we've just added the new field but not yet populated it
with any useful information: it always just matches Addr. A future commit
will wire this up to the result of applying the moves so that we can
populate it correctly. We also don't yet expose this new information
anywhere in the UI layer.
While blocks are not allowed to be computed by the provider, nested object
attributes can be. Remove the errors that assumed block semantics here and
verify that unknown values are valid.
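Roughly, the relaxed check looks like this; a hedged sketch of the idea, not
the real validation code.

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // Sketch only: an unknown value is acceptable for a computed nested
    // object attribute, where previously it was rejected under the rules
    // that apply to blocks.
    func validateNestedObjectValue(v cty.Value, computed bool) error {
        if !v.IsKnown() {
            if computed {
                return nil // the provider may leave this unknown until apply
            }
            return fmt.Errorf("unexpected unknown value for non-computed attribute")
        }
        return nil
    }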
We have a few different .proto files in this repository that all need to
get recompiled into .pb.go files each time we change them, but we were
previously handling that with some scripts that just assumed that protoc
and the relevant plugins were already installed on the system somewhere,
at the right versions.
In practice we've been constantly flopping between different versions of
these tools due to folks having different versions installed in their
development environments. In particular, the state of the .pb.go files
in the prior commit wasn't reproducible by any single version of the tools
because they've all slightly diverged from one another.
In the interests of being more consistent here and avoiding accidental
inconsistencies, we'll now centralize the protocol buffer compile steps
all into a single tool that knows how to fetch and install the expected
versions of the various tools we need and then run those tools with the
right options to get a stable result.
If we want to upgrade to either a newer protoc or a newer protoc-gen-go
in future then we'll do that in a central location and update all of the
.pb.go files at the same time, so that we're always consistently tracking
the same version of protocol buffers everywhere.
While doing this I attempted to keep as close as possible to the toolchain
we'd most recently used, but since the generated files were not consistent
with each other they've all now changed, at minimum in the tool version
numbers they record, and the planproto stub in particular also has a slightly
different descriptor serialization while otherwise offering the same API.
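The general approach, sketched below with placeholder version numbers and
options, is to pin the toolchain in one place and run protoc from that single
tool; the real tool also fetches and installs the pinned releases before
invoking them.

    import (
        "os"
        "os/exec"
    )

    // Sketch only: the version numbers and protoc options are placeholders.
    const (
        protocVersion      = "3.15.6" // placeholder
        protocGenGoVersion = "1.26.0" // placeholder
    )

    func compileProto(protoFile string) error {
        cmd := exec.Command(
            "protoc",
            "--go_out=paths=source_relative:.",
            protoFile,
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run() // fails if the pinned tools aren't installed yet
    }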
* configs/configschema: fix missing "computed" attributes from NestedObject's ImpliedType
listOptionalAttrsFromObject was not including "computed" attributes in the list of optional object attributes. This is now fixed. I've also added some tests and fixed some panics and otherwise bad behavior when bad input is given.
One notable change is in ImpliedType, which was panicking on an invalid nesting mode. The comment expressly states that it will return a result even when the schema is inconsistent, so I removed the panic and instead return an empty object.
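A condensed sketch of the fix, with an abbreviated attribute representation:
both optional and computed attributes must appear in the optional list passed
to cty, since a computed attribute may also be absent from configuration.

    import "github.com/zclconf/go-cty/cty"

    // Sketch only, using the real cty helper but a simplified attribute type.
    type attrSpec struct {
        Type               cty.Type
        Optional, Computed bool
    }

    func impliedObjectType(attrs map[string]attrSpec) cty.Type {
        attrTypes := make(map[string]cty.Type, len(attrs))
        var optional []string
        for name, a := range attrs {
            attrTypes[name] = a.Type
            if a.Optional || a.Computed { // previously only a.Optional was considered
                optional = append(optional, name)
            }
        }
        return cty.ObjectWithOptionalAttrs(attrTypes, optional)
    }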
An unknown block represents a dynamic configuration block with an
unknown for_each value. We were not catching the case where a provider
modified this value unexpectedly, which would crash for blocks of type
NestingList, where the config value has no length for comparison.
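The missing check is roughly the following; a hedged sketch of the idea
rather than the real comparison code.

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // Sketch only: when the configured block value is wholly unknown (a
    // dynamic block with an unknown for_each), the planned value must remain
    // unknown too; comparing element-by-element would fail because the
    // config value has no length.
    func assertUnknownBlockUnchanged(config, planned cty.Value) error {
        if !config.IsKnown() && planned.IsKnown() {
            return fmt.Errorf("provider produced a known value for a block whose configuration is not yet known")
        }
        return nil
    }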
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.