In studying existing providers we've found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled, but
where it improves configuration readability to group all of its settings
together in a nested block.
The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.
NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.
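As a concrete illustration, here's a minimal sketch using Terraform's internal configschema package, where an always-active feature's settings are grouped under a NestingGroup block; the block and attribute names are invented for the example:

```go
package main

import (
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

// exampleSchema sketches a resource schema where the hypothetical
// "timeouts" feature is always enabled but its settings are grouped
// into a nested block. With NestingGroup the block value is never
// null: if the block is omitted from configuration its attributes are
// simply unset, so their defaults are chosen and shown in the plan.
func exampleSchema() *configschema.Block {
	return &configschema.Block{
		Attributes: map[string]*configschema.Attribute{
			"name": {Type: cty.String, Required: true},
		},
		BlockTypes: map[string]*configschema.NestedBlock{
			"timeouts": {
				// NestingSingle would make an omitted block null;
				// NestingGroup makes it an object of unset attributes.
				Nesting: configschema.NestingGroup,
				Block: configschema.Block{
					Attributes: map[string]*configschema.Attribute{
						"create": {Type: cty.String, Optional: true},
						"delete": {Type: cty.String, Optional: true},
					},
				},
			},
		},
	}
}
```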
The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
Some providers may generate quite large schemas, and the internal
default gRPC response size limit is 4MB. Raising it to 64MB should cover
almost any use case; if providers begin to approach that limit, we may
want to consider a finer-grained API to fetch individual resource schemas.
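For reference, here's a minimal sketch of how such a limit is raised on a gRPC client in Go; the exact wiring in Terraform's plugin client may differ:

```go
package main

import "google.golang.org/grpc"

// dialPlugin sketches raising the client-side gRPC receive limit from
// the default 4MB to 64MB, so that large provider schemas fit in a
// single response. The address handling here is illustrative.
func dialPlugin(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(
		addr,
		grpc.WithInsecure(), // plugin connections are local
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(64<<20)), // 64MB
	)
}
```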
If the registry is unresponsive, you will now get an error specific to
that condition, rather than a misleading "provider unavailable" type of
error. This change also adds debug logging for the situations where such
errors may occur.
Terraform Registry (and other registry implementations) can now return
an array of warnings with the versions response. These warnings are now
displayed to the user during a `terraform init`.
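A rough sketch of how a client might decode such a response; the JSON field names here are assumptions for illustration, not a definitive description of the registry protocol:

```go
package main

// versionsResponse sketches a registry "versions" response carrying the
// new warnings array alongside the version list. The field names are
// illustrative assumptions.
type versionsResponse struct {
	Versions []struct {
		Version string `json:"version"`
	} `json:"versions"`
	// Warnings, if present, are shown to the user during `terraform init`.
	Warnings []string `json:"warnings"`
}
```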
https://github.com/hashicorp/terraform/pull/19389 introduced a change to
the provider GPG signature verification process, and removed the
hardcoded HashiCorp GPG key.
While the changes were intended and are still planned for a future
release, we should still be verifying all providers in the TF 0.12.0
release against the HashiCorp GPG key until a more robust key
verification procedure is in place.
Fixes https://github.com/hashicorp/terraform/issues/20527
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that cause only minor quirky behavior or broken
edge cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.
The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.
As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.
To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll add a rather hacky new flag
that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.
Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.
No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in the future, on the assumption that
any provider supporting that protocol will always produce consistent
results.
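To make the mechanism concrete, here's a simplified sketch of how Core might honor such a flag; all of the names are illustrative stand-ins rather than the actual Core code:

```go
package main

import (
	"fmt"
	"log"
)

// validateApplyResult sketches demoting consistency problems from errors
// to log entries when the provider reports the legacy type system. The
// errs parameter stands in for whatever comparison of planned vs. final
// values Core performs.
func validateApplyResult(name string, legacyTypeSystem bool, errs []error) error {
	if len(errs) == 0 {
		return nil
	}
	if legacyTypeSystem {
		// Legacy SDK opt-out: log rather than fail, so that existing
		// quirky-but-working providers keep working.
		for _, err := range errs {
			log.Printf("[WARN] provider %q produced an inconsistent result, but is using the legacy SDK, so this is not treated as an error: %s", name, err)
		}
		return nil
	}
	return fmt.Errorf("provider %q produced an inconsistent result after apply: %s", name, errs[0])
}
```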
It seems that all of the tools we run here are now sufficiently
modules-aware to run without problems in modules mode, and indeed running
_not_ in modules mode was causing problems with locating packages in
mockgen.
Previously we were using the type name requested in the import to select
the schema, but a provider is free to return additional objects of other
types as part of an import result, and so it's important that we perform
schema selection separately for each returned object.
If we don't do this, we get confusing downstream errors where the
resulting object decodes to the wrong type and breaks various invariants
expected by Terraform Core.
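A simplified sketch of the corrected lookup, with stand-in types for the real provider response and schema structures:

```go
package main

import "fmt"

// importedObject stands in for one object returned by a provider's
// ImportResourceState call; its TypeName may differ from the resource
// type that was originally requested in the import.
type importedObject struct {
	TypeName string
	StateRaw []byte
}

type schema struct{ /* attribute definitions elided */ }

// decodeImported looks up the schema for each returned object's own
// type, rather than reusing the schema of the requested type.
func decodeImported(objs []importedObject, schemas map[string]*schema) error {
	for _, obj := range objs {
		s, ok := schemas[obj.TypeName]
		if !ok {
			return fmt.Errorf("provider returned object of unsupported resource type %q", obj.TypeName)
		}
		_ = s // decode obj.StateRaw using s, not the requested type's schema
	}
	return nil
}
```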
The testResourceImportOther test in the test provider didn't catch this
previously because it happened to have an identical schema to the other
resource type being imported. The schema is now changed, and there is also
a computed attribute we can set during the refresh phase to make sure
we're completing the Read call properly during import. Refresh was working
correctly, but we didn't have any tests for it as part of the import flow.
Any state-modifying functions can only be run once during the plan-apply
cycle. When regenerating the Diff during ApplyResourceChange, strip out
all StateFunc and CustomizeDiff functions from the schema.
The NewExtra diff field was where config data modified by a StateFunc was
stored, and it needs to be maintained between plan and apply. During
PlanResourceChange, store any NewExtra data from the Diff in the
PlannedPrivate data, and re-insert the NewExtra data into the Diff
generated during ApplyResourceChange.
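A sketch of that round trip, using a struct that mirrors the relevant part of the legacy terraform.ResourceAttrDiff type; the JSON packaging is an illustrative assumption, not necessarily the encoding the SDK uses:

```go
package main

import "encoding/json"

// resourceAttrDiff mirrors the relevant part of the legacy
// terraform.ResourceAttrDiff type, whose NewExtra field carries config
// data modified by a StateFunc.
type resourceAttrDiff struct {
	Old      string
	New      string
	NewExtra interface{}
}

// stashNewExtra sketches the plan side: collect NewExtra per attribute
// and serialize it for the PlannedPrivate blob.
func stashNewExtra(attrs map[string]*resourceAttrDiff) ([]byte, error) {
	extra := make(map[string]interface{})
	for k, d := range attrs {
		if d.NewExtra != nil {
			extra[k] = d.NewExtra
		}
	}
	return json.Marshal(extra)
}

// restoreNewExtra sketches the apply side: re-insert the stashed values
// into the diff regenerated during ApplyResourceChange, where StateFunc
// and CustomizeDiff have been stripped from the schema.
func restoreNewExtra(private []byte, attrs map[string]*resourceAttrDiff) error {
	var extra map[string]interface{}
	if err := json.Unmarshal(private, &extra); err != nil {
		return err
	}
	for k, v := range extra {
		if d, ok := attrs[k]; ok {
			d.NewExtra = v
		}
	}
	return nil
}
```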
The rest of Terraform is still using uint64 for this in various spots, but
we'll update that gradually later. We use int64 here because that matches
what's used in our protobuf definition, and unsigned integers are not
portable across all of the protobuf target languages anyway.
When verifying the signature of the SHA256SUMS file, we have been
hardcoding HashiCorp's public GPG key and using it as the keyring.
Going forward, Terraform will get a list of valid public keys for a
provider from the Terraform Registry (registry.terraform.io) and use
them as the keyring for the openpgp verification function.
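A minimal sketch of that verification flow using the x/crypto/openpgp package; the parameter shapes are assumptions for illustration:

```go
package main

import (
	"bytes"
	"strings"

	"golang.org/x/crypto/openpgp"
)

// verifySums builds a keyring from the armored public keys returned by
// the registry, then verifies the detached signature over the
// SHA256SUMS document. A successful check returns the signing entity,
// which must be one of the registry-provided keys.
func verifySums(armoredKeys []string, sums, sig []byte) (*openpgp.Entity, error) {
	var keyring openpgp.EntityList
	for _, k := range armoredKeys {
		entities, err := openpgp.ReadArmoredKeyRing(strings.NewReader(k))
		if err != nil {
			return nil, err
		}
		keyring = append(keyring, entities...)
	}
	return openpgp.CheckDetachedSignature(keyring, bytes.NewReader(sums), bytes.NewReader(sig))
}
```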
The main significant change here is that the package name for the proto
definition is "tfplugin5", which is important because this name is part
of the wire protocol for references to types defined in our package.
Along with that, we also move the generated package into "internal" to
make it explicit that importing the generated Go package from elsewhere is
not the right approach for externally-implemented SDKs, which should
instead vendor the proto definition they are using and generate their
own stubs to ensure that the wire protocol is the only hard dependency
between Terraform Core and plugins.
After this is merged, any provider binaries built against our
helper/schema package will need to be rebuilt so that they use the new
"tfplugin5" package name instead of "proto".
In a future commit we will include more elaborate and organized
documentation on how an external codebase might make use of our RPC
interface definition to implement an SDK, but the primary concern here
is to ensure we have the right wire package name before release.
Since protoc is not go-gettable, and most development tasks in Terraform
won't involve recompiling protoc files anyway, we'll use a separate
mechanism for these.
This way "go generate" only depends on things we can "go get" in the
"make tools" target.
In a later commit we should also in some way specify a particular version
of protoc to use so that we don't get "flapping" regenerations as
developers work with different versions, but the priority here is just to
make "make generate" minimally usable again to restore the dev workflow
documented in the README.
This also includes some updates that resulted from running "make generate"
and "make protobuf" after those Makefile changes were in place.
Even if a provider doesn't indicate a specific attribute as the cause of
a resource operation error, we know the error relates to some aspect of
the resource, so we'll include that approximate information in the result
so that we don't produce user-hostile error messages with no context
whatsoever.
Later we can hopefully refine this to place the source range on the header
of the configuration block rather than on an empty part of the body, but
that'll require some more complex rework here and so for now we'll just
accept this as an interim state so that the user can at least figure out
which resource block the error is coming from.
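A simplified sketch of the approach, with stand-in types since the real implementation lives in Terraform's tfdiags package; everything here is illustrative of the idea rather than the actual Core code:

```go
package main

import "fmt"

// sourceRange stands in for Terraform's source range type.
type sourceRange struct {
	Filename  string
	Line, Col int
}

type diagnostic struct {
	Summary string
	Detail  string
	Subject *sourceRange
}

// elaborate attaches the resource block's approximate range to a
// provider error that arrived with no attribute path, so the user can
// at least see which resource block the error came from.
func elaborate(providerErr error, resourceAddr string, block *sourceRange) diagnostic {
	return diagnostic{
		Summary: "Provider produced an error",
		Detail:  fmt.Sprintf("Error with %s: %s", resourceAddr, providerErr),
		// For now this points at the block body; ideally it would point
		// at the block header instead.
		Subject: block,
	}
}
```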