This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
At the time of this commit we have a proposal #28700 which would, if
accepted, need to reserve a new reference prefix to represent template
arguments.
It seems unlikely that the proposal would be accepted and implemented
before Terraform v1.0 creates additional compatibility constraints, and so
this pre-emptively reserves a few candidate symbol names to allow
something like that proposal to potentially move forward later without
requiring a new opt-in language edition.
If we do move forward with the proposal then we'll select one of these
three reserved names depending on which form of the proposal we decide
to move forward with, and then un-reserve the other two. If we decide to
not pursue this proposal at all then we'll un-reserve all three once
that decision is finalized.
It's unlikely that any existing provider has a resource type named
"template", "lazy", or "arg", but in that rare event users of such a
provider can keep using it by adding the "resource." escaping prefix,
for example by changing "lazy.foo.bar" into "resource.lazy.foo.bar".
The current way to refer to a managed resource is to use its resource type
name as a top-level symbol in the reference. This is convenient and makes
sense given that managed resources are the primary kind of object in
Terraform.
However, it does mean that an externally-extensible namespace (the set
of all possible resource type names) overlaps with a reserved word
namespace (the special prefixes like "path", "var", etc), and thus
introducing any new reserved prefix in future risks masking an existing
resource type so it can't be used anymore.
We only intend to introduce new reserved symbols as part of future
language editions that each module can opt into separately, and when doing
so we will always do research to try to choose a name that doesn't overlap
with commonly-used providers, but not all providers are visible to us and
so there is always a small chance that the name we choose will already be
in use by a third-party provider.
In preparation for that event, this introduces an alternative way to refer
to managed resources that mimics the reference style used for data
resources: resource.type.name. When using this form, the second portion
is _always_ a resource type name and never a reserved word.
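For comparison, here is a sketch using hypothetical aws_ami and aws_instance
examples (not taken from this change) showing how the new form mirrors the
existing data resource style:

    data "aws_ami" "example" {
      # (arguments omitted)
    }

    resource "aws_instance" "example" {
      # (arguments omitted)
    }

    output "instance_id" {
      # Equivalent to aws_instance.example.id; in this form the word
      # after "resource." is always a resource type name, never a
      # reserved symbol.
      value = resource.aws_instance.example.id
    }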
There is currently no need to use this because all of the already-reserved
symbol names are effectively blocked from use by existing Terraform
versions that lack this escape hatch. Therefore there's no explicit
documentation about it yet.
The intended use for this is that a module upgrade tool for a future
language edition would detect references to resource types that have now
become reserved words and add the "resource." prefix to keep that
functionality working. Existing modules that aren't opted in to the new
language edition would keep working without that prefix, thus keeping to
compatibility promises.
Several top-level block types in the Terraform language have a body where
two different schemas are overlaid on top of one another: Terraform first
looks for "meta-arguments" that are built into the language, and then
evaluates all of the remaining arguments against some externally-defined
schema whose content is not fully controlled by Terraform.
So far we've been cautiously adding new meta-arguments in these namespaces
after research shows us that there are relatively few existing providers
or modules that would have functionality masked by those additions, but
that isn't really a viable path forward as we prepare to make stronger
compatibility promises.
In an earlier commit we've introduced the foundational parts of a new
language versioning mechanism called "editions" which should allow us to
make per-module-opt-in breaking changes in the future, but these shared
namespaces remain a liability because it would be annoying if adopting a
new edition made it impossible to use a feature of a third-party provider
or module that was already using a name that has now become reserved in
the new edition.
This commit introduces a new syntax intended to be a rarely-used escape
hatch for that situation. When we're designing new editions we will do our
best to choose names that don't conflict with commonly-used providers and
modules, but there are many providers and modules that we cannot see and
so there is a risk that any name we might choose could collide with at
least one existing provider or module. The automatic migration tool to
upgrade an existing module to a new edition should therefore detect that
situation and make use of this escaping block syntax in order to retain
the existing functionality until all the called providers or modules are
updated to no longer use conflicting names.
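One plausible shape for that, sketched here with invented names (the
"examplecloud_thing" resource type and "retention" argument are hypothetical,
as is the idea of "retention" becoming a reserved meta-argument in a future
edition):

    resource "examplecloud_thing" "a" {
      # Hypothetically, a future edition reserves "retention" as a new
      # meta-argument while this provider's schema already has an
      # argument of that name. Placing it inside the "_" block asks
      # Terraform to pass it through to the provider schema instead of
      # interpreting it as the new meta-argument.
      _ {
        retention = 30
      }
    }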
Although we can't enforce technical constraints against using this feature
for other purposes (because we don't know yet what future editions will add),
this mechanism is intentionally not documented for now because it serves
no immediate purpose. In effect, this change is just squatting on the
syntax of a special block type named "_" so that later editions can make
use of it without it _also_ conflicting, creating a confusing nested
escaping situation. However, the first time a new edition actually makes
use of this syntax we should then document alongside the meta-arguments
so folks can understand the meaning of escaping blocks produced by
edition upgrade tools.
Add `init -migrate-state` flag to indicate automatic state migration is
desired. This flag will be implied by the `-force-copy` flag, since that
would indicate state migration is expected.
If `init` encounters a change to the stored backend configuration, it
will now always return an error when neither `-reconfigure` nor
`-migrate-state` is supplied.
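For example (the bucket, key, and region values below are placeholders),
after editing a previously-initialized backend block like this one, a plain
`terraform init` now returns an error until one of the two flags is chosen:

    terraform {
      backend "s3" {
        bucket = "example-state-bucket" # changed from the stored configuration
        key    = "prod/terraform.tfstate"
        region = "us-east-1"
      }
    }

    # terraform init -migrate-state   (migrate existing state to the new backend)
    # terraform init -reconfigure     (use the new configuration without migrating)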
Turn the most common legacy output strings into diagnostics, removing
the "see above text" error output.
For the `closed_issue_locker` behavior, this is a migration to an equivalent action.
For the `label_issue_migrater` behavior, there is no replacement; instead we
suggest using GitHub's native functionality for issue transfer. If mostly
equivalent behavior via label automation is desired, it may be possible to
submit an issue transfer feature request to dessant/label-actions (a popular
community action) or to create a new GitHub Action. Please reach out if this
is a major issue for your team.
For the `remove_labels_on_reply` behavior, the replacement is equivalent,
except that this initial configuration does not make the collaborators
distinction. There is a workflow configuration workaround for setting up
per-user ignores on any job or step, so please reach out if you would like
that here.
- I'm using distinct subheaders and smaller paragraphs to try to make the info
about apply's two modes more skimmable.
- I'm also adding a separate "Plan Options" subheader (and keeping the section
tiny so it stays snugged up right next to the "Apply Options" one) to make it
extra-clear that Hey, There's More Options, They're Over There.
It's a relatively common mistake to try to refer to a data resource
without including the data. prefix, making Terraform understand it as a
reference to a managed resource.
To help with that case, we'll include an additional suggestion if we can
see that there's a data resource declared with the same type and name as
in the given address.
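For example (the aws_ami names are illustrative), given a configuration like
the following, a reference written as aws_ami.ubuntu.id is understood as a
managed resource address, and the error can now also point at the matching
data block:

    data "aws_ami" "ubuntu" {
      # (arguments omitted)
    }

    output "ami_id" {
      # Writing aws_ami.ubuntu.id here would refer to an undeclared
      # managed resource, so the diagnostic now additionally suggests
      # the form below.
      value = data.aws_ami.ubuntu.id
    }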
Once a plugin process is started, go-plugin will redirect the stdout and
stderr stream through a grpc service and provide those streams to the
client. This is rarely used, as it is prone to failing with races
because those same file descriptors are needed for the initial handshake
and logging setup, but data may be accidentally sent to these
nonetheless.
The usual culprits are stray `fmt.Print` usage where logging was
intended, or the configuration of a logger after the os.Stderr file
descriptor was replaced by go-plugin. These situations are very hard for
provider developers to debug since the data is discarded entirely.
While there may be improvements to be made in the go-plugin package to
configure this behavior, in the meantime we can add a simple monitoring
io.Writer to the streams which will surface the data as warnings in the
logs instead of writing it to `io.Discard`.
In the very unusual situation where we end up planning to destroy a
deposed object alone, it's likely that we're exposing users to this idea
of "deposed" for the very first time.
This additional sentence will hopefully give some extra context for what
that means. We don't really have room here for a lengthy explanation about
how deposed objects come to exist but this will hopefully be enough of
a hook to help users connect this with an error they saw on a previous
run, or at least to have some additional keywords to search for if they
want to research further.
In many ways a deposed object is equivalent to an orphaned current object
in that the only action we can take with it is to destroy it. However, we
do still need to take some preparation steps in both cases: first, we must
ensure we track the upgraded version of the existing object so that we'll
be able to successfully render our plan, and second, we must refresh the
existing object to make sure it still exists in the remote system.
We were previously doing these extra steps for orphan objects but not for
deposed ones, which meant that the behavior for deposed objects would be
subtly different and violate the invariants our callers expect in order
to display a plan. This also created the risk that a deposed object
already deleted in the remote system would become "stuck" because
Terraform would still plan to destroy it, which might cause the provider
to return an error when it tries to delete an already-absent object.
This also makes the deposed object planning take into account the
"skipPlanChanges" flag, which is important to get a correct result in
the "refresh only" planning mode.
It's a shame that we have almost identical code handling both the orphan
and deposed situations, but they differ in that the latter must call
different functions to interact with the deposed rather than the current
objects in the state. Perhaps a later change can improve on this with some
more refactoring, but this commit is already a little more disruptive than
I'd like and so I'm intentionally deferring that for another day.
This is a light revamp of our plan output to make use of Terraform core's
new ability to report both the previous run state and the refreshed state,
allowing us to explicitly report changes made outside of Terraform.
Because whether a plan has "changes" or not is no longer such a
straightforward matter, this now merges views.Operation.Plan with
views.Operation.PlanNoChanges to produce a single function that knows how
to report all of the various permutations. This was also an opportunity
to fill some holes in our previous logic which caused it to produce some
confusing messages, including a new tailored message for when
"terraform destroy" detects that nothing needs to be destroyed.
This also allows users to request the refresh-only planning mode using a
new -refresh-only command line option. In that case, Terraform _only_
performs drift detection, and so applying a refresh-only plan only
involves writing a new state snapshot, without changing any real
infrastructure objects.
We've always had a mechanism to synchronize the Terraform state with
remote objects before creating a plan, but we previously kept the result
of that to ourselves. As a result, Terraform would sometimes plan an
action to undo some upstream drift without giving any context as to why
that action was planned, even though the relevant part of the
configuration hadn't changed.
Now we'll detect any differences between the previous run state and the
refreshed state and, if any managed resources now look different, show
an additional note about it as extra context for the planned changes that
follow.
This appears as an optional extra block of information before the normal
plan output. It'll appear the same way in all of the contexts where we
render plans, including "terraform show" for saved plans.
When we detect that a module is trying to declare a resource type that
doesn't exist in its corresponding provider, we have enough information to
give some hints as to what might be wrong.
We give preference to the possibility of mixing up a managed resource type
with a data resource type, but failing that we fall back on our usual
Levenshtein-distance-based similar-name suggestion function.
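For example (using an invented "examplecloud_regions" type that exists only
as a data source in its hypothetical provider), the preferred hint covers a
declaration like this:

    # Invalid: no managed resource type named "examplecloud_regions"...
    resource "examplecloud_regions" "all" {
    }

    # ...so the error can hint that a data resource was probably intended:
    data "examplecloud_regions" "all" {
    }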
We don't typically test exact error message text, and since this is just a
cosmetic change to the message output there are no new tests or test
modifications here.
When we fail to read the organization entitlements, it's not
always because the organization doesn't exist; in practice, it's
usually due to a misspelled organization name, forgetting to
specify the correct hostname, or an invalid API token. Surfacing
more information within the error message will help new users
debug these issues more effectively.
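As an illustration (the hostname and organization values are placeholders),
these settings come from the remote backend configuration, where a misspelled
organization, a wrong or omitted hostname, or an invalid API token can each
cause the entitlements request to fail:

    terraform {
      backend "remote" {
        hostname     = "app.terraform.io" # wrong or omitted hostname is one common cause
        organization = "my-example-org"   # a misspelled organization name is another
        workspaces {
          name = "production"
        }
      }
    }

    # An invalid or expired API token (stored outside the configuration,
    # e.g. via "terraform login") is a third common cause that the richer
    # error message helps distinguish.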