For the `closed_issue_locker` behavior, this is a migration to an equivalent action.
For the `label_issue_migrater` behavior, this is not replaced; instead, we suggest using native GitHub functionality for issue transfer. If mostly equivalent behavior is desired via label automation, it may be possible to submit an issue transfer feature request to dessant/label-actions (a popular community action) or to create a new GitHub Action. Please reach out if this is a major issue for your team.
For the `remove_labels_on_reply` behavior, it is equivalent, except that this initial configuration does not make the collaborators distinction. There is a workflow configuration workaround for setting up per-user ignores for any job/step, so if you desire that here please reach out.
- I'm using distinct subheaders and smaller paragraphs to try and make the info
about apply's two modes more skimmable.
- I'm also adding a separate "Plan Options" subheader (and keeping the section
tiny so it stays snugged up right next to the "Apply Options" one) to make it
extra-clear that Hey, There's More Options, They're Over There.
It's a relatively common mistake to try to refer to a data resource
without including the data. prefix, making Terraform understand it as a
reference to a managed resource.
To help with that case, we'll include an additional suggestion if we can
see that there's a data resource declared with the same type and name as
in the given address.
Once a plugin process is started, go-plugin will redirect the stdout and
stderr streams through a gRPC service and provide those streams to the
client. This is rarely used, as it is prone to failing with races
because those same file descriptors are needed for the initial handshake
and logging setup, but data may be accidentally sent to these streams
nonetheless.
The usual culprits are stray `fmt.Print` usage where logging was
intended, or the configuration of a logger after the os.Stderr file
descriptor was replaced by go-plugin. These situations are very hard for
provider developers to debug since the data is discarded entirely.
While there may be improvements to be made in the go-plugin package to
configure this behavior, in the meantime we can add a simple monitoring
io.Writer to the streams which will surface the data as warnings in the
logs instead of writing it to `io.Discard`.
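As a rough sketch of that idea (the type and field names here are illustrative, not the actual go-plugin or Terraform implementation), such a monitoring writer could look like this:

```go
package logging

import (
	hclog "github.com/hashicorp/go-hclog"
)

// pluginStreamWriter is an illustrative sketch of an io.Writer that
// surfaces stray plugin stdout/stderr data as warnings rather than
// silently discarding it.
type pluginStreamWriter struct {
	logger hclog.Logger
	stream string // e.g. "stdout" or "stderr"
}

func (w *pluginStreamWriter) Write(p []byte) (int, error) {
	if len(p) > 0 {
		// Surface the unexpected data so provider developers can see it
		// in the logs rather than losing it entirely.
		w.logger.Warn("unexpected data on plugin output stream",
			"stream", w.stream,
			"data", string(p),
		)
	}
	// Report the full length as consumed so the stream keeps flowing.
	return len(p), nil
}
```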
In the very unusual situation where we end up planning to destroy a
deposed object alone, it's likely that we're exposing users to this idea
of "deposed" for the very first time.
This additional sentence will hopefully give some extra context for what
that means. We don't really have room here for a lengthy explanation about
how deposed objects come to exist but this will hopefully be enough of
a hook to help users connect this with an error they saw on a previous
run, or at least to have some additional keywords to search for if they
want to research further.
In many ways a deposed object is equivalent to an orphaned current object
in that the only action we can take with it is to destroy it. However, we
do still need to take some preparation steps in both cases: first, we must
ensure we track the upgraded version of the existing object so that we'll
be able to successfully render our plan, and secondly we must refresh the
existing object to make sure it still exists in the remote system.
We were previously doing these extra steps for orphan objects but not for
deposed ones, which meant that the behavior for deposed objects would be
subtly different and violate the invariants our callers expect in order
to display a plan. This also created the risk that a deposed object
already deleted in the remote system would become "stuck" because
Terraform would still plan to destroy it, which might cause the provider
to return an error when it tries to delete an already-absent object.
This also makes the deposed object planning take into account the
"skipPlanChanges" flag, which is important to get a correct result in
the "refresh only" planning mode.
It's a shame that we have almost identical code handling both the orphan
and deposed situations, but they differ in that the latter must call
different functions to interact with the deposed rather than the current
objects in the state. Perhaps a later change can improve on this with some
more refactoring, but this commit is already a little more disruptive than
I'd like and so I'm intentionally deferring that for another day.
This is a light revamp of our plan output to make use of Terraform core's
new ability to report both the previous run state and the refreshed state,
allowing us to explicitly report changes made outside of Terraform.
Because whether a plan has "changes" or not is no longer such a
straightforward matter, this now merges views.Operation.Plan with
views.Operation.PlanNoChanges to produce a single function that knows how
to report all of the various permutations. This was also an opportunity
to fill some holes in our previous logic which caused it to produce some
confusing messages, including a new tailored message for when
"terraform destroy" detects that nothing needs to be destroyed.
This also allows users to request the refresh-only planning mode using a
new -refresh-only command line option. In that case, Terraform _only_
performs drift detection, and so applying a refresh-only plan only
involves writing a new state snapshot, without changing any real
infrastructure objects.
We've always had a mechanism to synchronize the Terraform state with
remote objects before creating a plan, but we previously kept the result
of that to ourselves, and so it would sometimes lead to Terraform
generating a planned action to undo some upstream drift, but Terraform
would give no context as to why that action was planned even though the
relevant part of the configuration hadn't changed.
Now we'll detect any differences between the previous run state and the
refreshed state and, if any managed resources now look different, show
an additional note about it as extra context for the planned changes that
follow.
This appears as an optional extra block of information before the normal
plan output. It'll appear the same way in all of the contexts where we
render plans, including "terraform show" for saved plans.
When we detect that a module is trying to declare a resource type that
doesn't exist in its corresponding provider, we have enough information to
give some hints as to what might be wrong.
We give preference to the possibility of mixing up a managed resource type
with a data resource type, but if not that then we'll use our usual
levenshtein-distance-based similar name suggestion function.
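For context, a generic helper of that kind might look roughly like the following; this is a hedged sketch with illustrative names, not the exact function used in Terraform:

```go
package didyoumean

// nameSuggestion returns the candidate with the smallest Levenshtein
// distance from the given name, but only if that distance is small
// enough to plausibly be a typo.
func nameSuggestion(given string, candidates []string) string {
	best, bestDist := "", 3 // only suggest when the distance is at most 2
	for _, c := range candidates {
		if d := levenshtein(given, c); d < bestDist {
			best, bestDist = c, d
		}
	}
	return best
}

// levenshtein computes the edit distance between two strings using the
// classic two-row dynamic programming approach.
func levenshtein(a, b string) int {
	ar, br := []rune(a), []rune(b)
	prev := make([]int, len(br)+1)
	cur := make([]int, len(br)+1)
	for j := range prev {
		prev[j] = j
	}
	for i := 1; i <= len(ar); i++ {
		cur[0] = i
		for j := 1; j <= len(br); j++ {
			cost := 1
			if ar[i-1] == br[j-1] {
				cost = 0
			}
			cur[j] = min3(prev[j]+1, cur[j-1]+1, prev[j-1]+cost)
		}
		prev, cur = cur, prev
	}
	return prev[len(br)]
}

func min3(a, b, c int) int {
	m := a
	if b < m {
		m = b
	}
	if c < m {
		m = c
	}
	return m
}
```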
We currently don't typically test exact error message text, and this is
just a cosmetic change to the message output, so there are no new tests
or test modifications here.
When we fail to read the organization entitlements, it's not
always because the organization doesn't exist; in practice, it's
usually due to a misspelled organization name, forgetting to
specify the correct hostname, or an invalid API token. Surfacing
more information within the error message will help new users
debug these issues more effectively.
Previously our file hashing functions were backed by the same "read file
into memory" function we use for situations like "file" and "templatefile",
meaning that they'd read the entire file into memory first and then
calculate the hash from that buffer.
All of the hash implementations we use here can calculate hashes from a
sequence of smaller buffer writes though, so there's no actual need for
us to create a file-sized temporary buffer here.
This, then, is a small refactoring of our underlying function into two
parts, where one is responsible for deciding the actual filename to load
and opening it, and the other is responsible for buffering the file into
memory. Our hashing functions can then use only the first function and
skip the second.
This then allows us to use io.Copy to stream from the file into the
hashing function in smaller chunks, possibly of a size chosen by the hash
function if it happens to implement io.ReaderFrom.
The new implementation is functionally equivalent to the old but should
use less temporary memory if the user passes a large file to one of the
hashing functions.
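A rough sketch of the streaming shape described above, with illustrative helper names rather than the exact internal ones:

```go
package funcs

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
	"path/filepath"
)

// openFile resolves the given path relative to baseDir and opens it for
// reading; illustrative only, the real helper handles more concerns.
func openFile(baseDir, path string) (*os.File, error) {
	if !filepath.IsAbs(path) {
		path = filepath.Join(baseDir, path)
	}
	return os.Open(filepath.Clean(path))
}

// fileSHA256 streams the file contents into the hash in small chunks via
// io.Copy, rather than buffering the whole file in memory first.
func fileSHA256(baseDir, path string) (string, error) {
	f, err := openFile(baseDir, path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```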
Previously the docs for this were rather confusing because they showed an
option to turn _on_ state locking, even though it's on by default.
Instead, we'll now show -lock=false in all cases and document it as
_disabling_ the default locking.
While working on this I also noticed that the equivalent docs on the
website were inconsistent in a different way. I've not made them fully
consistent here, but they are at least more consistent than they were
before.
My original motivation here was to add the previously-missing -dry-run
option to the list of options.
However, while in the area I noticed that this command hasn't had a
documentation refresh for a while, so I took the opportunity to update
it to match the writing style and terminology used in other parts of the
documentation. I've therefore rewritten prose elsewhere on the page to
hopefully give the same information in a way that fits in better with
concepts discussed elsewhere, and to add some additional context
connecting this information with what we've described in other places.
This rewrite also drops the example of moving from one "state file" to
another, because that's a legacy usage pattern that isn't supported when
using remote backends, and since we recommend that most folks use remote
backends it would be strange to show an example that won't work for most
people. Rather than adding additional qualifiers to that example I chose
to just remove it altogether, because we've generally been working to
de-emphasize these legacy local backend command line options elsewhere in
the documentation.
The Git book seems to be using a different anchor format now, and so this
link was previously effectively linking to the page as a whole rather
than to the specific section we're trying to refer to.
We previously had only very short descriptions of what
-ignore-remote-version does due to having the documentation for it inline
on many different command pages and -help output.
Instead, we'll now centralize the documentation about this argument on
the remote backend page, and link to it or refer to it from all other
locations. This then allows us to spend more words on discussing what
Terraform normally does _without_ this option and warning about the
consequences of using it.
This continues earlier precedent for some local-backend-specific options
which we also don't recommend for typical use. While this does make these
options a little more "buried" than before, that feels justified given
that they are all "exceptional use only" sort of options where users ought
to learn about various caveats before using them.
While there I also took this opportunity to fix some earlier omissions
with the local-backend-specific options and a few other minor consistency
tweaks.
Make sure that this function can handle any unexpectedly marked values.
The only remaining caller of this function is in the diff formatter,
which uses it to suppress meaningless diffs created by legacy providers.
Marks stored in a plans.ChangeSrc were not decoded along with the
stored values. This worked in many cases because evaluation would
correctly re-derive the marks, but that cannot happen in all cases.
* Add link to Modules in Package Sub-directories
Add link to "Modules in Package Sub-directories" section at top of page
* Fix broken links
* Update aws link, fixes missing anchor linkcheck
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
This commit makes two changes to the provisioner connection block code:
- Change the `port` argument type from string to number, which is
technically more correct and consistent with `bastion_port`;
- Use `uint16` as the struct member type for both ports instead of
`int`, which gets us free range validation from the gocty package.
Includes a test of the validation message when the port number is an
invalid integer.
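As an illustration of the shape of that change (the struct and function names here are assumptions, not the exact code), decoding into uint16 fields via gocty looks roughly like this:

```go
package communicator

import (
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/gocty"
)

// connectionInfo is a sketch of the relevant struct members only; the
// real struct has many more fields.
type connectionInfo struct {
	Port        uint16
	BastionPort uint16
}

// decodePorts assumes v is the decoded connection block object value.
// Decoding a cty.Number into a uint16 gives range validation for free:
// gocty returns an error for non-whole numbers or values outside the
// 0-65535 range, so no extra checks are needed here.
func decodePorts(v cty.Value, info *connectionInfo) error {
	if v.Type().HasAttribute("port") && !v.GetAttr("port").IsNull() {
		if err := gocty.FromCtyValue(v.GetAttr("port"), &info.Port); err != nil {
			return err
		}
	}
	if v.Type().HasAttribute("bastion_port") && !v.GetAttr("bastion_port").IsNull() {
		if err := gocty.FromCtyValue(v.GetAttr("bastion_port"), &info.BastionPort); err != nil {
			return err
		}
	}
	return nil
}
```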
We now have RunningInAutomation as a general concern in views.View, so
we no longer need to specify it for each command-specific constructor
separately.
For this initial change I focused only on changing the exported interface
of the views package and let the command-specific views go on having their
own unexported fields containing a copy of the flag because it made this
change less invasive and I wasn't feeling sure yet about whether we
ought to have code within command-specific views directly access the
internals of views.View. However, maybe we'll simplify this further in
a later commit if we conclude that these copies of the flag are
burdensome.
The general version of this gets set directly inside the main package,
which might at some future point allow us to make the command package
itself unaware of this "running in automation" idea and thus reinforce
that it's intended as a presentation-only thing rather than as a
behavioral thing, but we'll save more invasive refactoring for another
day.
This "running in automation" idea is a best effort thing where we try to
avoid printing out specific suggestions of commands to run in the main
workflow when the user is running Terraform inside a wrapper script or
other automation, because they probably don't want to bypass that
automation.
This just makes that information available to the main views.View type,
so we can then make use of it in the implementation of more specialized
view types that embed views.View.
However, nothing is using it as of this commit. We'll use it in later
commits.
This pattern follows as a natural consequence of how for_each is defined,
but I've noticed from community forum Q&A that newcomers often don't
immediately notice the connection between what for_each expects as input
and what a for_each resource produces as a result, so my aim here is to
show a short example of that in the hope of helping folks see the link
here and get ideas on how to employ the technique in other situations.
In practice the current implementation isn't actually using this, and if
we need access to states in future we can access them in either the
plan.PriorState or plan.PrevRunState fields, depending on which stage we
want a state snapshot of.
Previously in refresh-only mode we were skipping making any updates to the
working state at all. That's not correct, though: if the state upgrade or
refresh steps detected changes then we need to at least commit _those_ to
the working state, because those can then be detected by downstream
objects like output values.
Our model for plans/planfile has unfortunately grown inconsistent with
changes to our modeling of plans.Plan.
Originally we considered the plan "header" and the planned changes as an
entirely separate artifact from the prior state, but we later realized
that carrying the prior state around with the plan is important to
ensuring we always have enough context to faithfully render a plan to the
user, and so we added the prior state as a field of plans.Plan.
More recently we've also added the "previous run state" to plans.Plan for
similar reasons.
Unfortunately as a result of that modeling drift our ReadPlan method was
silently producing an incomplete plans.Plan object, causing use-cases like
"terraform show" to produce slightly different results due to the
plan object not round-tripping completely.
As a short-term tactical fix, here we add state snapshot reading into the
ReadPlan function. This is not an ideal solution because it means that
in the case of applying a plan, where we really do need access to the
state _file_, we'll end up reading the prior state file twice. However,
the goal here is only to heal the modelling quirk with as little change
as possible, because we're not currently at a point where we'd be willing
to risk regressions from a larger refactoring.
The connection block schema defines the bastion_port argument as a
number, but we were incorrectly trying to convert it from a string. This
commit fixes that by attempting to convert the cty.Number to the int
result type, returning the error on failure.
An alternative approach would be to change the bastion_port argument in
the schema to be a string, matching the port argument. I'm less sure
about the secondary effects of that change, though.
Several changes to lookup to improve how we handle marked values:
- If the entire collection is marked, preserve the marks on any result
(whether successful or fallback)
- If a returned value from the collection is marked, preserve the marks
from only that value, combined with any overall collection marks
- Retain marks on the fallback value when it is returned, combined with
any overall collection marks
- Include marks on the key in the result, as otherwise the result it
ends up selecting could imply what the sensitive value was
- Retain collection marks when returning an unknown value for a not
wholly-known collection
See also https://github.com/zclconf/go-cty/pull/98
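A simplified sketch of the mark-handling pattern described above, using go-cty's Unmark/WithMarks API (illustrative only, not the real lookup implementation):

```go
package funcs

import "github.com/zclconf/go-cty/cty"

// lookupSketch assumes collection is an indexable value such as a map.
func lookupSketch(collection, key, fallback cty.Value) cty.Value {
	// Strip marks from the inputs so we can inspect them, remembering
	// the marks so we can re-apply them to whatever we return.
	coll, collMarks := collection.Unmark()
	k, keyMarks := key.Unmark()

	// For a not wholly-known collection (or unknown key) we return an
	// unknown value, but still retain the collection and key marks.
	if !coll.IsWhollyKnown() || !k.IsKnown() {
		return cty.UnknownVal(cty.DynamicPseudoType).WithMarks(collMarks, keyMarks)
	}

	if coll.HasIndex(k).True() {
		// The element keeps its own marks, combined with the overall
		// collection marks and the key marks.
		return coll.Index(k).WithMarks(collMarks, keyMarks)
	}

	// The fallback also retains the collection and key marks.
	return fallback.WithMarks(collMarks, keyMarks)
}
```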
Similar to cty's implementation, we only need to preserve marks from the
value itself, not any nested values it may contain. This means that
taking the length of an unmarked list with marked elements results in an
unmarked number.
If we don't do this then we can create a situation where refresh detects
that an object already doesn't exist but we plan to destroy it anyway,
rather than returning "no changes" as expected.
The "previous run state" is our record of what the previous run of
Terraform considered to be its outcome, but in order to do anything useful
with that we must ensure that the data inside conforms to the current
resource type schemas, which might be different than the schemas that were
current during the previous run if the relevant provider has since been
upgraded.
For that reason then, we'll start off with the previous run state set
exactly equal to what was saved in the prior snapshot (modulo any changes
that happened during a state file format upgrade) but then during our
planning operation we'll overwrite individual resource instance objects
with the result of upgrading, so that in a situation where we successfully
run plan to completion the previous run state should always have a
compatible schema with the "prior state" (the result of refreshing) for
managed resources, and thus the caller can meaningfully compare the two
in order to detect and describe any out-of-band changes that occurred
since the previous run.
Until now we've not really cared much about the state snapshot produced
by the previous Terraform operation, except to use it as a jumping-off
point for our refresh step.
However, we'd like to be able to report to an end-user whenever Terraform
detects a change that occurred outside of Terraform, because that's often
helpful context for understanding why a plan contains changes that don't
seem to have corresponding changes in the configuration.
As part of reporting that we'll need to keep track of the state as it
was before we did any refreshing work, so we can then compare that against
the state after refreshing. To retain enough data to achieve that, the
existing Plan field State is now two fields: PrevRunState and PriorState.
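In outline, the relevant part of the model now looks roughly like this simplified excerpt (the import path is an assumption and differs between Terraform versions):

```go
package plans

import "github.com/hashicorp/terraform/states"

// Plan is shown here as a simplified excerpt; the real struct has
// several other fields (the planned changes, target addresses, backend
// settings, and so on).
type Plan struct {
	// PrevRunState is the state as it was at the end of the previous
	// Terraform run, before any refresh or upgrade steps in this run.
	PrevRunState *states.State

	// PriorState is the result of refreshing, which the planned
	// changes in this plan are relative to.
	PriorState *states.State
}
```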
This also includes a very shallow change in the core package to make it
populate something somewhat-reasonable into this field so that integration
tests can function reasonably. However, this shallow implementation isn't
really sufficient for real-world use of PrevRunState because we'll
actually need to update PrevRunState as part of planning in order to
incorporate the results of any provider-specific state upgrades to make
the PrevRunState objects compatible with the current provider schema, or
else our diffs won't be valid. This deeper awareness of PrevRunState in
Terraform Core will follow in a subsequent commit, prior to anything else
making use of Plan.PrevRunState.