Add implementations of CanChainFrom and NestedWithin for
MoveEndpointInModule.
CanChainFrom allows linking move statements through a shared address:
the prior destination address must equal the following source address.
If the destination and source addresses are of different types, they
must be covered by NestedWithin rather than CanChainFrom.
NestedWithin checks whether the destination contains the source
address; addresses of matching types are covered by CanChainFrom
instead.
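As a rough illustration only (not the real implementation, which
operates on MoveEndpointInModule values), here's a minimal Go sketch of
the two relationships using plain string addresses and a simplistic
prefix test for containment:

```go
package main

import (
	"fmt"
	"strings"
)

// move is a simplified stand-in for a "moved" statement.
type move struct {
	from, to string
}

// canChainFrom reports whether m can be chained after prev: the prior
// destination address must equal the following source address.
func (m move) canChainFrom(prev move) bool {
	return prev.to == m.from
}

// nestedWithin reports whether m's source address is contained inside
// prev's destination address, e.g. a resource inside a moved module.
func (m move) nestedWithin(prev move) bool {
	return strings.HasPrefix(m.from, prev.to+".")
}

func main() {
	a := move{from: "module.a", to: "module.b"}
	b := move{from: "module.b", to: "module.c"}
	c := move{from: "module.b.aws_instance.x", to: "module.b.aws_instance.y"}

	fmt.Println(b.canChainFrom(a)) // true: a's destination is b's source
	fmt.Println(c.nestedWithin(a)) // true: c's source sits inside module.b
}
```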
Extend the outputs JSON log message to support an `action` field (and
make the `type` and `value` fields optional). This allows us to emit a
useful output change summary as part of the plan, bringing the JSON log
output into parity with the text output.
While we do have access to the before/after values in the output
changes, attempting to wedge those into a structured log message is not
appropriate. That level of detail can be extracted from the JSON plan
output from `terraform show -json`.
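As a hypothetical sketch of the message shape (field names are
illustrative, not the actual internal type), the `action` field is
added while `type` and `value` become omittable:

```go
// outputMessage is a hypothetical stand-in for the outputs JSON log
// message: Action is the new field, and Type and Value are optional,
// omitted from the encoded JSON when empty.
type outputMessage struct {
	Action string      `json:"action,omitempty"`
	Type   string      `json:"type,omitempty"`
	Value  interface{} `json:"value,omitempty"`
}
```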
This is a first pass at implementing refactoring.ValidateMoves, covering
the main validation rules.
This is not yet complete. A couple of situations not yet covered are
represented by commented test cases in TestValidateMoves, although that
isn't necessarily comprehensive. We'll do a further pass of filling this
out with any other subtleties before we ship this feature.
As of this commit, refactoring.ValidateMoves doesn't actually do anything
yet (always returns nil) but the goal here is to wire in the set of all
declared instances so that refactoring.ValidateMoves will then have all
of the information it needs to encapsulate our validation rules.
The actual implementation of refactoring.ValidateMoves will follow in
subsequent commits.
In order to precisely implement the validation rules for "moved"
statements we need to be able to test whether particular instances were
declared in the configuration.
The instance expander is the source of record for which instances we
decided on while creating a plan, but its API is far more involved than
what our validation rules need, so this new AllInstances method returns
a wrapper object with a more straightforward, read-only API that
answers just the question of whether particular instances were already
registered in the expander.
This API covers all three of the kinds of objects that move statements can
refer to. It includes module calls and resources, even though they aren't
_themselves_ "instances" in the sense we usually mean, because the module
instance addresses they are contained within _are_ instances and so we
need to take their dynamic instance keys into account when answering these
queries.
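As a hedged sketch of what that wrapper's API might look like (the
method names here are illustrative, and the addrs types are Terraform's
own), each method answers only whether the given object was registered:

```go
// instanceSet is an illustrative read-only view over the expander.
type instanceSet interface {
	// HasModuleInstance reports whether a particular module instance
	// was registered in the expander.
	HasModuleInstance(addr addrs.ModuleInstance) bool
	// HasModuleCall reports whether the call is declared inside a
	// module instance that was itself registered.
	HasModuleCall(addr addrs.AbsModuleCall) bool
	// HasResourceInstance reports whether a particular resource
	// instance was registered.
	HasResourceInstance(addr addrs.AbsResourceInstance) bool
	// HasResource reports whether the resource is declared inside a
	// module instance that was itself registered.
	HasResource(addr addrs.AbsResource) bool
}
```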
All of our MoveDestination methods have the common problem of deciding
whether the receiver is even potentially in the scope of a particular
MoveEndpointInModule, which requires that the receiver belong to an
instance of the module where the move statement was found.
Previously we had this logic inline in all three cases, but now we'll
factor it out into a shared helper function.
At first it seemed like there ought to be more factoring possible for
the AbsResource vs. AbsResourceInstance implementations, since textually
they look very similar. In practice they only look similar because
those two types have a lot of method names in common; the Go compiler
sees them as completely distinct, and thus we must write the same logic
out twice. I did try some further refactoring to address that but it
made the resulting code significantly more complicated and, by my
judgement, harder to follow. Consequently I decided that a little
duplication was okay and warranted here because this logic is already
quite fiddly to read through and isn't likely to change significantly once
released (due to backward-compatibility promises).
Previously our MoveDestination methods only honored move statements whose
endpoints were module calls, module instances, or resources.
Now we'll additionally handle when the endpoints are individual resource
instances. This situation only applies to
AbsResourceInstance.MoveDestination because no other objects can be
contained inside of a resource instance.
This completes all of the MoveDestination cases for all supported move
statement types and moveable object types.
Previously our MoveDestination methods only honored move statements whose
endpoints were module calls or module instances.
Now we'll additionally handle when the endpoints are whole resource
addresses. This includes both renaming resource blocks and moving resource
blocks into or out of child modules.
This doesn't yet include endpoints that are specific resource _instances_,
which will follow in a subsequent commit. For the moment that situation
will always indicate a non-match.
This is a subset of the MoveDestination behavior for AbsResource and
AbsResourceInstance which deals with source and destination addresses that
refer to module calls or module instances.
They both work by delegating to ModuleInstance.MoveDestination and then
applying the same resource or resource instance address to the
newly-chosen module instance address, thus ensuring that when we move
a module we also move all of the resources inside that module in the same
way.
This doesn't yet include support for moving between specific resource or
resource instance addresses; that'll follow later. This commit should have
enough logic to support moving between module names and module instance
keys, including any module calls or resources nested within.
This method encapsulates the move-processing rules for applying move
statements to ModuleInstance addresses. It honors both module call moves
and module instance moves by trying to find a subsequence of the input
that matches the "from" endpoint and then, if found, replacing it with
the "to" endpoint while preserving the prefix and suffix around the match,
if any.
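The matching rule can be sketched in miniature with plain string steps
in place of real ModuleInstance steps (a simplification, not the actual
method):

```go
package main

import "fmt"

// moveDestination finds the first subsequence of input equal to from
// and replaces it with to, preserving the prefix and suffix around
// the match. It reports false when the statement doesn't apply.
func moveDestination(input, from, to []string) ([]string, bool) {
	for i := 0; i+len(from) <= len(input); i++ {
		if equal(input[i:i+len(from)], from) {
			out := make([]string, 0, len(input)-len(from)+len(to))
			out = append(out, input[:i]...)           // preserved prefix
			out = append(out, to...)                  // replacement
			out = append(out, input[i+len(from):]...) // preserved suffix
			return out, true
		}
	}
	return nil, false
}

func equal(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	got, ok := moveDestination(
		[]string{"module.a", "module.b", "module.c"},
		[]string{"module.b"},
		[]string{"module.x"},
	)
	fmt.Println(got, ok) // [module.a module.x module.c] true
}
```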
* configs/configschema: extend block.AttributeByPath to descend into Objects
This commit adds a recursive Object.AttributeByPath function which will step through Attributes with NestedTypes. If an Attribute without a NestedType is encountered while there is still more to the path, it will return nil.
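A simplified sketch of that descent logic (stand-in types, not the real
configschema declarations):

```go
// attribute is a stand-in for configschema.Attribute; nested is nil
// when the attribute has no NestedType to descend into.
type attribute struct {
	nested map[string]*attribute
}

// attributeByPath follows each step of the path through nested
// attributes, returning nil as soon as a step has no nested type
// while more of the path remains.
func attributeByPath(attrs map[string]*attribute, path []string) *attribute {
	if len(path) == 0 {
		return nil
	}
	attr, ok := attrs[path[0]]
	if !ok {
		return nil
	}
	if len(path) == 1 {
		return attr // final step: this is the requested attribute
	}
	if attr.nested == nil {
		return nil // path continues but there's nothing to descend into
	}
	return attributeByPath(attr.nested, path[1:])
}
```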
The etcdv3 client has a default request send limit of 2.0 MiB. This
change exposes a configuration option to increase that limit, enabling
larger state to be stored with the etcdv3 backend.
This also requires that the corresponding --max-request-bytes flag be
increased on the server side. The default there is 1.5 MiB.
Fixes https://github.com/hashicorp/terraform/issues/25745
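For reference, a hedged sketch of how the option might be wired through
to the etcd client (the surrounding function and parameter names are
hypothetical; clientv3.Config.MaxCallSendMsgSize is the real knob):

```go
import clientv3 "go.etcd.io/etcd/clientv3"

// newEtcdClient passes the backend's configured limit through to the
// client, which otherwise defaults to a 2.0 MiB send limit.
func newEtcdClient(endpoints []string, maxRequestBytes int) (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		Endpoints:          endpoints,
		MaxCallSendMsgSize: maxRequestBytes,
	})
}
```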
etcd rewrote its import path from github.com/coreos/etcd to
go.etcd.io/etcd. This commit changes the import paths accordingly,
which also updates the code version.
This lets us remove the github.com/ugorji/go/codec dependency, which
was pinned to a fairly old version. The net change is a loss of 30,000
lines of code in the vendor directory. (I first noticed this problem
because the outdated go/codec dependency was causing a dependency
failure when I tried to put Terraform and another project in the same
vendor directory.)
Note the version shows up funkily in go.mod, but I verified
visually it's the same commit as the "release-3.4" tag in
github.com/coreos/etcd. The etcd team plans to fix the release version
tagging in v3.5, which should be released soon.
The current usage of internal remote state backends requires that
`StateMgr` be able to return an instance of `statemgr.Full` even if the
state is currently locked.
Up until now, marks were not considered by `ignore_changes`, which
meant that changes to sensitivity within a configuration could not be
ignored, even though they are planned as changes.
Rather than separating the marks and tracking their paths, we can easily
update the processIgnoreChanges routine to handle the marked values
directly. Moving the `processIgnoreChanges` call also cleans up some of
the variable naming, making it more consistent through the body of the
function.
* configs/configschema: fix missing "computed" attributes from NestedObject's ImpliedType
listOptionalAttrsFromObject was not including "computed" attributes in the list of optional object attributes. This is now fixed. I've also added some tests and fixed some panics and other bad behavior when given bad input. One notable change is in ImpliedType, which was panicking on an invalid nesting mode. The comment expressly states that it will return a result even when the schema is inconsistent, so I removed the panic and instead return an empty object.
Update the version constraints for what providers will be downloaded
from the registry, allowing protocol 6 providers to be downloaded from
the registry.
This test would previously fail randomly due to the use of multiple
resource instances. Instance keys are iterated over as a map for
presentation, which has intentionally inconsistent ordering.
To fix this, I changed the test to use different resource addresses for
the three drift cases. I also extracted them to a separate test, and
tweaked the test helper functions to reduce the number of fatal exit
points, to make failed test debugging easier.
This is a whole lot of nothing right now, just stubbing out some control
flow that ultimately just leads to TODOs that cause it to do nothing at
all.
My intent here is to get this cross-cutting skeleton in place and thus
make it easier for us to collaborate on adding the meat to it, so that
it's more likely we can work on different parts separately and still get
a result that tessellates.
We previously built out addrs.UnifyMoveEndpoints with a different
implementation strategy in mind, but that design turns out to not be
viable because it forces us to move to AbsMoveable addresses too soon,
before we've done the analysis required to identify chained and nested
moves.
Instead, UnifyMoveEndpoints will return a new type MoveEndpointInModule
which conceptually represents a matching pattern which either matches or
doesn't match a particular AbsMoveable. It does this by just binding the
unified relative address from the MoveEndpoint to the module where it
was declared, and thus allows us to distinguish between the part of the
module path which applies to any instances of the given modules vs. the
user-specified part which must identify particular module instances.
Since these address types are not directly comparable themselves, we use
an unexported named type around the string representation, whereby the
special type can avoid any ambiguity between string representations of
different types and thus each type only needs to worry about possible
ambiguity of its _own_ string representation.
Many times now we've seen situations where we need to use addresses
as map keys, but not all of our address types are comparable and thus
we tend to end up using string representations as keys instead.
That's problematic because conversion to string uses type information
and some of the address types have string representations that are
ambiguous with one another.
UniqueKey therefore represents an opaque key that is unique for each
functionally-distinct address across all types that implement
UniqueKeyer.
For this initial commit I've implemented UniqueKeyer only for the
Referenceable family of types. These are an easy case because they
were all already comparable (intentionally) anyway. Later commits
can implement UniqueKeyer for other types that are not naturally
comparable, such as any which include a ModuleInstance.
This also includes a new type addrs.Set which wraps a map as a set
of addresses, using the unique keys to ensure that there can be only
one element for each distinct address.
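A hedged sketch of the pattern (the real declarations live in the addrs
package and may differ in detail):

```go
// UniqueKey is an opaque, comparable key; the unexported method means
// only address types in the same package can implement it.
type UniqueKey interface {
	uniqueKeySigil()
}

// UniqueKeyer is implemented by address types that can produce a key.
type UniqueKeyer interface {
	UniqueKey() UniqueKey
}

// Set wraps a map keyed by unique keys, so each functionally-distinct
// address can appear at most once.
type Set map[UniqueKey]UniqueKeyer

func (s Set) Add(addr UniqueKeyer) { s[addr.UniqueKey()] = addr }

func (s Set) Has(addr UniqueKeyer) bool {
	_, ok := s[addr.UniqueKey()]
	return ok
}
```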
Some users want to use Consul namespaces with the Consul backend, but
the version of the Consul API client we use is too old and doesn't
support them. In preparation for this change, this patch just updates
the client and replaces testutil.NewTestServerConfig() with
testutil.NewTestServerConfigT() in the tests.
* states: add MoveAbsResource and MoveAbsResourceInstance state functions and corresponding syncState wrapper functions.
* states: add MoveModuleInstance and MaybeMoveModuleInstance
* addrs: add a new function, ModuleInstance.IsDeclaredByCall, which returns true if the receiver is an instance of the given AbsModuleCall, as sketched below.
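A self-contained sketch of that last check, using simplified stand-ins
for the addrs types:

```go
// Simplified stand-ins, just enough to show the rule.
type step struct{ Name, Key string }
type moduleInstance []step
type absModuleCall struct {
	module moduleInstance
	name   string
}

// isDeclaredByCall strips the receiver's last step, compares the rest
// against the call's module, then matches the final step's call name;
// the instance key of the final step is deliberately ignored.
func (m moduleInstance) isDeclaredByCall(c absModuleCall) bool {
	if len(m) != len(c.module)+1 {
		return false
	}
	for i, s := range c.module {
		if m[i] != s {
			return false
		}
	}
	return m[len(m)-1].Name == c.name
}
```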
The logic behind this code took me a while to understand, so I wrote
down what I understand to be the reasoning behind how it works. The
trickiest part is rendering changing objects as updates. I think the
other pieces are fairly common to LCS sequence diff rendering, so I
didn't explain those in detail.
An earlier commit added logic to decode "moved" blocks and do static
validation of them. Here we now include that result also in modules
produced from those files, which we can then use in Terraform Core to
actually implement the moves.
This also places the feature behind an active experiment keyword called
config_driven_move. For now activating this doesn't actually achieve
anything except let you include moved blocks that Terraform will summarily
ignore, but we'll expand the scope of this in later commits to eventually
reach the point where it's really usable.
A common source of churn when we're running experiments is that a module
that would otherwise be valid ends up generating a warning merely because
the experiment is active. That means we end up needing to shuffle the
test files around if the feature ultimately graduates to stable.
To reduce that churn in simple cases, we'll make an exception to disregard
the "Experiment is active" warning for any experiment that a module has
intentionally opted into, because those warnings are always expected and
not a cause for concern.
It's still possible to test those warnings explicitly using the
testdata/warning-files directory, if needed.
Although addrs.Target can in principle capture the information we need to
represent move endpoints, it's semantically confusing because
addrs.Targetable uses addrs.Abs... types which are typically for absolute
addresses, but we were using them for relative addresses here.
We now have specialized address types for representing moves and probably
other things which have similar requirements later on. These types
largely communicate the same information in the end, but aim to do so in
a way that's explicit about which addresses are relative and which are
absolute, to make it less likely that we'd inadvertently misuse these
addresses.
These three types represent the three different address representations we
need to represent different stages of analysis for "moved" blocks in the
configuration.
The goal here is to encapsulate all of the static address wrangling inside
these types so that users of these types elsewhere would have to work
pretty hard to use them incorrectly.
In particular, the MoveableEndpoint type intentionally fully encapsulates
the weird relative addresses we use in configuration so that code
elsewhere in Terraform can never end up holding an address of a type that
suggests absolute when it's actually relative. That situation only occurs
in the internals of MoveableEndpoint where we use not-really-absolute
AbsMoveable address types to represent the not-yet-resolved relative
addresses.
This only takes care of the static address wrangling. There's lots of
other rules for what makes a "moved" block valid which will need to be
checked elsewhere because they require more context than just the content
of the address itself.
Our documentation for ModuleCall originally asserted that we didn't need
AbsModuleCall because ModuleInstance captured the same information, but
when we added count and for_each for modules we introduced
ModuleCallInstance to represent a reference to an instance of a local
module call, and now _that_ is the type whose absolute equivalent is
ModuleInstance.
We previously had no absolute representation of the call itself, without
any particular instance. That's what AbsModuleCall now represents,
allowing us to be explicit about when we're talking about the module block
vs. instances it declares, which is the same distinction represented by
AbsResource vs. AbsResourceInstance.
Just like with AbsResource and AbsResourceInstance though, there is
syntactic ambiguity between a no-key call instance and a whole module call,
and so some codepaths might accept both to start and then use other
context to dynamically choose a particular interpretation, in which case
this distinction becomes meaningful in representing the result of that
decision.
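To show how the pieces compose, here are simplified stand-ins (not the
real addrs declarations):

```go
type ModuleCall struct{ Name string }

// ModuleCallInstance is one instance of a call, still relative to the
// calling module; Key is empty when there's no count/for_each.
type ModuleCallInstance struct {
	Call ModuleCall
	Key  string
}

// ModuleInstance is the absolute path of call instances.
type ModuleInstance []ModuleCallInstance

// AbsModuleCall qualifies a call with the instance of the module that
// contains it, without selecting any particular instance of the call.
type AbsModuleCall struct {
	Module ModuleInstance
	Call   ModuleCall
}
```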
The previous name didn't fit with the naming scheme for addrs types:
The "Abs" prefix typically means that it's an addrs.ModuleInstance
combined with whatever type name appears after "Abs", but this is instead
a ModuleCallOutput combined with an InstanceKey, albeit structured the
other way around for convenience, and so the expected name for this would
be the suffix "Instance".
We don't have an "Abs" type corresponding with this one because it would
represent no additional information than AbsOutputValue.
* command/jsonstate: remove redundant remarking of resource instance
ResourceInstanceObjectSrc.Decode already handles marking values with any marks stored in ri.Current.AttrSensitivePaths, so re-applying those marks is not necessary.
We've gotten reports of panics coming from this line of code, though I have yet to reproduce the panic in a test.
* Implement test to reproduce panic on #29042
Co-authored-by: David Alger <davidmalger@gmail.com>
Because our snippet generator is trying to select whole lines to include
in the snippet, it has some edge cases for odd situations where the
relevant source range starts or ends directly at a newline, which were
previously causing this logic to return out-of-bounds offsets into the
code snippet string.
Although arguably it'd be better for the original diagnostics to report
more reasonable source ranges, it's better for us to report a
slightly-inaccurate snippet than to crash altogether, and so we'll extend
our existing range checks to check both bounds of the string and thus
avoid downstreams having to deal with out-of-bounds indices.
For completeness here I also added some similar logic to the
human-oriented diagnostic formatter, which consumes the result of the
JSON diagnostic builder. That's not really needed with the additional
checks in the JSON diagnostic builder, but it's nice to reinforce that
this code can't panic (in this way, at least) even if its input isn't
valid.
* terraform: use hcl.MergeBodies instead of configs.MergeBodies for provider configuration
Previously, Terraform would return an error when the user supplied provider configuration via interactive input if the configuration provided on the command line was missing any required attributes, even if those attributes were already set in config.
That error came from configs.MergeBody, which was designed for overriding valid configuration. It expects that the first ("base") body has all required attributes. However in the case of interactive input for provider configuration, it is perfectly valid if either or both bodies are missing required attributes, as long as the final body has all required attributes. hcl.MergeBodies works very similarly to configs.MergeBodies, with a key difference being that it only checks that all required attributes are present after the two bodies are merged.
I've updated the existing test to use interactive input vars and a schema with all required attributes. This test failed before switching from configs.MergeBodies to hcl.MergeBodies.
* add a command package test that shows that we can still have providers with dynamic configuration + required + interactive input merging
This test failed when buildProviderConfig still used configs.MergeBodies instead of hcl.MergeBodies
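The merge itself is small; a hedged sketch (the function name is
hypothetical, hcl.MergeBodies is the real API):

```go
import hcl "github.com/hashicorp/hcl/v2"

// mergeProviderConfig merges the config body with the body built from
// interactive input. Neither body alone has to be complete; required
// attributes are only enforced when the merged body is eventually
// decoded against the provider's schema.
func mergeProviderConfig(configBody, inputBody hcl.Body) hcl.Body {
	return hcl.MergeBodies([]hcl.Body{configBody, inputBody})
}
```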
This PR adds decoding for the upcoming "moved" blocks in configuration. This code is gated behind an experiment called EverythingIsAPlan, but the experiment is not registered as an active experiment, so it will never run (there is a test in place which will fail if the experiment is ever registered).
This also adds a new function to the Targetable interface, AddrType, to simplify comparing two addrs.Targetable.
There is some validation missing still: this does not (yet) descend into resources to see if the actual resource types are the same (I've put this off in part because we will eventually need the provider schema to verify aliased resources, so I suspect this validation will have to happen later on).
Previously, if any resources were found to have drifted, the JSON plan
output would include a drift entry for every resource in state. This
commit aligns the JSON plan output with the CLI UI, and only includes
those resources where the old value does not equal the new value, i.e.
where drift has been detected.
Also fixes a bug where the "address" field was missing from the drift
output, and adds some test coverage.
* command: new command, terraform add, generates resource templates
terraform add ADDRESS generates a resource configuration template with all required (and optionally optional) attributes set to null. This can optionally also pre-populate non-sensitive attributes with values from an existing resource of the same type in state (sensitive values will be populated with null and a comment indicating sensitivity).
* website: terraform add documentation
* Quoting filesystem path in scp command argument
* Adding proper shell quoting for scp commands
* Running go fmt
* Using a library for quoting shell commands
* Don't export quoteShell function
* jsonplan and jsonstate: include sensitive_values in state representations
A sensitive_values field has been added to the resource in state and planned values: a map of all sensitive attributes, with the values set to true.
It wasn't entirely clear to me if the values in state would suffice, or if we also need to consult the schema - I believe that this is sufficient for state files written since v0.15, and if that's incorrect or insufficient, I'll add in the provider schema check as well.
I also updated the documentation, and, since we've considered this before, bumped the FormatVersions for both jsonstate and jsonplan.
An unknown block represents a dynamic configuration block with an
unknown for_each value. We were not catching the case where a provider
modified this value unexpectedly, which would crash with NestingList
blocks where the config value has no length for comparison.
Historically, we've used TFC's default run messages as a sort of dumping
ground for metadata about the run. We've recently decided to mostly stop
doing that, in favor of:
- Only specifying the run's source in the default message.
- Letting TFC itself handle the default messages.
Today, the remote backend explicitly sets a run message, overriding
any default that TFC might set. This commit removes that explicit message
so we can allow TFC to sort it out.
This shouldn't have any bad effect on TFE out in the wild, because it
has known how to set a default message for remote backend runs since
late 2018.
* tools: remove terraform-bundle.
terraform-bundle is no longer supported in the main branch of terraform. Users can build terraform-bundle from terraform tagged v0.15 and older.
* add a README pointing users to the v0.15 branch
Previously we had a separation between ModuleSourceRemote and
ModulePackage as a way to represent within the type system that there's an
important difference between a module source address and a package address,
because module packages often contain multiple modules and so a
ModuleSourceRemote combines a ModulePackage with a subdirectory to
represent one specific module.
This commit applies that same strategy to ModuleSourceRegistry, creating
a new type ModuleRegistryPackage to represent the different sort of
package that we use for registry modules. Again, the main goal here is
to try to reflect the conceptual modelling more directly in the type
system so that we can more easily verify that uses of these different
address types are correct.
To make use of that, I've also lightly reworked initwd's module installer
to use addrs.ModuleRegistryPackage directly, instead of a string
representation thereof. This was in response to some earlier commits where
I found myself accidentally mixing up package addresses and source
addresses in the installRegistryModule method; with this new organization
those bugs would've been caught at compile time, rather than only at
unit and integration testing time.
While in the area anyway, I also took this opportunity to fix some
historical confusing names of fields in initwd.ModuleInstaller, to be
clearer that they are only for registry packages and not for all module
source address types.
We have some tests in this package that install real modules from the real
registry at registry.terraform.io. Those tests were written at an earlier
time when the registry's behavior was to return the URL of a .tar.gz
archive generated automatically by GitHub, which included an extra level
of subdirectory that would then be reflected in the paths to the local
copies of these modules.
GitHub started rate limiting those tar archives in a way that Terraform's
module installer couldn't authenticate to, and so the registry switched
to returning direct git repository URLs instead, which don't have that
extra subdirectory and so the local paths on disk now end up being a
little different, because the actual module directories are at a different
subdirectory of the package.
Now that we (in the previous commit) refactored how we deal with module
sources to do the parsing at config loading time rather than at module
installation time, we can expose a method to centralize the determination
for whether a particular module call (and its resulting Config object)
enters a new external package.
We don't use this for anything yet, but in later commits we will use this
for some cross-module features that are available only for modules
belonging to the same package, because we assume that modules grouped
together in a package can change together and thus it's okay to permit a
little more coupling of internal details in that case, which would not
be appropriate between modules that are versioned separately.
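The determination itself reduces to roughly this rule, sketched here in
self-contained form (the real method works with parsed source address
types rather than raw strings):

```go
import "strings"

// entersNewPackage reports whether a module call leaves the caller's
// package: only local ("./" or "../") source addresses stay within
// the same package.
func entersNewPackage(sourceAddr string) bool {
	return !strings.HasPrefix(sourceAddr, "./") &&
		!strings.HasPrefix(sourceAddr, "../")
}
```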
It's been a long while since we gave close attention to the codepaths for
module source address parsing and external module package installation.
Due to their age, these codepaths often diverged from our modern practices
such as representing address types in the addrs package, and encapsulating
package installation details only in a particular location.
In particular, this refactor makes source address parsing a separate step
from module installation, which therefore makes the result of that parsing
available to other Terraform subsystems which work with the configuration
representation objects.
This also presented the opportunity to better encapsulate our use of
go-getter into a new package "getmodules" (echoing "getproviders"), which
is intended to be the only part of Terraform that directly interacts with
go-getter.
This is largely just a refactor of the existing functionality into a new
code organization, but there is one notable change in behavior here: the
source address parsing now happens during configuration loading rather
than module installation, which may cause errors about invalid addresses
to be returned in different situations than before. That counts as
backward compatible because we only promise to remain compatible with
configurations that are _valid_, which means that they can be initialized,
planned, and applied without any errors. This doesn't introduce any new
error cases, and instead just makes a pre-existing error case be detected
earlier.
Our module registry client is still using its own special module address
type from registry/regsrc for now, with a small shim from the new
addrs.ModuleSourceRegistry type. Hopefully in a later commit we'll also
rework the registry client to work with the new address type, but this
commit is already big enough as it is.
We've previously had the syntax and representation of module source
addresses pretty sprawled around the codebase and intermingled with other
systems such as the module installer.
I've created a factored-out implementation here with the intention of
enabling some later refactoring to centralize the address parsing as part
of configuration decoding, and thus in turn allow the parsing result to
be seen by other parts of Terraform that interact with configuration
objects, so that they can more robustly handle differences between local
and remote modules, and can present module addresses consistently in the
UI.
This new package aims to encapsulate all of our interactions with
go-getter to fetch remote module packages, to ensure that the rest of
Terraform will only use the small subset of go-getter functionality that
our modern module installer uses.
In older versions of Terraform, go-getter was the entire implementation
of module installation, but along the way we found that several aspects of
its design are poor fit for Terraform's needs, and so now we're using it
as just an implementation detail of Terraform's handling of remote module
packages only, hiding it behind this wrapper API which exposes only
the services that our module installer needs.
This new package isn't actually used yet, but in a later commit we will
change all of the other callers to go-getter to only work indirectly
through this package, so that this will be the only package that actually
imports the go-getter packages.
As the comment notes, this hostname is the default for provider source
addresses. We'll shortly be adding some address types to represent module
source addresses too, and so we'll also have DefaultModuleRegistryHost
for that situation.
(They'll actually both contain the same hostname, but that's a
coincidence rather than a requirement.)
When performing state migration to a remote backend target, Terraform
may fail due to mismatched remote and local Terraform versions. Here we
add the `-ignore-remote-version` flag to allow users to ignore this
version check when necessary.
When migrating multiple local workspaces to a remote backend target
using the `prefix` argument, we need to perform the version check
against all existing workspaces returned by the `Workspaces` method.
Failing to do so will result in a version check error.
Our module installer has a somewhat-informal idea of a "module package",
which is some external thing we can go fetch in order to add one or more
modules to the current configuration. Our documentation doesn't talk much
about it because most users seem to have found the distinction between
external and local modules pretty intuitive without us throwing a lot of
funny terminology at them, but there are some situations where the
distinction between a module and a module package are material to the
end-user.
One such situation is when using an absolute rather than relative
filesystem path: we treat that as an external package in order to make the
resulting working directory theoretically "portable" (although users can
do various other things to defeat that), and so Terraform will copy the
directory into .terraform/modules in the same way as it would download and
extract a remote archive package or clone a git repository.
A consequence of this, though, is that any relative paths called from
inside a module loaded from an absolute path will fail if they try to
traverse upward into the parent directory, because at runtime we're
actually running from a copy of the directory that's been taken out of
its original context.
A similar sort of situation can occur in a truly remote module package if
the author accidentally writes a "../" source path that traverses up out
of the package root, and so this commit introduces a special error message
for both situations that tries to be a bit clearer about there being a
package boundary and use that to explain why installation failed.
We would ideally have made escaping local references like that illegal in
the first place, but sadly we did not and so when we rebuilt the module
installer for Terraform v0.12 we ended up keeping the previous behavior of
just trying it and letting it succeed if there happened to somehow be a
matching directory at the given path, in order to remain compatible with
situations that had worked by coincidence rather than intention. For that
same reason, I've implemented this as a replacement error message we will
return only if local module installation was going to fail anyway, and
thus it only modifies the error message for some existing error situations
rather than introducing new error situations.
This also includes some light updates to the documentation to say a little
more about how Terraform treats absolute paths, though aiming not to get
too much into the weeds about module packages since it's something that
most users can get away with never knowing.
When returning from the context method, a deferred function call
checked for error diagnostics in the `diags` variable, and unlocked the
state if any existed. This means that we need to be extra careful to mutate that
variable when returning errors, rather than returning a different set of
diags in the same position.
Previously this would result in an invalid plan file causing a lock to
become stuck.
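A self-contained sketch of the pattern, using a plain error slice in
place of tfdiags.Diagnostics (unlock and stepOne are hypothetical
stand-ins):

```go
func contextSketch(unlock func()) (diags []error) {
	defer func() {
		// The deferred check reads the named return variable, so every
		// error path must assign into diags for the unlock to happen.
		if len(diags) > 0 {
			unlock()
		}
	}()

	if err := stepOne(); err != nil {
		diags = append(diags, err) // correct: the deferred unlock fires
		return diags
	}
	// By contrast, `return otherDiags` from a different variable would
	// leave the state locked, because the deferred check never sees it.
	return nil
}

func stepOne() error { return nil }
```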
Most legacy provider resources do not implement any import functionality
other than returning an empty object with the given ID, relying on core
to later read that resource and obtain the complete state. Because of
this, we need to check the response from ReadResource for a null value,
and use that as an indication the import id was invalid.
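A hedged sketch of that guard (names hypothetical, cty.Value.IsNull is
the real check):

```go
import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkImportedObject fails the import when the refreshed object came
// back null, meaning the given ID matched nothing.
func checkImportedObject(id string, newState cty.Value) error {
	if newState.IsNull() {
		return fmt.Errorf("import of %q produced no matching resource; the ID may be invalid", id)
	}
	return nil
}
```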
This was dead code, and there is no clear way to retrieve this
information, as we currently only derive the drift information as part
of the rendering process.
The schemas for the provider and the resources didn't match, so the changes
were not going to be rendered at all.
Add a test which contains a deposed resource.
* getproviders ParsePlatform: add check for invalid platform strings with too many parts
The existing logic would not catch things like a platform string containing multiple underscores. I've added an explicit check for exactly 2 parts and some basic tests to prove it.
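A sketch of the stricter parse (simplified relative to the real
getproviders.ParsePlatform):

```go
import (
	"fmt"
	"strings"
)

// parsePlatform requires exactly two underscore-separated parts, so
// strings like "linux_amd64_v2" or "linux" are rejected instead of
// being silently misread.
func parsePlatform(str string) (os, arch string, err error) {
	parts := strings.Split(str, "_")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("%q must be an os_arch pair, like %q", str, "linux_amd64")
	}
	return parts[0], parts[1], nil
}
```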
* command/providers-lock: add tests
This commit adds some simple tests for the providers lock command. While adding this test I noticed that there was a mis-copied error message, so I replaced that with a more specific message. I also added .terraform.lock.hcl to our gitignore for hopefully obvious reasons.
getproviders.ParsePlatform: use parts in place of slice range, since it's available
* command: Providers mirror tests
The providers mirror command is already well tested in e2e tests, so this includes only the most absolutely basic test case.
Do not convert provisioner diagnostics to errors so that users can get
context from provisioner failures.
Return diagnostics from the builtin provisioners that can be annotated
with configuration context and instance addresses.
When an attribute value changes in sensitivity, we previously rendered
this in the diff with a `~` update action and a note about the
consequence of the sensitivity change. Since we also suppress the
attribute value, this made it impossible to know if the underlying value
was changing, too, which has significant consequences on the meaning of
the plan.
This commit adds an equality check of the old/new underlying values. If
these are unchanged, we add a note to the sensitivity warning to clarify
that only sensitivity is changing.
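The equality check can be sketched with cty directly (a sketch, not the
renderer's actual helper):

```go
import "github.com/zclconf/go-cty/cty"

// onlySensitivityChanged strips the marks from both sides and then
// compares the underlying values; true means the value itself is
// unchanged and only the sensitivity differs.
func onlySensitivityChanged(before, after cty.Value) bool {
	beforeV, _ := before.UnmarkDeep()
	afterV, _ := after.UnmarkDeep()
	return beforeV.RawEquals(afterV)
}
```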
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
Once a plugin process is started, go-plugin will redirect the stdout and
stderr stream through a grpc service and provide those streams to the
client. This is rarely used, as it is prone to failing with races
because those same file descriptors are needed for the initial handshake
and logging setup, but data may be accidentally sent to these
nonetheless.
The usual culprits are stray `fmt.Print` usage where logging was
intended, or the configuration of a logger after the os.Stderr file
descriptor was replaced by go-plugin. These situations are very hard for
provider developers to debug since the data is discarded entirely.
While there may be improvements to be made in the go-plugin package to
configure this behavior, in the meantime we can add a simple monitoring
io.Writer to the streams which will surface the data as warnings in
the logs instead of writing it to `io.Discard`.
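A hedged sketch of such a monitoring writer (names hypothetical):

```go
import "log"

// pluginStreamMonitor surfaces anything the plugin writes to the
// captured stdout/stderr streams as a warning, instead of discarding.
type pluginStreamMonitor struct {
	pluginName string
}

func (w pluginStreamMonitor) Write(p []byte) (int, error) {
	log.Printf("[WARN] unexpected data from plugin %s: %q", w.pluginName, p)
	return len(p), nil // report the data as consumed
}
```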