The import command wasn't loading the full plugin path for discovery.
Run a basic plugin init sequence, and verify we can find a plugin (even
though the plugin is invalid and will fail).
The import command wasn't loading the plugin path at all, relying on the
local directory for binaries.
Load the plugin dir into Meta, and pass in ForceLocal for consistency.
The Backend returned was going to be a Local anyway, so the added check
wasn't ensuring anything.
The "confirm" method was directly checking the meta struct's input field,
but that only represents the -input command line flag, and doesn't
respect the TF_INPUT environment variable.
By calling the Input method instead, we check both.
This fixes #15338.
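As a rough illustration of the difference, here is a minimal sketch (with a stand-in Meta type and assumed field names, not Terraform's actual implementation) of an Input method that honours both sources:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Meta is a stand-in for the command Meta struct; only the field we need here.
type Meta struct {
	input bool // value of the -input command line flag
}

// Input reports whether interactive input is allowed, honouring both the
// -input flag and the TF_INPUT environment variable (sketch only).
func (m *Meta) Input() bool {
	if !m.input {
		return false
	}
	if v := os.Getenv("TF_INPUT"); v != "" {
		if enabled, err := strconv.ParseBool(v); err == nil && !enabled {
			return false
		}
	}
	return true
}

func main() {
	m := &Meta{input: true}
	os.Setenv("TF_INPUT", "0")
	fmt.Println(m.Input()) // false: TF_INPUT disables input even though -input is true
}
```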
Added a new test that ensures that pre/post-diff hooks are not called
when EvalDiff is run with Stub set, tested through a full refresh run.
This helps test the expected behaviour of EvalDiff itself, versus the
end result of the diff being counted in a plan, which is what the
TestLocal_planScaleOutNoDupeCount test in backend/local checks.
Rather than overloading InstanceDiff with a "Stub" attribute that would be
largely meaningless, we just skip the pre/post-diff hooks altogether. The
expectation is that we will eventually not need to "stub" a diff for
scale-out, stateless nodes on refresh at all; once that is the case there
will be no diff behaviour at this stage, so we should not assume that hooks
run here anyway.
As part of this, the CountHook test that now fails has been removed, since
CountHook is out of scope of the new behaviour.
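The shape of the change, as a self-contained sketch with stand-in types (the real EvalDiff, hook interface, and field names in Terraform differ):

```go
package main

import "fmt"

// Minimal stand-ins to illustrate the idea; not Terraform's real types.
type Hook interface {
	PreDiff(resource string)
	PostDiff(resource string)
}

type printHook struct{}

func (printHook) PreDiff(r string)  { fmt.Println("PreDiff:", r) }
func (printHook) PostDiff(r string) { fmt.Println("PostDiff:", r) }

// evalDiff mimics the changed behaviour: when stub is true, the diff is
// still produced, but the pre/post-diff hooks are never invoked.
func evalDiff(h Hook, resource string, stub bool) string {
	if !stub {
		h.PreDiff(resource)
	}
	diff := "diff for " + resource // placeholder for the real diff computation
	if !stub {
		h.PostDiff(resource)
	}
	return diff
}

func main() {
	_ = evalDiff(printHook{}, "aws_instance.foo", false) // hooks fire
	_ = evalDiff(printHook{}, "aws_instance.foo", true)  // stub: hooks skipped
}
```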
We are changing the behaviour of the "stub" diff operation so that the
pre/post-diff hooks are skipped on eval. This makes the existing test against
CountHook meaningless (it would simply fail), so we need a different test
here that checks the behaviour at a more general level.
This should make it clearer what the EvalTree scale-out test is doing:
ensuring that we get the correct eval sequence for a node with no state
through EvalTree.
We use the .# key primarily as a hint that we have a list; its value
describes how many items the writer thinks were in the list.
Since this information is redundant with the _actual_ data, it is potentially
useful as a form of corrupted-data detection, but this function isn't
equipped to report such a problem (it has no error return value, and isn't
in a good place for UI feedback anyway). Instead we largely ignore this
value and just go by the number of items we actually encounter.
The result of this is that when the counts mismatch we will go by the
number of items actually holding the prefix, rather than panicking as
we would've before.
This fixes the crashes in #15300, #15135 and #15334, though it does not
address any root-cause for the count to be incorrect in the first place,
so there may be something to fix here somewhere else.
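A minimal sketch of the counting approach, using an illustrative helper over a plain flatmap (not the actual flatmap expansion code):

```go
package main

import (
	"fmt"
	"strings"
)

// countByPrefix returns how many distinct list items actually exist under
// the given prefix in a flatmap, ignoring whatever the ".#" count claims.
func countByPrefix(m map[string]string, prefix string) int {
	seen := make(map[string]struct{})
	for k := range m {
		if !strings.HasPrefix(k, prefix+".") {
			continue
		}
		rest := strings.TrimPrefix(k, prefix+".")
		idx := rest
		if dot := strings.Index(rest, "."); dot >= 0 {
			idx = rest[:dot]
		}
		if idx == "#" || idx == "%" {
			continue // skip the count keys themselves
		}
		seen[idx] = struct{}{}
	}
	return len(seen)
}

func main() {
	m := map[string]string{
		"tags.#": "3", // writer claims three items...
		"tags.0": "a",
		"tags.1": "b", // ...but only two are actually present
	}
	fmt.Println(countByPrefix(m, "tags")) // 2: go by the real data
}
```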
When using a `state` command, if the `-state` flag is provided we do not
want to modify the Backend state. In this case we should always create a
local state instance.
The backup flag was also being ignored; some tests relied on that behaviour
and have been fixed.
If we provide a -state flag to a state command, we do not want Terraform
to modify the backend state. This test fails because the state specified
in the backend doesn't exist.
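To illustrate the rule with stand-in types (hypothetical names, not the real command or state packages): an explicit -state path always yields a purely local state, with the -backup path honoured, and the configured backend is never consulted.

```go
package main

import "fmt"

// Stand-in types for illustration only; not Terraform's real state packages.
type stateSource interface{ Describe() string }

type localState struct{ path, backupPath string }

func (l localState) Describe() string {
	return fmt.Sprintf("local state at %s (backup %s)", l.path, l.backupPath)
}

type backendState struct{ name string }

func (b backendState) Describe() string { return "backend-managed state: " + b.name }

// stateForCommand sketches the decision: an explicit -state path means a
// purely local state, regardless of any configured backend.
func stateForCommand(statePath, backupPath, backendName string) stateSource {
	if statePath != "" {
		if backupPath == "" {
			backupPath = statePath + ".backup"
		}
		return localState{path: statePath, backupPath: backupPath}
	}
	return backendState{name: backendName}
}

func main() {
	fmt.Println(stateForCommand("custom.tfstate", "", "s3").Describe())
	fmt.Println(stateForCommand("", "", "s3").Describe())
}
```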
The s3.Backend was using its own code for DeleteState, but the DynamoDB
entries are only handled through the RemoteClient. Have DeleteState use
a RemoteClient for the delete.
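The intent, sketched with stand-in types (the real s3 backend and RemoteClient differ): DeleteState constructs the same client used for other state operations and lets it perform the delete, so the DynamoDB bookkeeping stays in one place.

```go
package main

import "fmt"

// remoteClient is a stand-in for the backend's RemoteClient; in the real s3
// backend this is the type that also manages the DynamoDB entries.
type remoteClient struct{ bucket, key string }

func (c *remoteClient) Delete() error {
	fmt.Printf("deleting s3://%s/%s (and associated DynamoDB records)\n", c.bucket, c.key)
	return nil
}

// backend is a stand-in for s3.Backend.
type backend struct{ bucket string }

// DeleteState builds a RemoteClient and delegates the delete to it, rather
// than duplicating the delete logic in the backend itself (sketch only).
func (b *backend) DeleteState(name string) error {
	client := &remoteClient{bucket: b.bucket, key: name + "/terraform.tfstate"} // illustrative key layout
	return client.Delete()
}

func main() {
	b := &backend{bucket: "my-tf-states"}
	if err := b.DeleteState("staging"); err != nil {
		fmt.Println("error:", err)
	}
}
```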
Error loading Terraform: Error downloading modules: error downloading 'ssh://git@bitbucket.org/acme/foo.git?bar': /usr/bin/git exited with 128: Cloning into '.terraform/modules/yadayada'...
invalid command syntax.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Remove "checksum" from the error, and only indicate that the plugin has
changed.
Always show the requested version, even if it is "any", along with the
versions of plugins that were found.
During plan and apply, because the provider constraints need to be built
from a plan, they are not checked until the terraform.Context is
created. Since the context is always requested by the backend during the
Operation, the backend needs to be responsible for generating contextual
error messages for the user.
Instead of formatting the ResolveProviders errors during NewContext,
return a special error type, ResourceProviderError, to signal that
init will be required. The backend can then extract and format the
errors.
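A sketch of that flow, with simplified stand-ins (the real ResourceProviderError and NewContext in Terraform carry more detail): NewContext wraps the resolver errors in a dedicated type, and the backend type-asserts on it to produce the "run terraform init" guidance.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ResourceProviderError carries the individual provider-resolution errors so
// the backend can recognise the situation and format its own message.
type ResourceProviderError struct {
	Errors []error
}

func (e *ResourceProviderError) Error() string {
	msgs := make([]string, len(e.Errors))
	for i, err := range e.Errors {
		msgs[i] = err.Error()
	}
	return strings.Join(msgs, "\n")
}

// newContext stands in for terraform.NewContext: instead of formatting the
// resolver errors itself, it returns them wrapped in the special type.
func newContext(resolveErrs []error) error {
	if len(resolveErrs) > 0 {
		return &ResourceProviderError{Errors: resolveErrs}
	}
	return nil
}

func main() {
	err := newContext([]error{errors.New(`provider "aws": no suitable version installed`)})

	// The backend detects the special type and tells the user to run init.
	if rpe, ok := err.(*ResourceProviderError); ok {
		fmt.Println("Provider requirements cannot be satisfied; run `terraform init`:")
		fmt.Println(rpe.Error())
	}
}
```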
This change makes various minor adjustments to the rendering of plans
in the output of "terraform plan":
- Resources are identified using the standard resource address syntax,
rather than exposing the legacy internal representation used in the
module diff resource keys. This fixes #8713.
- Subjectively, having square brackets in the addresses made things look more
visually "off" when the same name with different indices was shown alongside
differing-length action "symbols", so the symbols are now all padded and
right-aligned to three characters for a consistent layout across all
operations.
- The -/+ action is now more visually distinct, using several different
colors to help communicate what it will do and including a more obvious
"(new resource required)" marker to help draw attention to this not
being just an update diff. This fixes #15350.
- The resources are now sorted in a manner that sorts index [10] after
index [9], rather than after index [1] as we did before. This makes it
easier to scan the list and avoids the common confusion where it seems
that there are only 10 items when in fact there are 11-20 items with
all the tens hiding further up in the list.
This is a specialized thin wrapper around parseResourceAddressInternal
that can be used to obtain a ResourceAddress from the keys in
ModuleDiff.Resources.
This is not something we'd ideally expose, but since the internal address
format is already exposed in the ModuleDiff object, it ends up being
necessary in order to process the ModuleDiff from other packages, e.g. for
display in the UI.
Lexicographic sorting by the string form produces the wrong result because
[9] sorts after [10], so this custom comparison function takes that into
account and compares each portion separately to get a more intuitive
result.
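As a simplified, runnable illustration of the idea (stand-in address type, not the real ResourceAddress): comparing the index as a number rather than as part of the string puts [9] before [10].

```go
package main

import (
	"fmt"
	"sort"
)

// addr is a simplified stand-in for a parsed resource address.
type addr struct {
	Type  string
	Name  string
	Index int // -1 when the resource has no index
}

// less compares field by field; because the index is compared as a number,
// [9] correctly sorts before [10].
func less(a, b addr) bool {
	if a.Type != b.Type {
		return a.Type < b.Type
	}
	if a.Name != b.Name {
		return a.Name < b.Name
	}
	return a.Index < b.Index
}

func main() {
	addrs := []addr{
		{"aws_instance", "web", 10},
		{"aws_instance", "web", 9},
		{"aws_instance", "web", 1},
	}
	sort.Slice(addrs, func(i, j int) bool { return less(addrs[i], addrs[j]) })
	for _, a := range addrs {
		fmt.Printf("%s.%s[%d]\n", a.Type, a.Name, a.Index)
	}
	// Prints [1], [9], [10] rather than the lexicographic [1], [10], [9].
}
```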