The import command wasn't loading the full plugin path for discovery.
Run a basic plugin init sequence, and verify we can find a plugin (even
though the plugin is invalid and will fail).
The import command wasn't loading the plugin path at all, relying on the
local directory for binaries.
Load the plugin dir into Meta, and pass in ForceLocal for consistency.
The Backend returned was going to be a Local anyway, so the added check
wasn't ensuring anything.
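As a rough illustration of the discovery change (the names, paths, and helper functions here are hypothetical stand-ins, not the actual Terraform internals), the idea is to search the init-managed plugin directory in addition to the working directory:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// pluginDirs returns the directories to search for provider binaries.
// Previously only "." was consulted; the fix also includes the
// init-managed plugin directory. Paths are illustrative only.
func pluginDirs(dataDir string) []string {
	return []string{
		".",                               // local directory, the old behaviour
		filepath.Join(dataDir, "plugins"), // init-managed plugin dir
	}
}

// discoverProviders globs each directory for provider binaries, which
// follow the terraform-provider-* naming convention.
func discoverProviders(dirs []string) []string {
	var found []string
	for _, dir := range dirs {
		matches, _ := filepath.Glob(filepath.Join(dir, "terraform-provider-*"))
		found = append(found, matches...)
	}
	return found
}

func main() {
	fmt.Println(discoverProviders(pluginDirs(".terraform")))
}
```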
The "confirm" method was directly checking the meta struct's input field,
but that only represents the -input command line flag, and doesn't
respect the TF_INPUT environment variable.
By calling the Input method instead, we check both.
This fixes #15338.
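A minimal sketch of the distinction (simplified; the real Meta type has more fields and the real Input method more cases):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Meta is a simplified stand-in for the CLI meta struct; only the
// input flag field is modeled here.
type Meta struct {
	input bool // value of the -input command line flag
}

// Input reports whether interactive input is allowed, honoring both
// the -input flag and the TF_INPUT environment variable, which is the
// behaviour "confirm" gets by calling this instead of reading the
// field directly.
func (m *Meta) Input() bool {
	if !m.input {
		return false
	}
	if v := os.Getenv("TF_INPUT"); v != "" {
		if b, err := strconv.ParseBool(v); err == nil {
			return b
		}
	}
	return true
}

func main() {
	m := &Meta{input: true}
	os.Setenv("TF_INPUT", "0")
	fmt.Println(m.Input()) // false: TF_INPUT is respected, not just the flag
}
```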
Added a new test that ensures that pre/post-diff hooks are not called
when EvalDiff is run with Stub set, tested through a full refresh run.
This helps test the expected behaviour of EvalDiff itself, versus the
end result of the diff being counted in a plan, which is what the
TestLocal_planScaleOutNoDupeCount test in backend/local checks.
Rather than overloading InstanceDiff with a "Stub" attribute that is
going to be largely meaningless, we are just going to skip
pre/post-diff hooks altogether. This is under the notion that we will
eventually not need to "stub" a diff for scale-out, stateless nodes on
refresh at all; at that point diff behaviour won't be necessary, so we
should not assume that hooks will run at this stage anyway.
As part of this, also removed the CountHook test that now fails
because CountHook is out of scope of the new behaviour.
We are changing the behaviour of the "stub" diff operation to skip the
pre/post-diff hooks on eval, which makes the existing test against
CountHook meaningless (it will simply fail), so we need a different
test here that checks the behaviour at a more general level.
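A hedged sketch of the resulting eval behaviour, with the hook interface and the eval node heavily simplified relative to the real ones:

```go
package main

import "fmt"

// Hook is a trimmed-down stand-in for Terraform's hook interface.
type Hook interface {
	PreDiff(resource string)
	PostDiff(resource string)
}

type printHook struct{}

func (printHook) PreDiff(r string)  { fmt.Println("pre-diff:", r) }
func (printHook) PostDiff(r string) { fmt.Println("post-diff:", r) }

// EvalDiff is a hypothetical, simplified version of the eval node: when
// Stub is set the diff work still happens, but the pre/post-diff hooks
// are skipped entirely, as described above.
type EvalDiff struct {
	Resource string
	Stub     bool
}

func (n *EvalDiff) Eval(h Hook) {
	if !n.Stub {
		h.PreDiff(n.Resource)
	}
	// ... compute the diff here ...
	if !n.Stub {
		h.PostDiff(n.Resource)
	}
}

func main() {
	(&EvalDiff{Resource: "aws_instance.foo", Stub: true}).Eval(printHook{})
	// No hook output: stubbed diffs never reach the hooks.
}
```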
This should make it clearer what we are doing in the EvalTree
scale-out test: ensuring that we get the correct eval sequence for a
node with no state through EvalTree.
We use the .# key primarily as a hint that we have a list, but its value
describes how many items the writer thinks were in the list.
Since this information is redundant with the _actual_ data, it's
potentially useful as a form of corrupted-data detection, but this function
isn't equipped to actually report on such a problem (no error return
value, and not in a good place for UI feedback anyway), so instead we'll
largely ignore this value and just go by the number of items we
encounter.
The result of this is that when the counts mismatch we will go by the
number of items actually holding the prefix, rather than panicking as
we would've before.
This fixes the crashes in #15300, #15135 and #15334, though it does not
address the root cause of the count being incorrect in the first place,
so there may be something else to fix elsewhere.
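The counting approach, sketched on a bare flatmap (the real expansion code lives elsewhere and also reconstructs the item values; this only shows the "go by what we encounter" idea):

```go
package main

import (
	"fmt"
	"strings"
)

// countListItems counts the items actually present under the given list
// prefix in a flatmap, instead of trusting the ".#" count key.
func countListItems(m map[string]string, prefix string) int {
	seen := map[string]struct{}{}
	for k := range m {
		if !strings.HasPrefix(k, prefix+".") {
			continue
		}
		key := k[len(prefix)+1:]
		if key == "#" {
			// The count hint is redundant with the data itself, so we
			// ignore it rather than panic on a mismatch.
			continue
		}
		// Take the first dotted component as the item index.
		if idx := strings.SplitN(key, ".", 2)[0]; idx != "" {
			seen[idx] = struct{}{}
		}
	}
	return len(seen)
}

func main() {
	m := map[string]string{
		"list.#":      "5", // stale count written by a buggy writer
		"list.0.name": "a",
		"list.1.name": "b",
	}
	fmt.Println(countListItems(m, "list")) // 2, going by the actual items
}
```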
When using a `state` command, if the `-state` flag is provided we do not
want to modify the Backend state. In this case we should always create a
local state instance.
The backup flag was also being ignored, and the tests that were relying
on that behaviour have been fixed.
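A rough sketch of the selection rule (the types and names below are stand-ins, not the real state manager interfaces):

```go
package main

import "fmt"

// State is a minimal stand-in for Terraform's state manager interface.
type State interface{ Describe() string }

type localState struct{ path, backupPath string }

func (s *localState) Describe() string {
	return fmt.Sprintf("local state %q (backup %q)", s.path, s.backupPath)
}

type backendState struct{}

func (backendState) Describe() string { return "state from the configured backend" }

// selectState sketches the decision described above: -state forces a
// purely local state instance so the backend's state is never touched,
// and the -backup flag is honored rather than ignored.
func selectState(statePath, backupPath string) State {
	if statePath != "" {
		if backupPath == "" {
			backupPath = statePath + ".backup"
		}
		return &localState{path: statePath, backupPath: backupPath}
	}
	return backendState{}
}

func main() {
	fmt.Println(selectState("custom.tfstate", "").Describe())
}
```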
If we provide a -state flag to a state command, we do not want terraform
to modify the backend state. This test fails since the state specified
in the backend doesn't exist
The s3.Backend was using its own code for DeleteState, but the DynamoDB
entries are only handled through the RemoteClient. Have DeleteState use
a RemoteClient for the delete.
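Sketched with trimmed-down stand-ins for the real backend types (fields, paths, and the key layout are illustrative assumptions), the delegation looks like this:

```go
package main

import "fmt"

// RemoteClient is a trimmed stand-in for the s3 backend's client; in
// the real backend it also talks to the DynamoDB table used for
// locking and consistency checksums.
type RemoteClient struct {
	bucket, path, dynamoTable string
}

// Delete removes the state object and, because it goes through the
// client, the matching DynamoDB entry as well.
func (c *RemoteClient) Delete() error {
	fmt.Printf("delete s3://%s/%s and dynamo entry in %s\n", c.bucket, c.path, c.dynamoTable)
	return nil
}

// Backend is a trimmed stand-in for the s3 Backend.
type Backend struct {
	bucket, dynamoTable string
}

// DeleteState builds a RemoteClient for the named state and delegates,
// instead of duplicating the S3 delete logic and leaving the DynamoDB
// entries behind.
func (b *Backend) DeleteState(name string) error {
	client := &RemoteClient{
		bucket:      b.bucket,
		path:        "env:/" + name + "/terraform.tfstate", // illustrative key layout
		dynamoTable: b.dynamoTable,
	}
	return client.Delete()
}

func main() {
	b := &Backend{bucket: "my-states", dynamoTable: "my-locks"}
	_ = b.DeleteState("staging")
}
```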
Error loading Terraform: Error downloading modules: error downloading 'ssh://git@bitbucket.org/acme/foo.git?bar': /usr/bin/git exited with 128: Cloning into '.terraform/modules/yadayada'...
invalid command syntax.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Remove "checksum" from the error, and only indicate that the plugin has
changed.
Always show the requested version, even if it's "any", along with the
versions of the plugins that were found.
During plan and apply, because the provider constraints need to be built
from a plan, they are not checked until the terraform.Context is
created. Since the context is always requested by the backend during the
Operation, the backend needs to be responsible for generating contextual
error messages for the user.
Instead of formatting the ResolveProviders errors during NewContext,
return a special error type, ResourceProviderError, to signal that
init will be required. The backend can then extract and format the
errors.
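A sketch of the pattern, with hypothetical simplified signatures (the real error type and the backend's message formatting differ):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ResourceProviderError carries the raw resolution failures so the
// caller (the backend) can format them with context the core lacks.
type ResourceProviderError struct {
	Errors []error
}

func (e *ResourceProviderError) Error() string {
	msgs := make([]string, len(e.Errors))
	for i, err := range e.Errors {
		msgs[i] = err.Error()
	}
	return strings.Join(msgs, "; ")
}

// newContext stands in for terraform.NewContext: instead of formatting
// provider-resolution failures itself, it returns them wrapped.
func newContext() error {
	return &ResourceProviderError{Errors: []error{
		errors.New("provider.aws: no suitable version installed"),
	}}
}

func main() {
	err := newContext()
	// The backend extracts the error type and produces the user-facing
	// message, typically telling the user that init will be required.
	var pErr *ResourceProviderError
	if errors.As(err, &pErr) {
		for _, e := range pErr.Errors {
			fmt.Printf("error satisfying provider requirements: %s\n", e)
		}
		fmt.Println(`run "terraform init" to install the required plugins`)
	}
}
```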