This requires more GitHub token permissions than we have, and it's
inessential. The backport PRs should have reviews assigned to the
original PR author anyway.
When rendering planned output changes, we need to filter the plan's
output changes so that only root module outputs that have actually changed
are rendered. Otherwise we will render changes for submodule outputs,
and (with the concise diff disabled) also render unchanged outputs.
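For illustration, a minimal sketch of the distinction, assuming a
hypothetical child module at `./modules/app`:

```hcl
# Root module (main configuration): this output is rendered among the
# planned output changes, and only when its value has changed.
output "endpoint" {
  value = module.app.endpoint
}

# modules/app/outputs.tf: this child module output feeds the root output
# above, but should not itself appear in the rendered plan.
output "endpoint" {
  value = "app.example.com"
}
```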
* The index must be a non-negative integer; also added instructions on
how to get the last value in the list (an example follows below).
* Typo fix
Co-authored-by: Nick Fagerlund <nick@hashicorp.com>
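For illustration, the last-element pattern described above, assuming a
list variable named `items`:

```hcl
variable "items" {
  type = list(string)
}

locals {
  # Indexing with a negative number is an error, so derive the last
  # index from the list length instead.
  last_item = var.items[length(var.items) - 1]
}
```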
Previously, when printing the relevant variables involved in a failed
expression evaluation, we would just skip over unknown values entirely.
There are some errors, though, which are _caused by_ a value being
unknown, in which case it's helpful to show which of the inputs to that
expression were known vs. unknown so that the user can limit their further
investigation only to the unknown ones.
While here I also added a special case for sensitive values that overrides
all other display, because we don't know what it is about a value that
makes it sensitive, so it's better to give nothing away at the expense of
a slightly less helpful error message.
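As a contrived sketch of one shape such an unknown-caused error can take
(the random_pet/null_resource pairing here is just for illustration):

```hcl
resource "random_pet" "name" {}

resource "null_resource" "per_char" {
  # random_pet.name.id is unknown during the initial plan, so this fails
  # with "Invalid count argument". With this change, the diagnostic can
  # show that random_pet.name.id was the unknown input, rather than
  # omitting it from the list of relevant values.
  count = length(random_pet.name.id)
}
```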
Our diagnostics model allows for optionally annotating an error or warning
with information about the expression and eval context it was generated
from, which the diagnostic renderer for the UI will then use to give the
user some additional hints about what values may have contributed to the
error.
Previously, though, we didn't have those annotations on the results of
evaluating for_each expressions, because in that case we were using a
helper function to evaluate the expression in one shot and thus never had
a reference to the EvalContext to include in the diagnostic values.
Now, at the expense of having to handle the evaluation at a slightly lower
level of abstraction, we'll annotate all of the for_each error messages
with source expression information. This is valuable because we often see
users confused about how their complex for_each expressions ended up being
invalid, and hopefully giving some information about what the inputs were
will allow more users to self-solve.
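For example (a contrived configuration; the variable and resource here
are assumptions), a for_each value that turns out to be a tuple rather
than a map or set:

```hcl
variable "users" {
  type = list(object({ name = string }))
}

resource "aws_iam_user" "all" {
  # A for expression over a list yields a tuple, but for_each requires a
  # map or a set of strings, so this fails. The error now carries the
  # source expression and the values that contributed to it.
  for_each = [for u in var.users : u.name]
  name     = each.value
}
```

Here the fix would be to wrap the projection in toset(...).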
From this version of Terraform forward, we no longer refuse to read
compatible state files written by future versions of Terraform. This is
a commitment that any changes to the semantics or format of the state
file after this commit will require a new state file version 5.
The result of this is that users of this Terraform version will be able
to share remote state with users of future versions, and all users will
be able to read and write state. This will be true until the next major
state file version is required.
This does not affect users of previous versions of Terraform, which will
continue to refuse to read state written by later versions.
This is now mostly unused, since we no longer need to interrupt a
series of eval node executions.
The exception was the stopHook, which is still used to halt execution
when there's an interrupt. Since an interrupted run should not complete
successfully, we use a normal opaque error to halt everything and return
it to the UI.
We can work on coalescing or hiding these if necessary in a separate PR.
Previously we were only verifying locked hashes for local archive zip
files, but if we have non-ziphash hashes available then we can and should
also verify that a local directory matches at least one of them.
This does mean that folks who use filesystem mirrors but also run
Terraform across multiple platforms will need to take some extra care to
ensure the hashes pass on all relevant platforms. That could mean using
"terraform providers lock" to pre-seed their lock files with hashes for
all platforms, or using the "packed" directory layout for the filesystem
mirror so that Terraform ends up in the install-from-archive codepath
instead of this install-from-directory codepath, and can thus verify the
ziphash too.
(There's no additional documentation about the above here because there's
already general information about this in the lock file documentation
due to some similar -- though not identical -- situations with network
mirrors.)
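For reference, a minimal sketch of a CLI configuration block that routes
installation through a filesystem mirror; the path and include pattern
here are assumptions:

```hcl
provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"
    include = ["registry.terraform.io/*/*"]
  }
  direct {
    exclude = ["registry.terraform.io/*/*"]
  }
}
```

With the "packed" layout, that directory holds the original release .zip
archives, so installation takes the install-from-archive codepath and
ziphash verification still applies.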
We previously had some tests covering happy paths and a few specific
failures when installing into an empty directory with no existing locks,
but we didn't have tests for the installer respecting existing lock file
entries.
This is a start on a more exhaustive set of tests for the installer,
aiming to visit as many of the possible codepaths as we can reasonably
test using this mocking strategy. (Some other codepaths require different
underlying source implementations, etc, so we'll have to visit those in
other tests separately.)
This won't be a typical usage pattern for normal code, but will be useful
for tests that need to work with locks as input so that they don't need to
write out a temporary file on disk just to read it back in immediately.
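For context, the lock data being parsed has the same shape as a
`.terraform.lock.hcl` entry, which such a test could now build as an
in-memory string; the provider address, version, and hash below are
placeholders:

```hcl
provider "registry.terraform.io/hashicorp/null" {
  version     = "3.1.0"
  constraints = "~> 3.1"
  hashes = [
    "h1:EXAMPLEPLACEHOLDERHASHVALUE=",
  ]
}
```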
* Update module-registry-protocol.html.md

  1. There is a mismatch in the segment labels for the version query URL
     (`:system` vs. `:provider`).
  2. There is a discrepancy between the documentation and the actual
     generated request for retrieving module source code (4 URL segments
     vs. 3): there is no segment for "provider".

* Update module-registry-protocol.html.md

  Changed `:system` to `:provider` for the versions and source API URLs.
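For reference, with that fix both documented endpoints use `:provider` as
the third path segment (shown here assuming the default `/v1/modules`
prefix obtained from service discovery):

```
GET /v1/modules/:namespace/:name/:provider/versions
GET /v1/modules/:namespace/:name/:provider/:version/download
```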