We previously had a shallow IsMarked call in compactValueStr's caller but
then a more conservative deep ContainsMarked call inside compactValueStr,
with a different resulting message. As well as causing inconsistent
messages, this was also a bit confusing because it made it seem like a
non-sensitive collection containing a sensitive element value was wholly
sensitive, making the debug information in the diagnostic messages
untrustworthy for debugging certain varieties of problem.
I originally considered just removing the redundant check in
compactValueStr here, but ultimately I decided to keep it as a sort of
defense in depth in case a future refactoring disconnects these two
checks. This should also serve as a prompt to someone making later changes
to compactValueStr to think about the implications of sensitive values
in there, which otherwise wouldn't be mentioned at all.
Disclosing information about a collection containing sensitive values is
safe here because compactValueStr only discloses information about the
value's type and element keys, and neither of those can be sensitive in
isolation. (Constructing a map with sensitive keys reduces to a sensitive
overall map.)
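For contrast, a minimal sketch of the two checks using the go-cty API; a plain string stands in for Terraform's internal sensitivity mark:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A non-sensitive list whose second element carries a mark. Terraform's
	// real sensitivity mark lives in an internal package; a plain string
	// stands in for it here.
	list := cty.ListVal([]cty.Value{
		cty.StringVal("ok"),
		cty.StringVal("secret").Mark("sensitive"),
	})

	fmt.Println(list.IsMarked())       // false: shallow check; the list itself is unmarked
	fmt.Println(list.ContainsMarked()) // true: deep check; a nested element is marked
}
```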
To ensure that the apply command can determine whether an operation is
executed locally or remotely, we add an IsLocalOperations method on the
remote backend. This returns the internal forceLocal boolean.
We also update this flag after checking if the corresponding remote
workspace is in local operations mode or not. This ensures that we know
if an operation is running locally (entirely on the practitioner's
machine), pseudo-locally (on a Terraform Cloud worker), or remotely
(executing on a worker, rendering locally).
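A minimal sketch of the new method, with illustrative type and field names:

```go
package remote // illustrative

// Remote is a stand-in for the remote backend type; forceLocal is the
// internal flag described above.
type Remote struct {
	forceLocal bool
}

// IsLocalOperations exposes the internal forceLocal flag so callers such
// as the apply command can tell whether operations run locally.
func (b *Remote) IsLocalOperations() bool {
	return b.forceLocal
}
```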
Disabling the resource count and outputs rendering when the remote
backend is in use causes them to be omitted from Terraform Cloud runs.
This commit changes the condition to render these values if either the
remote backend is not in use, or the command is running in automation
via the TF_IN_AUTOMATION flag. As this is intended to be set by
Terraform Cloud and other remote backend implementations, this addresses
the problem.
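The adjusted condition amounts to something like the following sketch (names illustrative, not Terraform's exact identifiers):

```go
package command // illustrative

// shouldRenderSummary reports whether the command should render the
// resource count and output values itself.
func shouldRenderSummary(usingRemoteBackend, inAutomation bool) bool {
	// Render unless the remote backend is in use; when it is, the
	// TF_IN_AUTOMATION flag indicates this process is itself the remote
	// worker doing the rendering, so rendering is still wanted there.
	return !usingRemoteBackend || inAutomation
}
```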
Fix two bugs which surface when using the remote backend:
- When migrating to views, we removed the call to `(*Meta).process`
which initialized the color boolean. This resulted in the legacy UI
calls in the remote backend stripping color codes. To fix this, we
populate this boolean from the common arguments.
- Remote apply will output the resource summary and output changes, and
these are rendered via the remote backend streaming. We need to
special case this in the apply command and prevent displaying a
zero-change summary line.
Neither of these is coverable by automated tests, as we don't have any
command-package level testing for the remote backend. Manually verified.
When rendering the JSON plan sensitivity output, if the plan contained
unknown collection or structural types, Terraform would crash. We need
to detect unknown values before attempting to iterate them.
Unknown collection or structural values cannot have sensitive contents
accidentally displayed, as those values are not known until after apply.
As a result we return an empty value of the appropriate type for the
sensitivity mapping.
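A simplified sketch of the guard, using the go-cty API; the real sensitivity mapping in the jsonplan package handles more cases than shown here:

```go
package jsonplan // illustrative

import "github.com/zclconf/go-cty/cty"

// sensitivityValue sketches the fix: check IsKnown before walking a
// collection or structural value. An unknown value cannot leak sensitive
// contents, so an empty value of the appropriate kind stands in for its
// sensitivity mapping.
func sensitivityValue(v cty.Value) cty.Value {
	ty := v.Type()
	if !v.IsKnown() {
		switch {
		case ty.IsListType() || ty.IsSetType() || ty.IsTupleType():
			return cty.EmptyTupleVal
		case ty.IsMapType() || ty.IsObjectType():
			return cty.EmptyObjectVal
		default:
			return cty.False
		}
	}
	// ...known values are walked element by element (omitted here)...
	return cty.False
}
```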
If the provider locks have not changed, there is no need to rewrite the
locks file. Preventing this needless rewrite should allow Terraform to
operate in a read-only directory, so long as the provider requirements
don't change.
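One plausible way to implement the guard, comparing serialized content; the real implementation may compare lock data structures instead, so this byte comparison is a simplification:

```go
package main

import (
	"bytes"
	"os"
)

// writeLocksIfChanged only touches the locks file when the serialized
// content differs from what is already on disk, so a read-only working
// directory keeps working while provider requirements are stable.
func writeLocksIfChanged(path string, newContent []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unchanged: skip the rewrite entirely
	}
	return os.WriteFile(path, newContent, 0644)
}
```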
When an output value changes, we have a small amount of information we
can convey about its sensitivity. If either the output was previously
marked sensitive, or is currently marked sensitive in the config, this
is tracked in the output change data.
This commit encodes this boolean in the change struct's
`before_sensitive` and `after_sensitive` fields, in a way which matches
resource value sensitivity. Since we have so little information to work
with, these two values will always be booleans, and will always equal
each other.
This is logically consistent with how we obscure sensitive data
elsewhere: a changing output which was or is marked sensitive should not
have its value shown in human-readable output.
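A simplified sketch of the resulting shape; the real struct in the jsonplan package carries more fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// outputChange is a simplified stand-in for the jsonplan output change.
type outputChange struct {
	Before          json.RawMessage `json:"before"`
	After           json.RawMessage `json:"after"`
	BeforeSensitive bool            `json:"before_sensitive"`
	AfterSensitive  bool            `json:"after_sensitive"`
}

func main() {
	// For outputs the two sensitivity fields are always equal booleans:
	// true if the output was or is marked sensitive.
	c := outputChange{
		Before:          json.RawMessage(`"old"`),
		After:           json.RawMessage(`"new"`),
		BeforeSensitive: true,
		AfterSensitive:  true,
	}
	out, _ := json.Marshal(c)
	fmt.Println(string(out))
}
```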
Similar to `after_unknown`, `before_sensitive` and `after_sensitive` are
values with similar structure to `before` and `after` which encode the
presence of sensitive values in a planned change. These should be used
to obscure sensitive values from human-readable output.
These values follow the same structure as the `before` and `after`
values, replacing sensitive values with `true`, and non-sensitive values
with `false`. Following the `after_unknown` precedent, we omit
non-sensitive `false` values for object attributes/map values, to make
serialization more compact.
One difference from `after_unknown` is that a sensitive complex value
(collection or structural type) is replaced with `true`. If the complex
value itself is sensitive, all of its contents should be obscured.
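To make the shape concrete, here is a hypothetical resource change fragment following those rules; the attribute names are invented for illustration:

```go
package example

// "password" is a sensitive leaf, "labels" is a wholly sensitive map
// (collapsed to true), and the non-sensitive "name" attribute is omitted
// from the sensitivity objects for compactness.
const changeExample = `{
  "before": {"name": "app", "password": "hunter2", "labels": {"env": "prod"}},
  "after":  {"name": "app", "password": "s3cret", "labels": {"env": "prod"}},
  "before_sensitive": {"password": true, "labels": true},
  "after_sensitive": {"password": true, "labels": true}
}`
```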
We have these funny extra options that date back to before Terraform even
had remote state, which we've preserved along the way by most recently
incorporating them as special-case overrides for the local backend.
The documentation we had for these has grown less accurate over time as
the details have shifted, and was in many cases missing the requisite
caveats that they are only for the local backend and that backend
configuration is the modern, preferred way to deal with the use-cases they
were intended for.
We always have a bit of a tension with this sort of legacy option because
we want to keep them documented just enough to be useful to someone who
finds an existing script/etc using them and wants to know what they do,
but not to take up so much space that they might distract users from
finding the modern alternative they should consider instead.
As a compromise in that vein here I've created a new section about these
options under the local backend documentation, which then gives us the
space to go into some detail about the various behaviors and interactions
and also to discuss their history and our recommended alternatives. I then
simplified all of the other mentions of these in command documentation
to just link to or refer to the local backend documentation. My hope then
is that folks who need to know what these do can still find the docs, but
that information can be kept out of the direct path of new users so they
can focus on learning about remote backends instead.
This is certainly not the most ideal thing ever, but it seemed like the
best compromise between the competing priorities I described above.
The formatter for value expressions which use legacy interpolation
syntax was previously behaving incorrectly with some multi-line
expressions. Any HCL expression which requires parentheses in order to
be allowed to span multiple lines could skip those parens when already
inside string interpolation (`"${}"`).
When removing string interpolation, we now check for a resulting
multi-line expression, and conservatively ensure that it starts and ends
with parentheses. These may be redundant, as not all expressions require
parens to permit spanning multiple lines, but at least it will be valid
output.
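A hypothetical before/after pair (expression invented for illustration):

```go
package example

// Inside "${...}" the multi-line conditional needs no parentheses, but
// once the interpolation wrapper is removed it does, so the formatter
// adds them to guarantee the result still parses.
const before = `"${var.enabled ?
var.a :
var.b}"`

const after = `(var.enabled ?
var.a :
var.b)`
```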
* checkpoint save: update InternalValidate tests to compare exact error
* configschema: extract and extend attribute validation
This commit adds an attribute-specific InternalValidate which was extracted directly from the block.InternalValidate logic and extended to verify any NestedTypes inside an Attribute; a sketch of the rule appears after this list. Only one error message changed, since it is now valid to have a cty.NilType for Attribute.Type as long as NestedType is set.
* terraform: validate provider schemas during NewContext
We haven't been able to guarantee that providers are validating their own schemas using (some version of) InternalValidate since providers were split out of the main codebase. This PR adds a call to InternalValidate when provider schemas are initially loaded by NewContext, which required a few other changes:
InternalValidate's handling of errors vs multierrors was a little weird - before this PR, it occasionally returned a non-nil error which only stated "0 errors occurred" - so I addressed that in InternalValidate. I then tested this with a configuration that used all of our most popular providers, and found that at least one provider had some invalid attribute names, so I commented that particular validation out. Adding it back in would be a breaking change which we would have to coordinate with enablement and providers and (especially in this case) make sure is well communicated to external provider developers.
I ran a few very unscientific tests comparing the timing with and without this validation, and it appeared to only cause a sub-second increase.
* refactor validate error message to closer match the sdk's message
* better error message
* tweak error message: move the instruction to run init to the end of the message, after the specific error.
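As referenced above, a simplified sketch of the attribute-level rule, with stand-in types modeled loosely on configschema; this is not the exact implementation:

```go
package configschema // illustrative

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// Simplified stand-ins for the real configschema types.
type Attribute struct {
	Type       cty.Type
	NestedType *Object
}

type Object struct {
	Attributes map[string]*Attribute
	// Nesting mode omitted for brevity.
}

// internalValidate sketches the rule: exactly one of Type or NestedType
// must be set, and NestedTypes are validated recursively.
func (a *Attribute) internalValidate(name string) error {
	switch {
	case a.Type == cty.NilType && a.NestedType == nil:
		return fmt.Errorf("%s: either Type or NestedType must be set", name)
	case a.Type != cty.NilType && a.NestedType != nil:
		return fmt.Errorf("%s: cannot set both Type and NestedType", name)
	case a.NestedType != nil:
		for n, na := range a.NestedType.Attributes {
			if err := na.internalValidate(name + "." + n); err != nil {
				return err
			}
		}
	}
	return nil
}
```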
Support for attributes with NestedTypes was added in https://github.com/hashicorp/terraform/pull/28055, and should have included a format version bump: this is a backwards-compatible change, but consumers will need to be updated in order to properly decode attributes (with NestedTypes) going forward.
In line with the other complex JSON output formats for plan and provider
schema, here we add an explicit `format_version` field to the JSON
output of terraform validate.
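For illustration, a minimal sketch of such an output struct; the surrounding fields and the version string are placeholders rather than the real values:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateOutput carries an explicit version alongside the result fields.
type validateOutput struct {
	FormatVersion string `json:"format_version"`
	Valid         bool   `json:"valid"`
	ErrorCount    int    `json:"error_count"`
	WarningCount  int    `json:"warning_count"`
}

func main() {
	out, _ := json.Marshal(validateOutput{FormatVersion: "0.1", Valid: true})
	fmt.Println(string(out)) // {"format_version":"0.1","valid":true,...}
}
```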
* format/diff: extract attributes-writing logic to a function
This is a stepping-stone commit (for easier reviewability, and to prove that tests did not change) as part of writing a NestedType-specific diff printer.
* command/format: add support for formatting attributes with NestedTypes
This commit adds custom formatting for NestedType attributes. The logic was mostly copied from the block diff printer, with minor tweaks here and there. I used the (excellent) existing test coverage and added a NestedType attribute to every test.
Since the (nested-block specific) test schemas were nearly identical, I added a function that returns the schema with the requested NestingMode.
Now that we have a comprehensive JSON diagnostic structure, we can use
it in the `validate -json` output instead of the inline version. Note
that this changes the output of `validate -json` in two ways:
1. We fix some off-by-one errors caused by zero-width highlight ranges.
This aligns the JSON diagnostic output with the text output seen by
most Terraform users, so I consider this a bug fix.
2. We add the `snippet` field to the JSON diagnostics where available.
This is purely additive and is permitted under our JSON format
stability guarantees.
This commit adds a comprehensive JSON format for diagnostics, which
ensures that all current diagnostic output can be semantically
represented in a machine-readable format. The diagnostic formatter
interface remains unchanged, but it first transforms its input via the
JSON format to ensure that there is only one code path for creating the
diagnostic data.
The JSON diagnostic renderer extracts the non-presentational logic from
the format package, and returns a structure which can either be
marshaled into JSON or rendered as text. The resulting text diagnostic
output is unchanged for all cases covered by unit tests and my own
manual testing.
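For orientation, a sketch of the kind of structure involved; field names follow the shape described here but are illustrative rather than the exact definitions:

```go
package viewsjson // illustrative

// Diagnostic is a simplified stand-in for the JSON diagnostic structure.
type Diagnostic struct {
	Severity string   `json:"severity"` // "error" or "warning"
	Summary  string   `json:"summary"`
	Detail   string   `json:"detail"`
	Range    *Range   `json:"range,omitempty"`
	Snippet  *Snippet `json:"snippet,omitempty"`
}

type Range struct {
	Filename string `json:"filename"`
	Start    Pos    `json:"start"`
	End      Pos    `json:"end"`
}

type Pos struct {
	Line   int `json:"line"`
	Column int `json:"column"`
	Byte   int `json:"byte"`
}

// Snippet carries the source excerpt used both for JSON output and for
// rendering the text diagnostic.
type Snippet struct {
	Context   *string `json:"context"`
	Code      string  `json:"code"`
	StartLine int     `json:"start_line"`
}
```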
Included in this commit are a number of golden reference files for the
marshaled JSON output of a diagnostic. This format should change rarely
if at all, and these are in place to ensure that any changes to the
format are intentional and considered.
This PR extends jsonprovider to support attributes with NestedTypes and extends test coverage in jsonprovider and the providers schemas tests. I've also cleaned up some comments and extracted the logic to parse the nesting mode so it can be used in both marshalling blocks and attributes.
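To make that concrete, a hypothetical fragment of the marshaled schema for an attribute that uses a NestedType; the attribute and its contents are invented for illustration:

```go
package example

// An attribute may now carry "nested_type" in place of "type".
const attributeExample = `{
  "rule": {
    "nested_type": {
      "attributes": {
        "action": {"type": "string", "optional": true}
      },
      "nesting_mode": "list"
    },
    "optional": true
  }
}`
```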
* Add helper suggestion when failed registry err
When someone hits a registry error on init, remind them that they
should have required_providers in every module
* Give suggestion for a provider based on reqs
Suggest another provider on a registry error, from the list of
requirements we have on init. This skips the legacy lookup
process if there is a similar provider existing in requirements.
Fixes #27506
Add a new flag `-lockfile=readonly` to `terraform init`.
It would be useful to allow us to suppress dependency lockfile changes
explicitly.
The type of the `-lockfile` flag is string rather than bool, leaving
room for future extensions to other behavior variants.
The readonly mode suppresses lockfile changes, but should verify
checksums against the information already recorded. It should conflict
with the `-upgrade` flag.
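A rough sketch of that behavior, with illustrative names and Go's standard flag package standing in for Terraform's actual CLI plumbing:

```go
package main

import (
	"flag"
	"fmt"
)

// validateLockfileFlag sketches the rules above: the flag is a string to
// leave room for future modes, and readonly conflicts with -upgrade.
func validateLockfileFlag(args []string) error {
	flags := flag.NewFlagSet("init", flag.ContinueOnError)
	lockfileMode := flags.String("lockfile", "", "dependency lockfile mode")
	upgrade := flags.Bool("upgrade", false, "upgrade provider selections")
	if err := flags.Parse(args); err != nil {
		return err
	}
	switch *lockfileMode {
	case "", "readonly":
		// readonly suppresses writes but still verifies checksums
	default:
		return fmt.Errorf("unsupported -lockfile mode %q", *lockfileMode)
	}
	if *lockfileMode == "readonly" && *upgrade {
		return fmt.Errorf("-lockfile=readonly cannot be used with -upgrade")
	}
	return nil
}
```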
Note: In the original use-case described in #27506, I would like to
suppress adding zh hashes, but the test code here suppresses adding h1
hashes because it's easy for testing.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
The previous implementation of views was copying and embedding the base
View struct in each individual view. While this allowed for easy access
to the interface of that struct (both in the view and externally), it
also, more importantly, completely broke the ability of the diagnostic
printer to output source code snippets.
This is because the `configSources` field on the base view is lazily set
after the config loader is initialized. In the commands ported to use
views, this happens after the base View struct is copied, so we are
updating the wrong copy of the struct.
This commit fixes this with a simple mechanical refactor: keep a pointer
to the base View struct instead, and update all of the individual views
to explicitly refer to that struct to access its fields and methods.
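In simplified form, the change looks something like this (type and field names illustrative):

```go
package views // illustrative

// View is a simplified stand-in for the base view; configSources is set
// lazily, only after the config loader is initialized.
type View struct {
	configSources func() map[string][]byte
}

// Before: the base struct was copied by value, so the later assignment to
// configSources happened on a different copy and was never seen here.
type applyViewBroken struct {
	view View
}

// After: every individual view holds a pointer to the one base View, so
// lazy updates to its fields are visible everywhere.
type applyView struct {
	view *View
}
```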
This is not a particularly satisfying solution, but I can't find
anything clearly better. It might be worth exploring the alternative
approach in the view for the new test command, which explicitly pulls
its dependencies out of the base view, rather than retaining a full
reference. Maybe there's a third way which is better still.
Instead of returning an error with no context about unexpected
attributes or incorrect types, notify users that the schema stored in
the state does not match the current provider.
Users can only encounter this error if a provider has updated its
schema since the state was stored. This can appear when running
`terraform show -json` to display the current state, or
`terraform show -json planfile` if that plan was created with
`-refresh=false`. In either case, the state must be refreshed in order
to be properly JSON encoded.
The auto-approve argument was part of the arguments.Operation type,
which resulted in adding a silent -auto-approve flag to plan and
refresh. This was unintended, and is fixed in this commit by moving the
flag to the arguments.Apply type and updating the downstream callers.
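In simplified form, with field layout modeled loosely on the arguments package:

```go
package arguments // illustrative

// Operation holds flags shared by plan, apply, and refresh; AutoApprove
// no longer lives here, so plan and refresh stop parsing it silently.
type Operation struct {
	// -target, -var, and other shared flags (omitted)
}

// Apply holds flags specific to the apply command.
type Apply struct {
	Operation   *Operation
	AutoApprove bool // only the apply command defines this flag
}
```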
This is just a prototype to gather some feedback in our ongoing research
on integration testing of Terraform modules. The hope is that by having a
command integrated into Terraform itself it'll be easier for interested
module authors to give it a try, and also easier for us to iterate quickly
based on feedback without having to coordinate across multiple codebases.
Everything about this is subject to change even in future patch releases.
Since it's a CLI command rather than a configuration language feature it's
not using the language experiments mechanism, but generates a warning
similar to the one language experiments generate in order to be clear that
backward compatibility is not guaranteed.
* Add support for plugin protocol v6
This PR turns on support for plugin protocol v6. A provider can
advertise itself as supporting protocol version 6 and Terraform will
use the correct client.
Todo:
The "unmanaged" providers functionality does not support protocol
version, so at the moment terraform will continue to assume that
"unmanaged" providers are on protocol v5. This will require some
upstream work on go-plugin (I believe).
I would like to convert the builtin providers to use protocol v6 in a
future PR; however it is not necessary until we remove protocol v6.
* add e2e test for using both plugin protocol versions
- copied grpcwrap and made a version that returns a protocol v6 provider
- copied the test provider, provider-simple, and made a version that's
using protocol v6 with the above func
- added an e2etest
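A rough sketch of the version dispatch described above, with hypothetical stand-in client types; the real clients live in Terraform's plugin packages:

```go
package main

import "fmt"

// providerClient and the two client types are hypothetical stand-ins.
type providerClient interface{ ProtocolVersion() int }

type clientV5 struct{}
type clientV6 struct{}

func (clientV5) ProtocolVersion() int { return 5 }
func (clientV6) ProtocolVersion() int { return 6 }

// newProviderClient uses whichever client matches the protocol version
// the provider advertised during the handshake.
func newProviderClient(protocolVersion int) (providerClient, error) {
	switch protocolVersion {
	case 5:
		return clientV5{}, nil
	case 6:
		return clientV6{}, nil
	default:
		return nil, fmt.Errorf("unsupported provider protocol version %d", protocolVersion)
	}
}
```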
This code does not appear to have any effect. The operation request has
its PlanOutBackend field populated directly in the Meta.Operation
method, by calling m.backendForState.ForPlan().
I tested a trivial null-resource config with a Consul backend, and the
saved plans with and without this code present were identical.