Convert a global reference to a specific AbsResource and attribute pair.
The hcl.Traversal is converted to a cty.Path at this point because plan
rendering is based on cty values.
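As a rough sketch of the kind of mapping involved (not necessarily the exact implementation), a traversal's relative steps correspond to cty path steps something like this, where the root step is assumed to already be captured by the AbsResource address:

    package sketch

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    // traversalToPath sketches how an hcl.Traversal's steps might map onto
    // a cty.Path; the real conversion in the plan renderer may differ.
    func traversalToPath(traversal hcl.Traversal) cty.Path {
        var path cty.Path
        for _, step := range traversal {
            switch ts := step.(type) {
            case hcl.TraverseAttr:
                path = append(path, cty.GetAttrStep{Name: ts.Name})
            case hcl.TraverseIndex:
                path = append(path, cty.IndexStep{Key: ts.Key})
            case hcl.TraverseRoot:
                // The root name identifies the resource itself, which is
                // assumed to be captured by the AbsResource address already.
            }
        }
        return path
    }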
Our existing functionality for dealing with references generally only has
to concern itself with one level of references at a time, and only within
one module, because we use it to draw a dependency graph which then ends
up reflecting the broader context.
However, there are some situations where it's handy to be able to ask
questions about the indirect contributions to a particular expression in
the configuration, particularly for additional hints in the user interface
where we're just providing some extra context rather than changing
behavior.
This new "globalref" package therefore aims to be the home for algorithms
for use-cases like this. It introduces its own special "Reference" type
that wraps addrs.Reference to also annotate it with the usually-implied
context about where the references would be evaluated.
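As a rough sketch of the shape this implies (the real type may differ in detail):

    package globalref

    import "github.com/hashicorp/terraform/internal/addrs"

    // Reference pairs a local reference with the module instance whose
    // scope it would be evaluated in, which other parts of Terraform
    // usually leave implied. This is only a sketch of the idea above.
    type Reference struct {
        // ContainingModuleInstance is the module instance that provides
        // the evaluation scope for LocalRef.
        ContainingModuleInstance addrs.ModuleInstance

        // LocalRef is the reference as it appears in an expression within
        // that module.
        LocalRef *addrs.Reference
    }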
With that building block we can therefore ask questions whose answers
might involve discussing references in multiple modules at once, such as
"which resources directly or indirectly contribute to this expression?",
including indirect hops through input variables or output values which
would therefore change the evaluation context.
The current implementations of this center on mapping references onto the
static configuration expressions that they refer to, which is a pretty
broad and conservative approach that unfortunately loses accuracy when
confronted with complex expressions that might take dynamic actions on the
contents of an object. My hunch is that this'll be good enough to get some
initial small use-cases solved, though there's plenty of room for
improvement in accuracy.
It's somewhat ironic that this sort of "what is this value built from?"
question is the use-case I had in mind when I designed the "marks" feature
in cty, yet we've ended up putting it to an unexpected but still valid
use in Terraform for sensitivity analysis, and our current handling of
that isn't really tight enough to permit other concurrent uses of marks
for other use-cases. I expect we can address that later and so maybe we'll
try for a more accurate version of these analyses at a later date, but my
hunch is that this'll be good enough for us to still get some good use out
of it in the near future, particularly in helping to understand where
unknown values came from and in tailoring our refresh results in plan
output to deemphasize detected changes that couldn't possibly have
contributed to the proposed plan.
This commit introduces a capsule type, `TypeType`, which is used to
extract type information from the console-only `type` function. In
combination with the `TypeType` mark, this allows us to restrict the use
of this function to top-level display of a value's type. Any other use
of `type()` will result in an error diagnostic.
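A minimal sketch of the mechanism, using an illustrative mark value and wrapping that may differ from the real implementation:

    package sketch

    import (
        "reflect"

        "github.com/zclconf/go-cty/cty"
    )

    // TypeType is a capsule type that lets a cty.Type travel through the
    // evaluator as if it were a value. (Sketch only; the real definition
    // and its associated mark live elsewhere in the codebase.)
    var TypeType = cty.Capsule("type", reflect.TypeOf(cty.Type{}))

    // typeMark stands in for the mark that flags a value as the result of
    // the console-only type() function, so the console can display it at
    // the top level and reject any other use with an error diagnostic.
    type typeMark struct{}

    // wrapType packages a type as a marked capsule value.
    func wrapType(ty cty.Type) cty.Value {
        return cty.CapsuleVal(TypeType, &ty).Mark(typeMark{})
    }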
These instances of marks.Raw usage were semantically only testing the
properties of combining multiple marks. Testing this with an arbitrary
value for the mark is just as valid and clearer.
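For example, since marks are just arbitrary comparable values, the combining behavior can be exercised with placeholder marks (a sketch, not the actual test code):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // Combining two distinct marks on one value behaves the same no
        // matter what the mark values actually are.
        v := cty.StringVal("hello").Mark("first").Mark("second")
        fmt.Println(v.HasMark("first"), v.HasMark("second")) // true true
    }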
This is not currently gated by the experiment only because it is awkward
to do so in the context of evaluationStateData, which doesn't have any
concept of experiments at the moment.
Previously we were just returning a string representation of the file mode,
which spends more characters on the irrelevant permission bits than it
does on the directory entry type, and is presented in a Unix-centric
format that likely won't be familiar to the user of a Windows system.
Instead, we'll recognize a few specific directory entry types that seem
worth mentioning by name, and then use a generic message for the rest.
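A sketch of the sort of classification involved, with illustrative wording that may not match the real messages:

    package main

    import (
        "fmt"
        "os"
    )

    // entryTypeDescription names a few directory entry types that seem
    // worth calling out specifically, with a generic fallback for the rest.
    func entryTypeDescription(mode os.FileMode) string {
        switch {
        case mode.IsDir():
            return "is a directory"
        case mode&os.ModeSymlink != 0:
            return "is a symbolic link"
        case mode&os.ModeNamedPipe != 0:
            return "is a named pipe"
        case mode&os.ModeDevice != 0:
            return "is a device node"
        case mode.IsRegular():
            return "is a regular file"
        default:
            return "is not a regular file"
        }
    }

    func main() {
        if fi, err := os.Lstat("."); err == nil {
            fmt.Println(".", entryTypeDescription(fi.Mode()))
        }
    }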
The original motivation here was actually to deal with the fact that our
tests for this function were previously not portable due to the error
message leaking system-specific permission details that are not relevant
to the test. Rather than just directly addressing that portability
problem, I took the opportunity to improve the error messages at the same
time.
However, because of that initial focus there are only actually tests here
for the directory case. A test that tries to test any of these other file
modes would not be portable and in some cases would require superuser
access, so we'll just leave those cases untested for the moment since they
weren't tested before anyway, and so we've not _lost_ any test coverage
here.
Some function errors include values derived from arguments. This commit
is the result of a manual audit of these errors, which led to:
- Adding a helper function to redact sensitive values (see the sketch
  after this list);
- Applying that helper function where errors include values derived from
possibly-sensitive arguments;
- Cleaning up other errors which need not include those values, or were
otherwise incorrect.
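A minimal sketch of the kind of helper involved, using a stand-in mark since Terraform's real sensitivity mark lives in an internal package:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // sensitiveMark stands in for the real sensitivity mark.
    type sensitiveMark struct{}

    // redactIfSensitive returns a placeholder instead of rendering the
    // value when it carries the sensitivity mark, so that function error
    // messages don't leak possibly-sensitive argument values.
    func redactIfSensitive(v cty.Value) string {
        if v.HasMark(sensitiveMark{}) {
            return "(sensitive value)"
        }
        return fmt.Sprintf("%#v", v)
    }

    func main() {
        secret := cty.StringVal("hunter2").Mark(sensitiveMark{})
        fmt.Println(redactIfSensitive(secret))              // (sensitive value)
        fmt.Println(redactIfSensitive(cty.StringVal("ok"))) // cty.StringVal("ok")
    }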
Running the tool this way ensures that we'll always run the version
selected by our go.mod file, rather than whatever happened to be available
in $GOPATH/bin on the system where we're running this.
This change means that some contexts now use a newer version of
staticcheck with additional checks, and so this commit also includes some
changes to quiet the new warnings without any change in overall behavior.
We can also rule out some attribute types as indicating something other
than the legacy SDK.
- Tuple types were not generated at all.
- There were no single object types; the convention was to use a block
  list or set of length 1.
- Maps of objects were not possible to generate, since named blocks were
not implemented.
- Nested collections were not really supported, but when they were
  generated they would only have primitive types.
If structural types are being used, we can be assured that the legacy
SDK SchemaConfigModeAttr is not being used, and the fixup is not needed.
This prevents inadvertent mapping of blocks to structural attributes,
and allows us to skip the fixup overhead when possible.
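A sketch of how those observations might translate into a check (the real implementation may be structured quite differently):

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // couldBeLegacySDKAttr reports whether an attribute type could
    // plausibly have been produced by the legacy SDK's
    // SchemaConfigModeAttr, based on the observations listed above.
    func couldBeLegacySDKAttr(ty cty.Type) bool {
        switch {
        case ty.IsTupleType():
            // Tuple types were never generated by the legacy SDK.
            return false
        case ty.IsObjectType():
            // Single object types were never generated; blocks always
            // became a list or set of length 1.
            return false
        case ty.IsMapType() && ty.ElementType().IsObjectType():
            // Maps of objects require named blocks, which the legacy SDK
            // never implemented.
            return false
        case ty.IsCollectionType() && ty.ElementType().IsCollectionType():
            // Nested collections from the legacy SDK only ever contained
            // primitive types.
            return ty.ElementType().ElementType().IsPrimitiveType()
        default:
            return true
        }
    }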
Go 1.17 includes a breaking change to both net.ParseIP and net.ParseCIDR
functions to reject IPv4 address octets written with leading zeros.
Our use of these functions as part of the various CIDR functions in the
Terraform language doesn't have the same security concerns that the Go
team had in evaluating this change to the standard library, and so we
can't justify an exception to our v1.0 compatibility promises on the same
sort of security grounds that the Go team used to justify their
compatibility exception.
For that reason, we'll now use our own fork of the Go library functions
which has the new check disabled in order to preserve the prior behavior.
We're taking this path, rather than pre-normalizing the IP address before
calling into the standard library, because an additional normalization
layer would be entirely new code and additional complexity, whereas this
fork is relatively minor in terms of code size and avoids any significant
changes to our own calls to these functions.
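To illustrate the behavioral difference the fork preserves (assuming the forked functions keep the pre-1.17 parsing behavior):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Under Go 1.17 and later, the standard library rejects IPv4
        // octets written with leading zeros, so this now returns <nil>:
        fmt.Println(net.ParseIP("010.0.0.1"))

        // Earlier Go versions treated the octet as decimal and returned
        // 10.0.0.1; the forked functions preserve that older behavior.
    }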
Thanks to the Kubernetes team for their prior work on carving out a subset
of the "net" package for their similar backward-compatibility concern.
Our "ipaddr" package here is a lightly-modified fork of their fork, with
only the comments changed to talk about Terraform instead of Kubernetes.
This fork is not intended for use in any other future feature
implementations, because they wouldn't be subject to the same
compatibility constraints as our existing functions. We will use these
forked implementations for new callers only if consistency with the
behavior of the existing functions is a key requirement.
The previous name didn't fit with the naming scheme for addrs types:
The "Abs" prefix typically means that it's an addrs.ModuleInstance
combined with whatever type name appears after "Abs", but this is instead
a ModuleCallOutput combined with an InstanceKey, albeit structured the
other way around for convenience, and so the expected name for this would
instead use the suffix "Instance".
We don't have an "Abs" type corresponding with this one because it would
represent no additional information beyond AbsOutputValue.
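As a purely hypothetical sketch of the shape this description implies (the names and fields here are illustrative, not necessarily the real addrs definitions):

    package addrs

    // ModuleCallInstanceOutput pairs a module call instance with the name
    // of one of its output values. Illustrative only.
    type ModuleCallInstanceOutput struct {
        // Call is the module call combined with an instance key,
        // identifying one particular instance of the called module.
        Call ModuleCallInstance

        // Name is the name of the output value within that module
        // instance.
        Name string
    }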
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.