This is useful for creating a valid placeholder configuration, but not
much else. Most callers should use BuildConfig to build a configuration
that actually has something in it.
Initially the intent here was to tease these apart a little more, since
they don't really share much behavior in core, but in practice untangling
those assumptions in core would take a lot of refactoring right now, so
we'll keep these unified at the configuration layer in the interests of
minimizing disruption to core.
The two types are still kept in separate maps to help reinforce the fact
that they are separate concepts with some behaviors in common, rather than
the same concept.
We initially just mimicked our old practice of using []string for module
paths here, but the addrs package now gives us a pair of types that better
capture the two different kinds of module addresses we are dealing with:
static addresses (nodes in the configuration tree) and dynamic/instance
addresses (which can represent the situation where multiple instances are
created from a single module call).
This distinction still remains rather artificial since we don't yet have
support for count or for_each on module calls, but this is intended to lay
the foundations for that to be added later, and in the meantime just
gives us some handy helper functions for parsing and formatting these
address types.
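To illustrate the distinction (the type and helper names here are assumptions about the addrs package rather than a confirmed API):

    func moduleAddrExamples() {
        // A static address names a node in the configuration tree, e.g.
        // module.network.module.subnets
        static := addrs.Module{"network", "subnets"}

        // An instance address can additionally carry instance keys,
        // anticipating count/for_each on module calls, e.g.
        // module.network.module.subnets[0]
        instance := addrs.ModuleInstance{
            {Name: "network"},
            {Name: "subnets", InstanceKey: addrs.IntKey(0)},
        }

        // Both kinds have formatting helpers, and corresponding parse
        // helpers accept the string form back.
        fmt.Println(static.String(), instance.String())
    }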
Whereas package "configs" deals with the static structure of the
configuration language, this new package "lang" deals with the dynamic
aspects such as expression evaluation.
So far this mainly consists of populating a hcl.EvalContext that contains
the values necessary to evaluate a block or an expression. There is also
special handling here for dynamic block generation using the HCL
"dynblock" extension, which is exposed in the public interface (rather
than hiding it as an implementation detail of EvalBlock) so that the
caller can then extract proper source locations for any result values
using the expanded body.
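A sketch of the flow this enables, assuming body comes from the HCL parser and spec is an hcldec spec derived from a schema; the wrapper function itself is illustrative rather than this package's final API, while dynblock.Expand and hcldec.Decode are the existing HCL2 entry points:

    func evalBlock(body hcl.Body, spec hcldec.Spec) (cty.Value, hcl.Diagnostics) {
        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "var": cty.ObjectVal(map[string]cty.Value{
                    "region": cty.StringVal("us-west-2"),
                }),
            },
            // Functions will be populated as the function table is ported
            // over in later commits.
        }

        // Expand "dynamic" blocks first; the expanded body is kept separate
        // so callers can extract accurate source locations from it later.
        expanded := dynblock.Expand(body, ctx)
        return hcldec.Decode(expanded, spec, ctx)
    }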
This also includes the beginnings of a replacement for the function table
handling that currently lives in the old "config" package, but most of
the functions are not yet ported and so this will expand in subsequent
commits.
This function corresponds to terraform.NewInterpolatedVariable, but built
with HCL2 primitives. It accepts a hcl.Traversal, which is what is
returned from the HCL2 API functions to find which variables are
referenced in a given expression.
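Roughly, usage looks like the following; ParseRef is a stand-in name for the new function (its real name and return type may differ), while Variables is the existing HCL2 API for discovering which variables an expression refers to:

    func refsInExpr(expr hcl.Expression) {
        for _, traversal := range expr.Variables() {
            // Each traversal is something like var.suffix or
            // aws_instance.web.id; ParseRef turns it into an address value.
            ref, diags := ParseRef(traversal)
            if diags.HasErrors() {
                continue
            }
            fmt.Printf("expression refers to %v\n", ref)
        }
    }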
This package is intended to contain all the functionality for parsing,
representing, and formatting addresses of objects within Terraform.
It will eventually subsume the responsibilities of both the
InterpolatedVariable and ResourceAddress types in the "terraform" package,
but for the moment is just a set of types for representing these things,
lacking any way to parse or format them. The remaining functionality
will follow in subsequent commits.
For the moment this is just a lightly-adapted copy of
ModuleTreeDependencies, named ConfigTreeDependencies. The goal is for the
two to coexist while not all callers have been updated yet, and then we
can drop ModuleTreeDependencies and its helper functions altogether in a
later commit.
This can then be used to make "terraform init" and "terraform providers"
work properly with the HCL2-powered configuration loader.
This is a rather-messy, complex change to get the "command" package
building again against the new backend API that was updated for
the new configuration loader.
A lot of this is mechanical rewriting to the new API, but
meta_config.go and meta_backend.go in particular saw some major
changes to interface with the new loader APIs and to deal with
the change in order of steps in the backend API.
The new config loader requires some steps to happen in a different
order, particularly because the schema must be known before the
configuration can be decoded.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
These utility functions are intended to allow concisely loading a
configuration from a fixture directory in a test, bailing out early if
there are any unexpected errors.
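A minimal sketch of what such a helper can look like (the helper name is hypothetical, and the loading calls assume the configs package API):

    func testModuleFromDir(t *testing.T, dir string) *configs.Module {
        t.Helper()
        parser := configs.NewParser(nil)
        mod, diags := parser.LoadConfigDir(dir)
        if diags.HasErrors() {
            t.Fatalf("unexpected errors loading %s: %s", dir, diags.Error())
        }
        return mod
    }

A test can then just write something like mod := testModuleFromDir(t, "test-fixtures/basic") and get straight to its assertions.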
The usual way to use a configschema.Block is to obtain a hcldec spec from
it and then decode an hcl.Body. There are inevitably situations though
where a body has already been decoded into a cty.Value before we know
which schema we need to use.
This new method CoerceValue is intended to deal with this case, applying
the schema to an already-decoded value in what should be an intuitive way
for most situations.
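For example, given a schema and a value that was decoded before the schema was known, the call looks roughly like this (the exact CoerceValue signature shown is an assumption):

    func coerceExample() (cty.Value, error) {
        schema := &configschema.Block{
            Attributes: map[string]*configschema.Attribute{
                "name":  {Type: cty.String, Required: true},
                "count": {Type: cty.Number, Optional: true},
            },
        }

        // Decoded earlier, before we knew which schema applied; "count" is
        // absent and the value is not yet of the schema's implied type.
        raw := cty.ObjectVal(map[string]cty.Value{
            "name": cty.StringVal("example"),
        })

        // CoerceValue conforms the value to the schema, e.g. inserting a
        // null "count" and converting attribute types where possible.
        return schema.CoerceValue(raw)
    }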
We have a few special use-cases in Terraform where an object is
constructed from a mixture of different sources, such as a configuration
file, command line arguments, and environment variables.
To represent this within the HCL model, we introduce a new "synthetic"
HCL body type that just represents a map of values that are interpreted
as attributes.
We then export the previously-private MergeBodies function to allow the
synthetic body to be used as an override for a "real" body, which then
allows us to combine these various sources together while still retaining
the proper source location information for each individual attribute.
Since a synthetic body doesn't actually exist in configuration, it does
not produce source locations that can be turned into source snippets but
we can still use placeholder strings to help the user to understand
which of the many different sources a particular value came from.
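A sketch of how these pieces combine, assuming a SynthBody-style constructor for the synthetic body (the constructor name and the example values are for illustration only):

    func withCLIOverrides(configBody hcl.Body) hcl.Body {
        // Values gathered from command line arguments or environment
        // variables, keyed by attribute name. The "filename" is just a
        // placeholder string used in diagnostics.
        synth := configs.SynthBody("<command-line arguments>", map[string]cty.Value{
            "region": cty.StringVal("us-east-1"),
        })

        // Attributes in the synthetic body override the real body's, while
        // attributes only present in the real body keep their original
        // source ranges for diagnostics.
        return configs.MergeBodies(configBody, synth)
    }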
We will need access to this information in order to render interactive
input prompts, and it will also be useful in returning schema information
to external tools such as text editors that have autocomplete-like
functionality.
The remote API this talks to will be going away very soon, before our next
major release, and so we'll remove the command altogether in that release.
This also removes the "encodeHCL" function, which was used only for
adding a .tfvars-formatted file to the uploaded archive.
Absent values are omitted by the old code we are emulating in HCL, so we
must do the same here in order to avoid breaking assumptions in the
helper/schema layer.
While diagnostics are primarily designed for reporting problems in
configuration, they can also be used for errors and warnings about the
environment Terraform is running in, such as inability to reach a remote
service, etc.
Function Sourceless makes it easy to produce such diagnostics with the
customary summary/detail structure. When these diagnostics are rendered
they will have no source code snippet and will instead just include the
English-language content.
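For example (pingRegistry is a hypothetical connectivity check, and the Sourceless call is assumed to follow the severity/summary/detail pattern described above):

    func checkRegistry() tfdiags.Diagnostics {
        var diags tfdiags.Diagnostics
        if err := pingRegistry(); err != nil { // hypothetical helper
            diags = diags.Append(tfdiags.Sourceless(
                tfdiags.Error,
                "Failed to reach module registry",
                fmt.Sprintf("Could not connect to the registry: %s.", err),
            ))
        }
        return diags
    }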
cty doesn't have a native string representation of a cty.Path because it
is the layer below any particular syntax, but up here in Terraform land
we know that we use HCL native syntax, so we can format the path in an
HCL-ish way that will be familiar to the user.
We'll use this form automatically when such an error is used directly as a
diagnostic, but we also expose the function publicly so that other code
that incorporates errors into diagnostic detail strings can apply the
same formatting.
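The following is a simplified stand-in for that formatter, just to show what "HCL-ish" means here: attribute steps become .name, and index steps become [0] or ["key"].

    func formatCtyPath(path cty.Path) string {
        var buf bytes.Buffer
        for _, step := range path {
            switch s := step.(type) {
            case cty.GetAttrStep:
                fmt.Fprintf(&buf, ".%s", s.Name)
            case cty.IndexStep:
                if s.Key.Type().Equals(cty.String) {
                    fmt.Fprintf(&buf, "[%q]", s.Key.AsString())
                } else {
                    fmt.Fprintf(&buf, "[%s]", s.Key.AsBigFloat().String())
                }
            }
        }
        return buf.String() // e.g. .network.subnets[0].cidr_block
    }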
The usual usage of diagnostics requires us to pass around source location
information to everywhere that might generate a diagnostic, and that is
always the best way to get the most precise diagnostic source locations.
However, it's impractical to require source location information to be
retained in every Terraform subsystem, and so this new idea of "contextual
diagnostics" allows us to separate the generation of a diagnostic from
the resolution of its source location, instead resolving the location
information as a post-processing step once the call stack unwinds to a
place where there is enough context to find it.
This is necessarily a less precise approach than reading the source ranges
directly from the configuration AST, but gives us an alternative to no
diagnostics at all in portions of Terraform where full location
information is not available.
This is a best-effort sort of thing which will get as precise as it can
but may return a range in a parent block if the precise location of a
particular attribute cannot be found. Diagnostics that rely on this
mechanism should include some other contextual information in the detail
message to make up for any loss of precision that results.
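A sketch of the shape of this, where the helper names and signatures are assumptions for illustration: a subsystem with no position information records only the attribute path it is talking about, and a caller that still holds the configuration body resolves the source ranges afterwards.

    func validateName(val cty.Value) tfdiags.Diagnostics {
        var diags tfdiags.Diagnostics
        if val.IsNull() || val.AsString() == "" {
            diags = diags.Append(tfdiags.AttributeValue(
                tfdiags.Error,
                "Invalid name",
                `The "name" argument must not be empty.`,
                cty.Path{cty.GetAttrStep{Name: "name"}},
            ))
        }
        return diags
    }

    // Later, somewhere that still has the config body in hand:
    //   diags = validateName(v).InConfigBody(body)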
In the long run we'd like to offer machine-readable output for more
commands, but for now we'll just start with a tactical feature in
"terraform validate" since this is useful for automated testing scenarios,
editor integrations, etc., and doesn't include any representations of types
that are expected to have breaking changes in the near future.
As part of some light reorganization of our commands, this new
implementation no longer does validation of variables and will thus avoid
the need to spin up a fully-valid context. Instead, its focus is on
validating the configuration itself, regardless of any variables, state,
etc.
This change anticipates us later adding a -validate-only flag to
"terraform plan" which will then take over the related use-case of
checking if a particular execution of Terraform is valid, _including_ the
state, variables, etc.
Although leaving variables out of validate feels pretty arbitrary today
while all of the variable sources are local anyway, we have plans to
allow per-workspace variables to be stored in the backend in future and
at that point it will no longer be possible to fully validate variables
without accessing the backend. The "terraform plan" command explicitly
requires access to the backend, while "terraform validate" is now
explicitly for local-only validation of a single module.
In a future commit this will be extended to do basic type checking of
the configuration based on provider schemas, etc.
We need to share a single config loader across all callers because that
allows us to maintain the source code cache we'll use for snippets in
error messages.
Nothing calls this yet. Callers will be gradually updated away from Module
and Config in subsequent commits.
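The sharing pattern is roughly the following (the struct, field, and configload names are assumptions for illustration):

    type cliMeta struct {
        loader *configload.Loader
    }

    func (m *cliMeta) configLoader() (*configload.Loader, error) {
        if m.loader == nil {
            l, err := configload.NewLoader(&configload.Config{
                ModulesDir: ".terraform/modules",
            })
            if err != nil {
                return nil, err
            }
            m.loader = l
        }
        // Every caller shares m.loader, so its source code cache accumulates
        // all parsed files and can later feed snippets in error messages.
        return m.loader, nil
    }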
If we get a diagnostic message that references a source range, and if the
source code for the referenced file is available, we'll show a snippet of
the source code with the source range highlighted.
At the moment we have no cache of source code, so in practice this
codepath can never be visited. Callers to format.Diagnostic will be
gradually updated in subsequent commits.
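A sketch of the intended call site, assuming format.Diagnostic takes a map of source code keyed by filename (today that map would always be empty, so no snippet appears); the wrap width and color settings are illustrative:

    func showDiagnostic(diag tfdiags.Diagnostic, sources map[string][]byte) {
        colorize := &colorstring.Colorize{
            Colors:  colorstring.DefaultColors,
            Disable: true, // no terminal colors in this sketch
        }
        // 78 is an assumed wrap width for the rendered message.
        fmt.Fprint(os.Stderr, format.Diagnostic(diag, sources, colorize, 78))
    }

Once the shared config loader mentioned earlier is wired in, its source cache can be passed as the sources argument and snippets will start appearing.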