The initial pass of implementation here missed the special case where
ignore_changes can, in the old parser, be set to ["*"] to ignore changes
to all attributes.
Since that syntax is awkward and non-obvious, our new decoder will instead
expect ignore_changes = all, using HCL2's capability to interpret an
expression as a literal keyword. For compatibility with old configurations
we will still accept the ["*"] form but emit a deprecation warning to
encourage moving to the new form.
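A minimal sketch of how that decoding rule can look using HCL2's expression
helpers; the function name and diagnostic wording here are illustrative
rather than the exact decoder code:

    package configs

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    func decodeIgnoreChanges(attr *hcl.Attribute) (ignoreAll bool, diags hcl.Diagnostics) {
        // New form: ignore_changes = all (a naked keyword, not a string).
        if hcl.ExprAsKeyword(attr.Expr) == "all" {
            return true, diags
        }

        // Legacy form: ignore_changes = ["*"], still accepted but deprecated.
        exprs, listDiags := hcl.ExprList(attr.Expr)
        diags = append(diags, listDiags...)
        if len(exprs) == 1 {
            val, valDiags := exprs[0].Value(nil)
            if !valDiags.HasErrors() && val.Type() == cty.String && !val.IsNull() && val.AsString() == "*" {
                diags = append(diags, &hcl.Diagnostic{
                    Severity: hcl.DiagWarning,
                    Summary:  "Deprecated ignore_changes wildcard",
                    Detail:   "The [\"*\"] form is deprecated; write ignore_changes = all instead.",
                    Subject:  attr.Expr.Range().Ptr(),
                })
                return true, diags
            }
        }
        return false, diags
    }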
In our new loader we are changing certain values in configuration to be
naked keywords or references rather than quoted strings as before. Since
many of these have been shown in books, tutorials, and our own
documentation we will make the old forms generate deprecation warnings
rather than errors so that newcomers starting from older documentation
can be eased into the new syntax, rather than getting blocked.
This will also avoid creating a hard compatibility wall for reusable
modules that are already published, allowing them to still be used in
spite of these warnings and then fixed when the maintainer is able.
Previously we were just loading the module and asserting no diagnostics,
but that is not good enough: if we install modules incorrectly, it's still
possible to load an empty configuration successfully.
Now we'll do some basic inspection of the module tree that results from
loading what we installed, to ensure that all of the expected modules
are present at the right locations in the tree.
This will provide the functionality of "terraform init -from-module=...",
which uses the contents of a given module to populate the working
directory.
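For example, running something like
`terraform init -from-module=hashicorp/consul/aws` in an empty directory
would copy that module's source into the working directory before the usual
init steps run (the module address here is just illustrative).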
This mechanism is intended for installing e.g. examples from Terraform
Registry or elsewhere. It's not fully-general since it can't reasonably
install a module from a subdir that refers up to a parent directory, but
that isn't an issue for any reasonable use of this option.
Originally the hope was to use the afero filesystem abstraction for all
loader operations, but since we install modules using go-getter we cannot
(without a lot of refactoring) support vfs for installation.
The vfs use-case is for reading configuration from plan zip files anyway,
and so we have no real reason to support installation into a vfs. For now
at least we will just add the possibility that a loader might not be
install-capable. At the moment we have no non-install-capable loaders, but
we'll add one later once we get to loading configuration from plan files.
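A sketch of that idea with hypothetical type and method names (the real
loader's internals differ):

    package configload

    import "fmt"

    // ModuleInstaller stands in for the go-getter-based installer here; its
    // real shape is different.
    type ModuleInstaller struct{}

    func (i *ModuleInstaller) InstallModules(rootDir string) error {
        // fetch modules and record them in the manifest; elided in this sketch
        return nil
    }

    type Loader struct {
        // installer is nil for loaders that cannot install modules, such as a
        // future loader that reads configuration out of a plan file.
        installer *ModuleInstaller
    }

    func (l *Loader) CanInstallModules() bool {
        return l.installer != nil
    }

    func (l *Loader) InstallModules(rootDir string) error {
        if l.installer == nil {
            return fmt.Errorf("this configuration loader cannot install modules")
        }
        return l.installer.InstallModules(rootDir)
    }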
Unlike the old installer in config/module, the new installer uses new-style
installation directories that include the static module path so that paths
we show in diagnostics will be more meaningful to the user.
As before, we retrieve the entire "package" associated with the given
source string, rather than any given subdirectory directly, because the
retrieved module may contain ../ references into parent directories which
must be resolvable after extraction.
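A rough sketch of that retrieval flow using go-getter's SourceDirSubdir
helper and directory-mode client; the actual installer code differs:

    package moduleinstall

    import (
        "path/filepath"

        getter "github.com/hashicorp/go-getter"
    )

    func fetchPackage(src, instDir string) (string, error) {
        // Split e.g. "git::https://example.com/repo.git//modules/foo" into the
        // package address and the "modules/foo" subdirectory.
        packageAddr, subDir := getter.SourceDirSubdir(src)

        client := getter.Client{
            Src:  packageAddr,
            Dst:  instDir,
            Mode: getter.ClientModeDir,
        }
        if err := client.Get(); err != nil {
            return "", err
        }

        // ../ references within the module remain resolvable because the whole
        // package is present under instDir.
        return filepath.Join(instDir, subDir), nil
    }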
This is not strictly necessary, but since this is not a
performance-critical codepath we'll do it anyway, because it makes life
easier for callers that want to print user-facing logs about the build
process, or that log actions taken as part of a unit test.
This implements enough of the InstallModules method to install local
modules (those with relative paths). "Install" is a bit of an exaggeration
for these, since we just record them in our manifest after verifying that
the source directory exists.
This is a change of behavior relative to the old module installer since
we no longer create a symlink to the module directory inside the
.terraform/modules directory. Instead, we record the module's true
location in our manifest so that the loader will find it later.
The use of a symlink here predated the manifest file. Now that we have a
manifest file the symlinks are redundant. Using the "natural" location of
the module leads to more helpful error messages, since we'll refer to
the module path as the user expects it, rather than to an internal alias.
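A sketch of the local-module case, with hypothetical names for the manifest
record and helper:

    package moduleinstall

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    type moduleRecord struct {
        Key        string // unique key for this module in the static tree
        SourceAddr string // source address exactly as written in configuration
        Dir        string // directory the loader should read the module from
    }

    func installLocalModule(key, sourceAddr, parentDir string, manifest map[string]moduleRecord) error {
        dir := filepath.Join(parentDir, sourceAddr)
        info, err := os.Stat(dir)
        if err != nil || !info.IsDir() {
            return fmt.Errorf("module %q: source directory %s does not exist", key, dir)
        }
        // Record the module's true location; no symlink is created.
        manifest[key] = moduleRecord{Key: key, SourceAddr: sourceAddr, Dir: dir}
        return nil
    }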
Previously the behavior for loading and installing modules was included in
the same package as the representation of the module tree (in the
config/module package).
In our new world, the model of a module tree (now called a "Config") is
included in "configs" along with the Module and File structs. This new
package replaces the loading and installation functionality previously
in config/module with new equivalents that work with the model objects
in "configs".
As of this commit, only the loading functionality is implemented. The
installation functionality will follow in subsequent commits.
BuildConfig creates a module tree by recursively walking through module
calls in the root module and any descendant modules. This is intended to
be used both for the simple case of loading already-installed modules and
the more complex case of installing modules inside "terraform init", both
of which will be dealt with in a separate package.
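The resulting split is roughly the following shape (a sketch; the real
types carry more fields and the exact signatures may differ):

    package configs

    import "github.com/hashicorp/hcl/v2"

    // Module and Config are reduced to placeholders in this sketch; the real
    // structs carry the decoded file contents and tree bookkeeping.
    type Module struct{}

    type Config struct {
        Module   *Module
        Children map[string]*Config
        Path     []string // static module path of this node
    }

    // ModuleRequest describes one module call that BuildConfig needs resolved.
    type ModuleRequest struct {
        Name       string  // label on the "module" block
        SourceAddr string  // the module call's source argument
        Parent     *Config // node in the tree making the call
    }

    // ModuleWalker is implemented by the loading/installing package, so that
    // BuildConfig stays ignorant of how module source code is obtained.
    type ModuleWalker interface {
        LoadModule(req *ModuleRequest) (*Module, hcl.Diagnostics)
    }

    func BuildConfig(root *Module, walker ModuleWalker) (*Config, hcl.Diagnostics) {
        cfg := &Config{Module: root}
        // The real implementation walks root's module calls, asks the walker
        // for each child module, and recurses to populate cfg.Children.
        return cfg, nil
    }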
mergeBody is an hcl.Body implementation that deals with our override file
merging behavior for the portions of the configuration that are not
processed until full eval time.
Mimicking the behavior of our old config merge implementation from the
"config" package, the rules here are:
- Attributes in the override body hide attributes of the same name in
the base body.
- Any block in the override body hides all blocks with the same type name
that appear in the base body.
This is tested by a new test for the overriding of module arguments, which
asserts the correct behavior of the merged body as part of its work.
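The following sketch illustrates those two rules in terms of
hcl.BodyContent; the real mergeBody is an hcl.Body implementation that
applies them lazily when content is requested:

    package configs

    import "github.com/hashicorp/hcl/v2"

    func mergeContent(base, override *hcl.BodyContent) *hcl.BodyContent {
        merged := &hcl.BodyContent{
            Attributes:       make(hcl.Attributes),
            MissingItemRange: base.MissingItemRange,
        }

        // Rule 1: an override attribute hides the base attribute of the same name.
        for name, attr := range base.Attributes {
            merged.Attributes[name] = attr
        }
        for name, attr := range override.Attributes {
            merged.Attributes[name] = attr
        }

        // Rule 2: any override block hides all base blocks of the same type.
        overriddenTypes := make(map[string]bool)
        for _, block := range override.Blocks {
            overriddenTypes[block.Type] = true
        }
        for _, block := range base.Blocks {
            if !overriddenTypes[block.Type] {
                merged.Blocks = append(merged.Blocks, block)
            }
        }
        merged.Blocks = append(merged.Blocks, override.Blocks...)

        return merged
    }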
Some of the fields in our config structs are either mandatory in primary
files or there is a default value that we apply if absent.
Unfortunately, override files impose the additional constraints that we be
allowed to omit required fields (which have presumably already been set in
the primary files) and that we be able to distinguish between a default
value and omitting a value entirely.
Since most of our fields were already acceptable for override files, here
we just add some new fields to deal with the few cases where special
handling is required and a helper function to disable the "Required" flag
on attributes in a given schema.
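The helper amounts to something like this sketch (the name is illustrative):

    package configs

    import "github.com/hashicorp/hcl/v2"

    // schemaForOverrides copies a schema with every attribute made optional,
    // since required arguments have presumably been set in the primary files.
    func schemaForOverrides(schema *hcl.BodySchema) *hcl.BodySchema {
        ret := &hcl.BodySchema{
            Attributes: make([]hcl.AttributeSchema, len(schema.Attributes)),
            Blocks:     schema.Blocks,
        }
        for i, attrS := range schema.Attributes {
            attrS.Required = false
            ret.Attributes[i] = attrS
        }
        return ret
    }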
This method wraps LoadConfigFile to load all of the .tf and .tf.json files
in a given directory and then bundle them together into a Module object.
This function also deals with the distinction between primary and override
files, first appending together the primary files in lexicographic order
by filename, and then merging in override files in the same order.
The merging behavior is not fully implemented as of this commit, and so
will be expanded in future commits.
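A sketch of the classification and ordering, assuming the usual override
naming convention (override.tf, *_override.tf, and their .json variants)
and that the file list has already been filtered to .tf and .tf.json names:

    package configs

    import (
        "sort"
        "strings"
    )

    func splitConfigFiles(filenames []string) (primary, override []string) {
        sort.Strings(filenames) // lexicographic order gives deterministic merging
        for _, name := range filenames {
            base := strings.TrimSuffix(name, ".json")
            base = strings.TrimSuffix(base, ".tf")
            if base == "override" || strings.HasSuffix(base, "_override") {
                override = append(override, name)
            } else {
                primary = append(primary, name)
            }
        }
        return primary, override
    }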
Much like TestParserLoadConfigFileSuccess, this is intended to be an
easy-to-maintain collection of bad examples to test different permutations
of our error handling.
As with TestParserLoadConfigFileSuccess, we should also have more specific
tests alongside this that check that the error outcome is what was
expected, since this test just accepts any error and may thus not be
testing what we think it is.
This test is intended to be an easy-to-maintain catalog of good examples
that we can use to catch certain parsing or decoding regressions easily.
It's not a fully-comprehensive test since it doesn't check the result
of decoding, instead just accepting any decode that completes without
errors. However, an easy-to-maintain test like this is a good complement
to some more specialized tests since we can easily collect good examples
over time and just add them in here.
This is a first pass of decoding of the main Terraform configuration file
format. It hasn't yet been tested with any real-world configurations, so
it will need to be revised further as we test it more thoroughly.
These types represent the individual elements within configuration, the
modules a configuration is made of, and the configuration (static module
tree) itself.
This method loads a "values file" -- also known as a "tfvars file" -- and
returns the values found inside.
A values file is an HCL file (in either native or JSON syntax) whose
top-level body is treated as a set of arbitrary key/value pairs whose
values may not depend on any variables or functions.
We will load values files through a configs.Parser -- even though values
files are not strictly-speaking part of configuration -- because this
causes them to be registered in our source code cache so that we can
generate source code snippets if we need to report any diagnostics.
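The core of that decoding is sketched below, assuming hclparse for parsing;
the real method also goes through the parser so the file lands in the
source cache:

    package configs

    import (
        "strings"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclparse"
        "github.com/zclconf/go-cty/cty"
    )

    func loadValuesFile(p *hclparse.Parser, path string, src []byte) (map[string]cty.Value, hcl.Diagnostics) {
        var f *hcl.File
        var diags hcl.Diagnostics
        if strings.HasSuffix(path, ".json") {
            f, diags = p.ParseJSON(src, path)
        } else {
            f, diags = p.ParseHCL(src, path)
        }
        if f == nil {
            return nil, diags
        }

        vals := make(map[string]cty.Value)
        attrs, attrDiags := f.Body.JustAttributes()
        diags = append(diags, attrDiags...)
        for name, attr := range attrs {
            // A nil EvalContext means no variables or functions are available,
            // so only constant expressions can succeed here.
            val, valDiags := attr.Expr.Value(nil)
            diags = append(diags, valDiags...)
            vals[name] = val
        }
        return vals, diags
    }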
configs.Parser is the entry-point for this package, providing functions to
load and parse HCL-based configuration files.
We use the library "afero" to decouple the parser from the physical OS
filesystem, which here allows us to easily use an in-memory filesystem
for testing and will, in future, allow us to read files from more unusual
places, such as configuration embedded in a plan file.
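The parser is roughly the following shape (a sketch; exact fields may
differ):

    package configs

    import (
        "github.com/hashicorp/hcl/v2/hclparse"
        "github.com/spf13/afero"
    )

    // Parser wraps an HCL parser together with a filesystem abstraction, so
    // tests can use an in-memory filesystem and later callers can read files
    // from unusual places such as a plan file.
    type Parser struct {
        fs afero.Afero
        p  *hclparse.Parser
    }

    // NewParser creates a Parser for the given filesystem, defaulting to the
    // real OS filesystem if fs is nil.
    func NewParser(fs afero.Fs) *Parser {
        if fs == nil {
            fs = afero.NewOsFs()
        }
        return &Parser{
            fs: afero.Afero{Fs: fs},
            p:  hclparse.NewParser(),
        }
    }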
There's a lot of complexity in our existing "config" package that results
from our approach to handling configuration with HCL and HIL. A lot of
that functionality is no longer needed -- or must work in a significantly
different way -- for HCL2.
The new package "configs", which is named following the convention of some
Go standard library packages like "strings", is a re-imagination of some
of the functionality from the "config" package for an HCL2-only world.
The scope of this package will be slightly smaller than "config", since
it only deals with config loading and not with expression evaluation.
Another package "lang" (mentioned in the docstring here but not yet added)
will deal with the more dynamic portions of configuration handling,
including populating an hcl.EvalContext to evaluate expressions.
There's no reason to retry around the execution of remote scripts. We've
already established a connection, so the only thing that could happen here
is continually retrying the upload or execution of a script that can't
succeed.
This also simplifies the streaming output from the command, which
doesn't need such explicit synchronization. Closing the output pipes is
sufficient to stop the copyOutput functions, and they don't close over
any values that are accessed again after the command executes.
Every provisioner that uses a communicator implements its own retryFunc.
Take the remote-exec implementation (since it's the most complete) and
put it in the communicator package for each provisioner to use.
Add a public interface `communicator.Fatal`, which can wrap an error to
indicate a fatal error that should not be retried.
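A sketch of the shared helper and the Fatal wrapper; the signatures here
are assumptions and may differ from the actual communicator package:

    package communicator

    import (
        "context"
        "time"
    )

    // Fatal marks an error as permanent: Retry stops immediately instead of
    // trying again.
    type Fatal interface {
        FatalError() error
    }

    // Retry calls f until it succeeds, returns a Fatal error, or ctx is done.
    func Retry(ctx context.Context, f func() error) error {
        var lastErr error
        for {
            select {
            case <-ctx.Done():
                if lastErr != nil {
                    return lastErr
                }
                return ctx.Err()
            default:
            }

            err := f()
            if err == nil {
                return nil
            }
            if fatal, ok := err.(Fatal); ok {
                return fatal.FatalError()
            }
            lastErr = err
            time.Sleep(2 * time.Second) // simple fixed delay for the sketch
        }
    }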
Add `host_key` and `bastion_host_key` fields to the ssh communicator
config for strict host key checking.
Both fields expect the contents of an OpenSSH-formatted public key. This
key can either be the remote host's public key, or the public key of the
CA which signed the remote host certificate.
Support for signed certificates is limited, because the provisioner
usually connects to a remote host by IP address rather than hostname, so
the certificate would need to be signed appropriately. Connecting via
a hostname currently needs to be done through a secondary provisioner,
such as one attached to a null_resource.
Currently the provisioner will fail if the `hab` user already exists on
the target system.
This adds a check to see if we need to create the user before trying to
add it.
Fixes #17159
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
This change allows the Habitat supervisor service name to be
configurable. Currently it is hard coded to `hab-supervisor`.
Signed-off-by: Nolan Davidson <ndavidson@chef.io>