Use the new StateLocker field to provide a wrapper for locking the state
during terraform.Context creation in the commands that directly
manipulate the state.
Simplify the use of clistate.Lock by creating a clistate.Locker
instance, which stores the context of locking a state, to allow unlock
to be called without knowledge of how the state was locked.
This allows the backend code to bring the needed UI methods to the point
where the state is locked, and still unlock the state from an outer
scope.
Provide a NoopLocker as well, so that callers can always call Unlock
without verifying the status of the lock.
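As a rough sketch of the shape this takes (the interface here is
illustrative; the real clistate types also carry UI and timeout settings
supplied when the Locker is created):

```go
package clistate

import "github.com/hashicorp/terraform/state"

// Locker is a hypothetical sketch of the interface described above.
type Locker interface {
	// Lock acquires a lock on the given state and remembers how it was
	// acquired, so that Unlock can be called later without that context.
	Lock(s state.State, reason string) error

	// Unlock releases whatever lock was acquired by Lock, if any.
	Unlock() error
}

// noopLocker satisfies Locker without doing anything, so callers can always
// call Unlock without checking how (or whether) the state was locked.
type noopLocker struct{}

func (noopLocker) Lock(state.State, string) error { return nil }
func (noopLocker) Unlock() error                  { return nil }
```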
Add the StateLocker field to the backend.Operation, so that the state
lock can be carried between the different function scopes of the backend
code. This will allow the backend context to lock the state before it's
read, while allowing the different operations to unlock the state when
they complete.
Use the new StateLocker field to provide a wrapper for locking the state
during terraform.Context creation. We can then remove all the state
locking code from individual operations, and unlock the state in one place
inside the main Operation method.
Add the StateLocker field so that the state lock can be carried between
the different function scopes of the backend code. This will allow the
backend context to lock the state before it's read, while allowing the
different operations to unlock the state when they complete.
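To illustrate how the lock travels between scopes, here is a hedged sketch
(the StateLocker field name follows the description above, but everything
else is made up for illustration):

```go
package backend

import "log"

// Locker stands in for the clistate.Locker sketched earlier; only Unlock
// is needed here. This snippet is illustrative, not the real backend code.
type Locker interface {
	Unlock() error
}

// Operation carries the settings for a single backend operation; the real
// struct has many more fields.
type Operation struct {
	// StateLocker holds the lock taken when the state was first read, so
	// that the operation can release it when it completes.
	StateLocker Locker
}

func runOperation(op *Operation) {
	// The state was locked before the terraform.Context was created, so
	// the operation only needs to unlock it on the way out.
	defer func() {
		if err := op.StateLocker.Unlock(); err != nil {
			log.Printf("[ERROR] failed to unlock state: %s", err)
		}
	}()

	// ... perform the plan/apply/refresh work ...
}
```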
The test target will already cover vet, since it's run as part of the
test command now.
Also remove the `go test -i` since it's no longer needed with the new
build cache.
provisioner. Also fixes an issue where channels and URLs are
not honored in the initial package install.
Signed-off-by: Rob Campbell <rcampbell@chef.io>
Fix the now failing state unlock test by reporting the correct ID.
The ID used by GCS is the generation number of the info object, which
isn't known until the info is already written out. While we can't get
the correct ID from the info data for the error message, we can update
it with the generation number after it's read.
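A rough sketch of the idea, using the cloud.google.com/go/storage client
(the function and object names are illustrative, not the real backend
code):

```go
package gcs

import (
	"context"
	"strconv"

	"cloud.google.com/go/storage"
)

// lockID illustrates why the lock ID can only be determined after the lock
// info object exists: the ID is the generation number that GCS assigns
// when the object is written.
func lockID(ctx context.Context, bucket *storage.BucketHandle, lockFile string) (string, error) {
	attrs, err := bucket.Object(lockFile).Attrs(ctx)
	if err != nil {
		return "", err
	}
	return strconv.FormatInt(attrs.Generation, 10), nil
}
```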
This adds a general test to verify that a remote state backend returns
the expected error type when it cannot lock a state. It then extracts
the ID reported in the error, and attempts to unlock the state using
that ID, which simulates the force-unlock scenario. This is a separate
test, since not all backends have persistent locks that can be unlocked
later.
We also split out the backend test to be called individually as needed.
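A hedged sketch of the shared check (the state.LockError shape follows the
state package of this era, but the helper shown here is illustrative):

```go
package backend

import (
	"testing"

	"github.com/hashicorp/terraform/state"
)

// testLockReportsIDAndForceUnlock is an illustrative helper: s1 and s2 are
// two independently-created state managers for the same remote state.
func testLockReportsIDAndForceUnlock(t *testing.T, s1, s2 state.State) {
	if _, err := s1.Lock(&state.LockInfo{Operation: "test"}); err != nil {
		t.Fatalf("initial lock failed: %s", err)
	}

	// A second lock attempt must fail with a LockError that reports the ID
	// of the lock currently being held.
	_, err := s2.Lock(&state.LockInfo{Operation: "test"})
	lockErr, ok := err.(*state.LockError)
	if !ok || lockErr.Info == nil || lockErr.Info.ID == "" {
		t.Fatalf("expected a LockError carrying an ID, got: %v", err)
	}

	// Unlocking with that ID simulates "terraform force-unlock".
	if err := s2.Unlock(lockErr.Info.ID); err != nil {
		t.Fatalf("force-unlock failed: %s", err)
	}
}
```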
This new argument allows overriding of the working directory of the child process, with the default still being the working directory of Terraform itself.
This is rarely needed, but sometimes tests need to create temporary files as part of their operation. This should be used sparingly, since it prevents the pro-active cleanup of the temporary working directory.
The initial pass of implementation here missed the special case where
ignore_changes can, in the old parser, be set to ["*"] to ignore changes
to all attributes.
Since that syntax is awkward and non-obvious, our new decoder will instead
expect ignore_changes = all, using HCL2's capability to interpret an
expression as a literal keyword. For compatibility with old configurations
we will still accept the ["*"] form but emit a deprecation warning to
encourage moving to the new form.
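A hedged sketch of how the decoder can tell the two forms apart using the
HCL2 helpers (illustrative only, not the actual decoder; the import paths
shown are those of the current hcl/v2 release):

```go
package configs

import (
	"github.com/hashicorp/hcl/v2"
	"github.com/zclconf/go-cty/cty"
)

// decodeIgnoreChanges distinguishes ignore_changes = all from the legacy
// ["*"] form; the real decoder also validates the listed traversals.
func decodeIgnoreChanges(expr hcl.Expression) (ignoreAll bool, diags hcl.Diagnostics) {
	// New form: a naked "all" keyword.
	if hcl.ExprAsKeyword(expr) == "all" {
		return true, nil
	}

	// Legacy form: a one-element list containing the string "*".
	exprs, listDiags := hcl.ExprList(expr)
	diags = append(diags, listDiags...)
	if len(exprs) == 1 {
		v, _ := exprs[0].Value(nil)
		if v.Type() == cty.String && !v.IsNull() && v.IsKnown() && v.AsString() == "*" {
			diags = append(diags, &hcl.Diagnostic{
				Severity: hcl.DiagWarning,
				Summary:  "Deprecated ignore_changes wildcard",
				Detail:   "Use ignore_changes = all instead of [\"*\"].",
				Subject:  expr.Range().Ptr(),
			})
			return true, diags
		}
	}
	return false, diags
}
```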
In our new loader we are changing certain values in configuration to be
naked keywords or references rather than quoted strings as before. Since
many of these have been shown in books, tutorials, and our own
documentation we will make the old forms generate deprecation warnings
rather than errors so that newcomers starting from older documentation
can be eased into the new syntax, rather than getting blocked.
This will also avoid creating a hard compatibility wall for reusable
modules that are already published, allowing them to still be used in
spite of these warnings and then fixed when the maintainer is able.
Previously we were just loading the module and asserting no diagnostics,
but that is not really good enough: if we install modules incorrectly, we
may still be able to load an empty configuration successfully.
Now we'll do some basic inspection of the module tree that results from
loading what we installed, to ensure that all of the expected modules
are present at the right locations in the tree.
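Something along these lines, assuming the Config tree exposes its children
keyed by module call name (the module names here are made up for
illustration):

```go
package configload

import (
	"testing"

	"github.com/hashicorp/terraform/configs"
)

// assertExpectedModules is a hypothetical fragment of the inspection
// described above.
func assertExpectedModules(t *testing.T, cfg *configs.Config) {
	child, ok := cfg.Children["child_a"]
	if !ok {
		t.Fatal("root module does not include child_a")
	}
	if _, ok := child.Children["child_b"]; !ok {
		t.Fatal("child_a does not include child_b")
	}
}
```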
This will provide the functionality of "terraform init -from-module=...",
which uses the contents of a given module to populate the working
directory.
This mechanism is intended for installing e.g. examples from Terraform
Registry or elsewhere. It's not fully general since it can't reasonably
install a module from a subdir that refers up to a parent directory, but
that isn't an issue for all reasonable uses of this option.
Originally the hope was to use the afero filesystem abstraction for all
loader operations, but since we install modules using go-getter we cannot
(without a lot of refactoring) support vfs for installation.
The vfs use-case is for reading configuration from plan zip files anyway,
and so we have no real reason to support installation into a vfs. For now
at least we will just add the possibility that a loader might not be
install-capable. At the moment we have no non-install-capable loaders, but
we'll add one later once we get to loading configuration from plan files.
Unlike the old installer in config/module, this uses new-style
installation directories that include the static module path so that paths
we show in diagnostics will be more meaningful to the user.
As before, we retrieve the entire "package" associated with the given
source string, rather than any given subdirectory directly, because the
retrieved module may contain ../ references into parent directories which
must be resolvable after extraction.
This is not strictly necessary, but since this is not a
performance-critical codepath we'll do this because it makes life easier
for callers that want to print out user-facing logs about the build process,
or who are logging actions taken as part of a unit test.
This implements enough of the InstallModules method to install local
modules (those with relative paths). "Install" is a bit of an exaggeration
for these, since we just record them in our manifest after verifying that
the source directory exists.
This is a change of behavior relative to the old module installer since
we no longer create a symlink to the module directory inside the
.terraform/modules directory. Instead, we record the module's true
location in our manifest so that the loader will find it later.
The use of a symlink here predated the manifest file. Now that we have a
manifest file the symlinks are redundant. Using the "natural" location of
the module leads to more helpful error messages, since we'll refer to
the module path as the user expects it, rather than to an internal alias.
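As a rough sketch, the manifest entry for a local module might look
something like this (the field names and JSON keys here are illustrative,
not the real manifest format):

```go
package configload

// moduleRecord is a hypothetical sketch of what the manifest stores for
// each installed module.
type moduleRecord struct {
	// Key identifies the module's position in the tree, e.g. "vpc.subnets".
	Key string `json:"Key"`

	// SourceAddr is the source string given in the module call.
	SourceAddr string `json:"Source"`

	// Dir is where the module's configuration lives on disk. For a local
	// module this is its natural location, with no symlink under
	// .terraform/modules.
	Dir string `json:"Dir"`
}
```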
Previously the behavior for loading and installing modules was included in
the same package as the representation of the module tree (in the
config/module package).
In our new world, the model of a module tree (now called a "Config") is
included in "configs" along with the Module and File structs. This new
package replaces the loading and installation functionality previously
in config/module with new equivalents that work with the model objects
in "configs".
As of this commit, only the loading functionality is implemented. The
installation functionality will follow in subsequent commits.
BuildConfig creates a module tree by recursively walking through module
calls in the root module and any descendant modules. This is intended to
be used both for the simple case of loading already-installed modules and
the more complex case of installing modules inside "terraform init", both
of which will be dealt with in a separate package.
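The recursion can be sketched roughly like this (a deliberately simplified,
hypothetical version; the real package uses richer types and returns
diagnostics rather than plain errors):

```go
package configtree

// Module is a stand-in for a loaded module and the module calls it makes.
type Module struct {
	ModuleCalls map[string]string // call name -> source address
}

// Config is one node in the module tree.
type Config struct {
	Module   *Module
	Children map[string]*Config
}

// ModuleWalker is whatever knows how to produce the module for a call,
// e.g. by finding it already installed or by installing it first.
type ModuleWalker interface {
	LoadModule(sourceAddr string) (*Module, error)
}

// BuildConfig walks the module calls in root, recursing into each child.
func BuildConfig(root *Module, walker ModuleWalker) (*Config, error) {
	cfg := &Config{Module: root, Children: map[string]*Config{}}
	for name, src := range root.ModuleCalls {
		child, err := walker.LoadModule(src)
		if err != nil {
			return nil, err
		}
		childCfg, err := BuildConfig(child, walker) // descend into children
		if err != nil {
			return nil, err
		}
		cfg.Children[name] = childCfg
	}
	return cfg, nil
}
```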
mergeBody is an hcl.Body implementation that deals with our override file
merging behavior for the portions of the configuration that are not
processed until full eval time.
Mimicking the behavior of our old config merge implementation from the
"config" package, the rules here are:
- Attributes in the override body hide attributes of the same name in
the base body.
- Any block in the override body hides all blocks with the same type name
that appear in the base body.
This is tested by a new test for the overriding of module arguments, which
asserts the correct behavior of the merged body as part of its work.
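The attribute rule can be sketched on its own like this (illustrative only;
the real mergeBody implements the full hcl.Body interface so the merge
happens lazily at evaluation time):

```go
package configs

import "github.com/hashicorp/hcl/v2"

// mergeAttributes applies the first rule above: any attribute present in
// the override body hides the attribute of the same name in the base body.
func mergeAttributes(base, override hcl.Attributes) hcl.Attributes {
	ret := make(hcl.Attributes, len(base))
	for name, attr := range base {
		ret[name] = attr
	}
	for name, attr := range override {
		ret[name] = attr // override wins on name collisions
	}
	return ret
}
```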
Some of the fields in our config structs are either mandatory in primary
files or have a default value that we apply when they are absent.
Unfortunately, override files impose the additional constraint that we
be allowed to omit required fields (which have presumably already been
set in the primary files) and that we are able to distinguish between a
default value and omitting a value entirely.
Since most of our fields were already acceptable for override files, here
we just add some new fields to deal with the few cases where special
handling is required and a helper function to disable the "Required" flag
on attributes in a given schema.
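The helper amounts to something like the following, assuming the hcl/v2
schema types:

```go
package configs

import "github.com/hashicorp/hcl/v2"

// schemaForOverrides returns a copy of the given schema with the Required
// flag cleared on every attribute, so the same schema can be reused when
// decoding override files.
func schemaForOverrides(schema *hcl.BodySchema) *hcl.BodySchema {
	ret := &hcl.BodySchema{
		Attributes: make([]hcl.AttributeSchema, len(schema.Attributes)),
		Blocks:     schema.Blocks,
	}
	for i, attrS := range schema.Attributes {
		attrS.Required = false
		ret.Attributes[i] = attrS
	}
	return ret
}
```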
This method wraps LoadConfigFile to load all of the .tf and .tf.json files
in a given directory and then bundle them together into a Module object.
This function also deals with the distinction between primary and override
files, first appending together the primary files in lexicographic order
by filename, and then merging in override files in the same order.
The merging behavior is not fully implemented as of this commit, and so
will be expanded in future commits.
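The classification and ordering described above can be sketched like this
(the override.tf / *_override.tf naming rule is the documented one, but
treat the rest as illustrative):

```go
package configload

import (
	"path/filepath"
	"sort"
	"strings"
)

// classifyConfigFiles splits .tf and .tf.json files into primary and
// override sets, each sorted by filename for a deterministic merge order.
func classifyConfigFiles(filenames []string) (primary, override []string) {
	for _, fn := range filenames {
		var ext string
		switch {
		case strings.HasSuffix(fn, ".tf"):
			ext = ".tf"
		case strings.HasSuffix(fn, ".tf.json"):
			ext = ".tf.json"
		default:
			continue // not a Terraform configuration file
		}

		base := filepath.Base(fn[:len(fn)-len(ext)])
		if base == "override" || strings.HasSuffix(base, "_override") {
			override = append(override, fn)
		} else {
			primary = append(primary, fn)
		}
	}
	sort.Strings(primary)
	sort.Strings(override)
	return primary, override
}
```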
Much like TestParserLoadConfigFileSuccess, this is intended to be an
easy-to-maintain collection of bad examples to test different permutations
of our error handling.
As with TestParserLoadConfigFileSuccess, we should also have more specific
tests alongside this that check that the error outcome is what was
expected, since this test just accepts any error and may thus not be
testing what we think it is.