On Unix-derived systems a directory must be marked as "executable" in
order to be accessible, so our previous mode of 0660 was insufficient and
would cause a failure whenever the installer happened to be the one
creating the plugins directory for the first time.
Now we'll make it executable and readable for all but only writable by
the same user/group. For consistency, we also make the selections file
itself readable by everyone. In both cases, the umask we are run with may
further constrain these modes.
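A minimal sketch of the intended result (the path and variable names here
are placeholders, not the installer's real ones), with the umask still
free to subtract permission bits:

    package example

    import (
        "io/ioutil"
        "os"
    )

    // writeSelections sketches the new modes: 0775 keeps the plugins
    // directory traversable and readable by everyone while writable only
    // by the owning user/group, and 0664 makes the selections file
    // world-readable. The process umask may further restrict both.
    func writeSelections(pluginsDir, selectionsFile string, data []byte) error {
        if err := os.MkdirAll(pluginsDir, 0775); err != nil {
            return err
        }
        return ioutil.WriteFile(selectionsFile, data, 0664)
    }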
These tests still aren't passing, but this is just enough updating to make
the test program compile successfully after the refactoring related to
provider installation. They now use the mock provider source offered by
the getproviders package, which is similar to but not quite the same as
mocking the entire installer as these tests used to do, so many of them
need further adjustment before they are testing what they intended to test
under this new architecture.
Subsequent commits will gradually repair the failing tests.
These are some helpers to support unit testing in other packages, allowing
callers to exercise provider installation mechanisms without hitting any
real upstream source or having to prepare local package directories.
MockSource is a Source implementation that just scans over a provided
static list of packages and returns whatever matches.
FakePackageMeta is a shorthand for concisely constructing a
realistic-looking but uninstallable PackageMeta, probably for use with
MockSource.
FakeInstallablePackageMeta is similar to FakePackageMeta but also goes to
the trouble of creating a real temporary archive on local disk so that
the resulting package meta is pointing to something real on disk. This
makes the result more useful to the caller, but in return they get the
responsibility to clean up the temporary file once the test is over.
Nothing is using these yet.
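As a self-contained illustration of the MockSource idea (the types below
are simplified stand-ins, not the getproviders package's real
definitions), the source just filters a static list and returns whatever
matches:

    package example

    // Simplified stand-in for a provider package description.
    type PackageMeta struct {
        Provider string
        Version  string
    }

    // MockSource scans a fixed, caller-provided list of packages.
    type MockSource struct {
        packages []PackageMeta
    }

    // AvailableVersions returns the versions recorded for one provider,
    // without talking to any real upstream source.
    func (s *MockSource) AvailableVersions(provider string) []string {
        var versions []string
        for _, pkg := range s.packages {
            if pkg.Provider == provider {
                versions = append(versions, pkg.Version)
            }
        }
        return versions
    }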
* terraform: add helper functions for creating test state
testSetResourceInstanceCurrent and testSetResourceInstanceTainted are
wrapper functions around states.Module.SetResourceInstanceCurrent()
used to set a resource in state. They work with current, non-deposed
resources with no dependencies.
testSetResourceInstanceDeposed can be used to set a deposed resource in state.
* terraform: update all tests to use modern providers and state
This commit reverts an earlier change which automatically converted
provider strings to legacy provider FQNs. It has become apparent that a
state upgrade step will be required before upgrading to v0.13.
These are cases where we were using the legacy string only to produce a
message to the user or to write to the log. It's enough to make some
basic Terraform commands like "terraform validate" not panic and get far
enough along to see that provider startup is working.
Back when we first introduced provider versioning in Terraform 0.10, we
did the provider version resolution in terraform.NewContext because we
weren't sure yet how exactly our versioning model was going to play out
(whether different versions could be selected per provider configuration,
for example) and because we were building around the limitations of our
existing filesystem-based plugin discovery model.
However, the new installer codepath is now able to do all of the
selections up front during installation, so we don't need such a heavy
inversion-of-control abstraction to get this done: the command package can
select the exact provider versions and pass their factories directly
to terraform.NewContext as a simple static map.
The result of this commit is that CLI commands other than "init" are now
able to consume the local cache directory and selections produced by the
installation process in "terraform init", passing all of the selected
providers down to the terraform.NewContext function for use in
implementing the main operations.
This commit is just enough to get the providers passing into the
terraform.Context. There's still plenty more to do here, including
repairing all of the additional tests this change has broken.
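Roughly, the hand-off looks like the sketch below. This assumes a
Providers factory map on terraform.ContextOpts keyed by provider address;
the exact field and helper names may differ from the real command-package
code:

    package example

    import (
        "github.com/hashicorp/terraform/addrs"
        "github.com/hashicorp/terraform/providers"
        "github.com/hashicorp/terraform/terraform"
    )

    // newContextWithSelectedProviders sketches how the command package
    // can hand a static factory map, built from the init-time selections,
    // straight to terraform.NewContext.
    func newContextWithSelectedProviders() (*terraform.Context, error) {
        opts := &terraform.ContextOpts{
            Providers: map[addrs.Provider]providers.Factory{
                // One entry per selected provider; a real factory would
                // launch the cached plugin package rather than return nil.
                addrs.NewDefaultProvider("null"): func() (providers.Interface, error) {
                    return nil, nil
                },
            },
        }
        ctx, diags := terraform.NewContext(opts)
        if diags.HasErrors() {
            return nil, diags.Err()
        }
        return ctx, nil
    }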
The introduction of a hierarchical addressing scheme for providers gives
us an opportunity to make more explicit the special case of "built-in"
providers.
Thus far we've just had a special case in the "command" package whereby
the provider named "terraform" is handled differently from all others,
though there's nothing especially obvious about that in the UI.
Moving forward we'll put such "built-in" providers under the special
namespace terraform.io/builtin/terraform, which will be visible in the UI
as being different than the other providers and we can use the namespace
itself (rather than a particular name) as the trigger for our special-case
behaviors around built-in plugins.
We have no plans to introduce any built-in providers other than
"terraform" in the foreseeable future, so any others will produce an
error.
This commit just establishes the addressing convention, without making use
of it anywhere yet. Subsequent commits will make the provider installer
and resolver codepaths aware of it, replacing existing checks for the
provider just being called "terraform".
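For illustration only (assuming helpers along the lines of
addrs.NewBuiltInProvider and an IsBuiltIn method on addrs.Provider), the
namespace itself becomes the trigger:

    package example

    import "github.com/hashicorp/terraform/addrs"

    // builtinTerraform yields the address terraform.io/builtin/terraform.
    func builtinTerraform() addrs.Provider {
        return addrs.NewBuiltInProvider("terraform")
    }

    // isSupportedBuiltin checks the special namespace rather than the
    // bare name "terraform"; anything else under it would be rejected.
    func isSupportedBuiltin(p addrs.Provider) bool {
        return p.IsBuiltIn() && p.Type == "terraform"
    }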
Just as with the old installer mechanism, our goal is that explicit
provider installation is the only way that new provider versions can be
selected.
To achieve that, we conclude each call to EnsureProviderVersions by
writing a selections lock file into the target directory. A later caller
can then recall the selections from that file by calling SelectedPackages,
which both ensures that it selects the same set of versions and also
verifies that the checksums recorded by the installer still match.
This new selections.json file has a different layout than our old
plugins.json lock file. Not only does it use a different hashing algorithm
than before, we also record explicitly which version of each provider
was selected. In the old model, we'd repeat normal discovery when
reloading the lock file and then fail with a confusing error message if
discovery happened to select a different version, but now we'll be able
to distinguish a package that's gone missing since installation (which
previously could have caused a different available version to be selected)
from a package that has been modified.
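Illustratively (this is not the real selections.json schema, just the
information described above), each entry ties a provider to both its
selected version and its checksum:

    package example

    // selectionRecord is a hypothetical shape for one entry: the exact
    // version chosen at install time plus the hash recorded for it.
    type selectionRecord struct {
        Version string `json:"version"`
        Hash    string `json:"hash"`
    }

    // selections maps provider addresses to their records.
    type selections struct {
        Selections map[string]selectionRecord `json:"selections"`
    }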
For the old-style provider cache directory model we hashed the individual
executable file for each provider. That's no longer appropriate because
we're giving each provider package a whole directory to itself where it
can potentially have many files.
This therefore introduces a new directory-oriented hashing scheme, which
just uses the Go Modules directory hashing algorithm directly because that
algorithm has already had its cross-platform quirks and other wrinkles
addressed during the Go Modules release process, and is now used
prolifically enough in Go codebases that breaking changes to the upstream
algorithm would be very expensive to the Go ecosystem.
This is also a bit of forward planning, anticipating that later we'll use
hashes in a top-level lock file intended to be checked in to user version
control, and then use those hashes also to verify packages _during_
installation, where we'd need to be able to hash unpacked zip files. The
Go Modules hashing algorithm is already implemented to consistently hash
both a zip file and an unpacked version of that zip file.
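Concretely, that algorithm is available as golang.org/x/mod/sumdb/dirhash;
a minimal sketch of hashing a package in both forms (the path arguments
are placeholders):

    package example

    import "golang.org/x/mod/sumdb/dirhash"

    // hashPackage hashes an unpacked provider package directory and a zip
    // archive with the same "h1:" scheme; the results agree when the
    // prefix matches the layout of the entries inside the zip.
    func hashPackage(pkgDir, zipFile, prefix string) (dirHash, zipHash string, err error) {
        dirHash, err = dirhash.HashDir(pkgDir, prefix, dirhash.Hash1)
        if err != nil {
            return "", "", err
        }
        zipHash, err = dirhash.HashZip(zipFile, dirhash.Hash1)
        if err != nil {
            return "", "", err
        }
        return dirHash, zipHash, nil
    }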
There's still a lot of work to do here around both the UX and the
follow-up steps that need to happen after installation completes, but this
is enough to facilitate some initial end-to-end testing of the new-style
install process.
We cannot evaluate expansion during validation, since the values may not
be known at that time.
Inject a nodeValidateModule, using the "Concrete" pattern used for other
node types during graph building. This node will always evaluate to a
single module instance, so that we have a valid context within which to
evaluate all sub resources.
Make the expansion logic easier to follow, keeping the evaluation and
registration local to the switch cases. We don't validate anything about
count or for_each here (config loading should handle that), and we don't need
to keep relying on the count == -1 sentinel value.
os.NewFile was called on file descriptors 3, 4, and 5 during every init,
in case this process happened to be running inside panicwrap. If the
runtime has already chosen one of these file descriptors to use
internally, starting to poll them can cause the runtime to crash.
Initialize the file descriptors lazily, only after Wrapped is checked and
we know that they belong to us.
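A minimal sketch of the lazy approach (the descriptor names are
illustrative): os.NewFile only runs once panicwrap confirms that this
process is the wrapped child, so the runtime's own descriptors are never
touched:

    package example

    import (
        "os"

        "github.com/mitchellh/panicwrap"
    )

    // wrappedStreams opens the inherited descriptors only after we know
    // we are running inside panicwrap, so fds the Go runtime may have
    // claimed for itself are left alone.
    func wrappedStreams() (stdout, stderr *os.File) {
        if !panicwrap.Wrapped(nil) {
            return nil, nil
        }
        return os.NewFile(3, "stdout"), os.NewFile(4, "stderr")
    }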
Replace the graphNodeRoot for the main graph with a nodeCloseModule for
the root module. Use a new transformer as well, so as to not change any
behavior of DynamicExpand graphs.
Closing out the root module like we do with sub modules means we no
longer need the OrphanResourceTransformer, or the NodeDestroyResource.
The old resource destroy logic has mostly moved into the instance nodes,
and the remaining resource node was just for cleanup, which now needs to
be done by the module instead, since there isn't always a
NodeDestroyResource to be evaluated.
The more-correct state caused a few tests to fail, which need to be
cleaned up to match the state without empty resource husks.
There was one more non-dependent type to look for when pruning unused
values. This fixes that oversight, but still leaves the ugly concrete type
checking, which we need to remove.
During plan, anything dependent on a module can connect to the module
expansion node, because all instance nodes are created during
DynamicExpand. During apply the instance nodes are created from the
diff, so we need a root module to terminate the logical module subgraph.
Besides providing an anchor for the completion of a module, the
nodeCloseModule can also be used to clean up the orphan resource and
module placeholders in the state.
NodeDestroyResource does not require a provider, so a temporary
GraphNodeNoProvider was previously used to differentiate it from other
resource nodes. We can now decouple the destroy node from the abstract
resource, which was what added the ProvidedBy method, and remove the
NoProvider method.