The code is loosely based on state/remote/gcs_test.go. If the
GOOGLE_PROJECT environment variable is set, this test will:
1) create a new bucket; error out if the bucket already exists.
2) create a new state
3) list states and ensure that the newly created state is listed
4) ensure that an object with the expected name exists
5) run "state/remote".TestClient()
6) delete the state
The bucket is deleted when the test exits, though this may fail if the
bucket is not empty.
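For reference, a rough sketch of the test's shape, assuming the
cloud.google.com/go/storage client; the bucket-name scheme and the commented
steps are illustrative rather than the exact test code:

```go
package gcs

import (
    "context"
    "fmt"
    "os"
    "testing"
    "time"

    "cloud.google.com/go/storage"
)

func TestBackend(t *testing.T) {
    projectID := os.Getenv("GOOGLE_PROJECT")
    if projectID == "" {
        t.Skip("GOOGLE_PROJECT not set; skipping GCS backend test")
    }

    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        t.Fatal(err)
    }

    // (1) Create a fresh bucket; Create fails if the bucket already exists,
    // so we never touch pre-existing data.
    bucketName := fmt.Sprintf("tf-backend-test-%d", time.Now().UnixNano())
    bucket := client.Bucket(bucketName)
    if err := bucket.Create(ctx, projectID, nil); err != nil {
        t.Fatal(err)
    }

    // Delete the bucket when the test exits; this fails if the bucket is
    // not empty.
    defer func() {
        if err := bucket.Delete(ctx); err != nil {
            t.Logf("failed to delete bucket %q: %v", bucketName, err)
        }
    }()

    // (2)-(6): create a state, list states, check that the expected object
    // exists, run "state/remote".TestClient, and delete the state.
}
```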
This config option was used by the legacy "gcs" client. If set, we're
using it for the default state -- all other states still use the
"state_dir" setting.
Calling context.Background() from outside the main() function is
discouraged. The configure functions are only called from
"…/helper/schema".Backend.Configure which provides the Background context,
i.e. a long-lived context we can use for backend communication.
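A minimal sketch of the pattern, assuming a GCS-style backend; the struct and
field names here are illustrative:

```go
package gcs

import (
    "context"
    "fmt"

    "cloud.google.com/go/storage"
)

type Backend struct {
    storageContext context.Context
    storageClient  *storage.Client
}

// configure receives the long-lived context from Backend.Configure, so we
// never need to call context.Background() ourselves.
func (b *Backend) configure(ctx context.Context) error {
    b.storageContext = ctx

    client, err := storage.NewClient(ctx)
    if err != nil {
        return fmt.Errorf("failed to create Google Storage client: %v", err)
    }
    b.storageClient = client
    return nil
}
```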
Provide a way to pass in credentials to be used by the module.Storage
when contacting registries.
Remove the mockTLSServer and use a static discovery map pointing to the
HTTP URL for tests.
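A simplified illustration of the approach: rather than a mock TLS server
serving discovery documents, the test keeps a fixed map from registry hostname
to the plain-HTTP URL of an httptest server. mockRegistryHandler and
testDiscovery are placeholder names, not the real test helpers:

```go
package module

import (
    "net/http"
    "net/http/httptest"
)

// mockRegistryHandler stands in for the mock registry's HTTP handler.
func mockRegistryHandler() http.Handler {
    mux := http.NewServeMux()
    // ... register /v1/modules/... routes for the mock registry
    return mux
}

// testDiscovery returns a static hostname -> service URL map that the test
// can consult instead of performing real service discovery.
func testDiscovery() (map[string]string, *httptest.Server) {
    server := httptest.NewServer(mockRegistryHandler())
    return map[string]string{
        "registry.terraform.io": server.URL + "/v1/modules/",
    }, server
}
```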
Update the command package to use the new module storage. Move the old
command output strings into the module storage itself. This could be
moved back later, either by using UI callbacks or by designing a module
storage interface once we know what the final requirements will look
like.
Exporting ModuleStorage allows us to explicitly pass in the storage
location rather than extracting it out of the getter.Storage interface,
to set a UI for communicating actions back to the user, and to accept a
services Disco for discovery.
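Conceptually, the exported type looks something like this; the exact field set
and import paths are assumptions based on the description above:

```go
package module

import (
    getter "github.com/hashicorp/go-getter"
    "github.com/hashicorp/terraform/svchost/disco"
    "github.com/mitchellh/cli"
)

// ModuleStorage wraps the go-getter storage with the extra state we need.
type ModuleStorage struct {
    // StorageDir is the full path to the directory where modules are stored,
    // passed in explicitly rather than dug out of getter.Storage.
    StorageDir string

    // Storage is the underlying go-getter storage implementation.
    Storage getter.Storage

    // Ui is used to communicate download and installation actions to the user.
    Ui cli.Ui

    // Services is used to discover registry endpoints for module sources.
    Services *disco.Disco
}
```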
If a provider configuration is inherited from another module, any
interpolations in that config won't have variables declared locally. Let
the config be validated only in its original location.
Registry modules can't be handled directly by the getter.Storage
implementation, which doesn't know how to handle versions. First see if
we have a matching module stored that satisfies our constraints. If
not, and we're getting or updating, we can look it up in the registry.
This essentially takes the place of a "registry detector" for go-getter,
but requires the intermediate step of resolving the version dependency.
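In rough terms, the resolution order described above looks like the following
sketch; every helper named here is a placeholder for illustration, not the
real implementation:

```go
package module

import "fmt"

func resolveRegistryModule(source, constraint string, allowFetch bool) (string, error) {
    // 1. Prefer a module already present in local storage that satisfies
    //    the version constraint.
    if dir, ok := findStoredModule(source, constraint); ok {
        return dir, nil
    }

    // 2. Otherwise, only when we're getting or updating, ask the registry
    //    which versions exist and pick the newest one that matches.
    if !allowFetch {
        return "", fmt.Errorf("module %q not found locally and fetching is disabled", source)
    }
    versions, err := lookupVersions(source)
    if err != nil {
        return "", err
    }
    version, err := newestMatching(versions, constraint)
    if err != nil {
        return "", err
    }

    // 3. Resolve that version to its final download URL and let go-getter
    //    fetch it into local storage.
    return fetchModule(source, version)
}

// Stubs so the sketch is self-contained; the real code consults the module
// manifest and the registry API.
func findStoredModule(source, constraint string) (string, bool)           { return "", false }
func lookupVersions(source string) ([]string, error)                      { return nil, nil }
func newestMatching(versions []string, constraint string) (string, error) { return "", nil }
func fetchModule(source, version string) (string, error)                  { return "", nil }
```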
This also starts breaking up the huge Tree.Load method into more
manageable parts. It was sorely needed, as indicated by the difficulty
encountered in this refactor. There's still a lot that can be done to
improve this, but at least there are now a few easier-to-read methods
when we come back to it.
The detection of registry modules will have to happen in multiple
phases. The go-getter interface requires that the detector return the
final URL, while we won't know that until we verify which version we
need. This leaves the registry sources broken, to be re-integrated in a
following commit.
Wire up HTTP so we can test the mock discovery service
Test lookupModuleVersions
Add a versions endpoint to the mock registry, and use that to verify the
lookupModuleVersions behavior.
lookupModuleVersions takes a Disco as the argument so a custom Transport
can be injected, and uses that transport for its own client if it is set.
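A sketch of that signature and transport handling, assuming the Disco exposes
its Transport; the response shape and URL layout are assumptions for
illustration:

```go
package module

import (
    "encoding/json"
    "fmt"
    "net/http"

    "github.com/hashicorp/terraform/svchost/disco"
)

// moduleVersions is an illustrative stand-in for the registry's
// versions-endpoint response.
type moduleVersions struct {
    Modules []struct {
        Versions []struct {
            Version string `json:"version"`
        } `json:"versions"`
    } `json:"modules"`
}

// lookupModuleVersions asks the registry which versions of a module exist.
// It takes a Disco so a custom Transport can be injected in tests; if one is
// set, our own HTTP client uses it too.
func lookupModuleVersions(regDisco *disco.Disco, moduleURL string) (*moduleVersions, error) {
    client := &http.Client{}
    if t := regDisco.Transport; t != nil {
        client.Transport = t
    }

    resp, err := client.Get(moduleURL + "/versions")
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("error looking up module versions: %s", resp.Status)
    }

    var vs moduleVersions
    if err := json.NewDecoder(resp.Body).Decode(&vs); err != nil {
        return nil, err
    }
    return &vs, nil
}
```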
Test looking up modules with the default registry
Add registry.terraform.io to the hostnames that the mock registry
resolves to localhost.
ACC test looking up module versions
Lookup a basic module for the available version in the default registry.
This test highlights how changing an intermediate source path prevents
reloading of submodules. While this is somewhat of an edge case now, it
becomes quite common in the case where module versions are updated.
Adds a basic detector for registry module source strings. While this isn't
a thorough validation, it will eliminate anything that is definitely
not a registry module, and split out our host and module ID strings.
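A sketch of what such a detector can look like: a permissive pattern that
rejects anything that clearly isn't "host/namespace/name/provider" or
"namespace/name/provider" and splits the host from the module ID. The regular
expression and function name are illustrative, not the exact ones used:

```go
package module

import (
    "regexp"
    "strings"
)

var registrySourceRe = regexp.MustCompile(
    `^([0-9a-zA-Z.-]+(?::\d+)?/)?[0-9a-zA-Z_-]+/[0-9a-zA-Z_-]+/[0-9a-zA-Z_-]+$`)

// parseRegistrySource returns the registry host (empty for the default
// registry) and the "namespace/name/provider" module ID, or ok=false if the
// string is definitely not a registry module source.
func parseRegistrySource(source string) (host, module string, ok bool) {
    if !registrySourceRe.MatchString(source) {
        return "", "", false
    }

    parts := strings.SplitN(source, "/", 4)
    if len(parts) == 4 {
        // A leading fourth segment is treated as the registry hostname.
        return parts[0], strings.Join(parts[1:], "/"), true
    }
    return "", source, true
}
```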
lookupModuleVersions interrogates the registry for the available
versions of a particular module and the tree of dependencies.
Submodules were located by using their module path as the storage key.
Now that modules may have versions, a submodule needs to know how to
locate the correct source depending on the versions of its ancestors in
the tree.
Add a version field to each Tree, and a pointer back to the parent Tree
to step back through the ancestors. The new versionedPathKey method uses
this information to build a unique key for each module, dependent on the
ancestor versions.
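A condensed sketch of the idea, with illustrative field names: the key for a
submodule is built from its own source and version plus those of every
ancestor, so the same relative source under two different parent versions is
stored in two different locations:

```go
package module

import "strings"

type Tree struct {
    name    string
    source  string
    version string
    parent  *Tree
}

// versionedPathKey builds a storage key for this tree that is unique for the
// chain of ancestor sources and versions above it.
func (t *Tree) versionedPathKey() string {
    var parts []string
    for cur := t; cur != nil; cur = cur.parent {
        p := cur.source
        if cur.version != "" {
            p += "." + cur.version
        }
        // Prepend so the key reads root-to-leaf.
        parts = append([]string{p}, parts...)
    }
    return strings.Join(parts, "|")
}
```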
Not only do stored modules need to know their version if it exists, but
any relative source needs to know all the ancestor versions in order to
resolve correctly.
The getter.Storage abstraction is proving entirely inadequate here, but
we can't replace it wholesale at the moment.
The Tree loader needs to know the location of the manifest before it can
start loading any modules. Since the version will have to be part of the
hashed storage key, there is no way to know which versions of each module
are stored. The storageDir function will extract the StorageDir field
from the underlying FolderStorage instance for the tree to locate the
manifest.
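A sketch of that accessor, assuming the getter.FolderStorage implementation
used by default; other Storage implementations would need their own handling:

```go
package module

import getter "github.com/hashicorp/go-getter"

// storageDir returns the directory the storage keeps modules in, so the tree
// can locate its manifest before any module has been loaded.
func storageDir(s getter.Storage) string {
    if fs, ok := s.(*getter.FolderStorage); ok {
        return fs.StorageDir
    }
    return ""
}
```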
To add registry support, a workaround in the local module storage was
added to record the subdirectory containing the module source from
within the archive file. Here we replace that temporary implementation
with the full manifest needed to record the necessary module metadata
for module loading.
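A sketch of the kind of record the manifest keeps per stored module; the field
names are illustrative of the metadata described rather than a fixed schema:

```go
package module

// moduleRecord describes one module in local storage.
type moduleRecord struct {
    // Source is the configured source string for the module.
    Source string `json:"Source"`

    // Key is the storage key the module was stored under.
    Key string `json:"Key"`

    // Version is the exact version that was actually fetched, as opposed to
    // the constraint recorded in the configuration.
    Version string `json:"Version"`

    // Dir is the directory in local storage holding the module.
    Dir string `json:"Dir"`

    // Root is the subdirectory inside an archive that contains the module
    // source, replacing the earlier ad-hoc subdirectory workaround.
    Root string `json:"Root"`
}
```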
In order to support versioned modules, the actual stored version needs
to be recorded. This can't be derived from the configuration, because
the configuration only contains the constraints, and at load time we need
to be able to enumerate the stored modules and all versions in order to
resolve them.
While the local storage key will be derived from the source and version,
that information is lost once it's hashed. While the entire storage
layer could be replaced to encode the needed data in the path itself,
this provides a minimal change to work with the existing storage code.
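A minimal sketch of why the manifest is required: the location a module is
stored under is derived by hashing the source (and now the version), so
neither can be recovered from the key itself. The hash choice here is
illustrative:

```go
package module

import (
    "crypto/md5"
    "encoding/hex"
)

// moduleKey derives the local storage key from the source and, when present,
// the resolved version. Once hashed, the source and version are unrecoverable,
// which is why the manifest records them separately.
func moduleKey(source, version string) string {
    key := source
    if version != "" {
        key += "." + version
    }
    sum := md5.Sum([]byte(key))
    return hex.EncodeToString(sum[:])
}
```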