This should eventually grow to be a step that actually verifies the
validity of the docs source prior to publishing the artifact that a
downstream publishing pipeline can consume, but for the moment it's really
just a placeholder since we have no such validation step and no downstream
pipeline consuming this artifact.
The general idea here is that the artifacts from this workflow should be
sufficient for all downstream release steps to occur without any direct
access to the Terraform CLI repository. This is intended to eventually
meet that ideal, but as of this commit the website docs publishing step
_does_ still depend on direct access to this repository.
This uses the decoupled build and run strategy to run the e2etests so that
we can arrange to run the tests against the real release packages produced
elsewhere in this workflow, rather than ones generated just in time by
the test harness.
The modifications to make-archive.sh here make it more consistent with its
originally-intended purpose of producing a harness for testing "real"
release executables. Our earlier compromise of making it include its own
terraform executable came from a desire to use that script as part of
manual cross-platform testing when we weren't yet set up to support
automation of those tests as we're doing here. That does mean, however,
that the terraform-e2etest package content must be combined with content
from a terraform release package in order to produce a valid context for
running the tests.
We use a single job to cross-compile the test harness for all of the
supported platforms, because that build is relatively fast and so not
worth the overhead of a matrix build, but then use a matrix build to
actually run the tests so that we can run them in a worker matching the
target platform.
We currently have access only to amd64 (x64) runners in GitHub Actions
and so for the moment this process is limited only to the subset of our
supported platforms which use that architecture.
For the moment this is just an experimental additional sidecar package
build process, separate from the one we really use for releases, so that
we can get some experience building in the GitHub Actions environment
before hopefully eventually switching to using the artifacts from this
process as the packages we'll release through the official release
channels.
It will react to any push to one of our release branches or to a release
tag by building official-release-like .zip, .deb, and .rpm packages, along
with Docker images, based on the content of the corresponding commit.
For the moment this doesn't actually produce _shippable_ packages because
in particular it doesn't know how to update our version/version.go file
to hard-code the correct version number. Once Go 1.18 is released and
we've upgraded to it, we'll switch to using debug.ReadBuildInfo to
determine our version number at runtime and so will no longer need to
directly update a source file for each release, but that functionality
isn't yet available in our current Go 1.17 release.
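As a sketch of where that's headed, assuming only the standard library's
runtime/debug package (the function and fallback value below are
illustrative, not our actual version package):

```go
package version

import "runtime/debug"

// Version returns the module version recorded by the Go toolchain in the
// build metadata, falling back to a development placeholder when the
// binary wasn't built as a released module version.
func Version() string {
	if info, ok := debug.ReadBuildInfo(); ok && info.Main.Version != "" {
		return info.Main.Version
	}
	return "(devel)"
}
```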
When creating a Set of BasicEdges, the Hashcode function is used to determine
map keys for the underlying set data structure.
The string hex representation of the two vertices' pointers is unsafe to use
as a map key, since these addresses may change between the time they are added
to the set and the time the set is operated on.
Instead we modify the Hashcode function to maintain the references to the
underlying vertices so they cannot be garbage collected during the lifetime
of the Set.
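A minimal sketch of the change, with types simplified from the dag
package; the key point is returning a value that holds the vertices
rather than a formatted pointer string:

```go
package dag

// Vertex can be any value with a comparable dynamic type.
type Vertex interface{}

// basicEdge connects a source vertex to a target vertex.
type basicEdge struct {
	S, T Vertex
}

// Hashcode returns a map key for this edge. Returning an array that holds
// the vertices themselves (rather than a hex-formatted pointer string)
// keeps the underlying values referenced for as long as the Set holds the
// key, so they cannot be collected or reused while the Set is alive.
func (e *basicEdge) Hashcode() interface{} {
	return [2]interface{}{e.S, e.T}
}
```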
TransitiveReduction does not rely on having a single root; it requires
only that the graph be free of cycles.
DepthFirstWalk and ReverseDepthFirstWalk do not do a topological sort,
so if order matters, TransitiveReduction must be run first.
These two functions were left during a refactor to ensure the old
behavior of a sorted walk was still accessible in some manner. The
package has since been removed from any public API, and the sorted
versions are no longer called, so we can remove them.
Create a separate `validateMoveStatementGraph` function so that
`ValidateMoves` and `ApplyMoves` both check the same conditions. Since
we're not using the builtin `graph.Validate` method (because we may have
multiple roots and want better cycle diagnostics), we need to add checks
for self references too. While multiple roots are an error enforced by
`Validate` for the concurrent walk, they are OK when using
`TransitiveReduction` and `ReverseDepthFirstWalk`, so we can skip that
check.
Apply moves must first use `TransitiveReduction` to reduce the graph,
otherwise nodes may be skipped if they are passed over by a transitive
edge.
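A hedged sketch of the shared check; the dag method names mirror the real
package, but the diagnostics here are simplified:

```go
package refactoring

import (
	"fmt"

	"github.com/hashicorp/terraform/internal/dag"
)

func validateMoveStatementGraph(g *dag.AcyclicGraph) error {
	// We skip the builtin graph.Validate (it would reject our multiple
	// roots), so self references must be rejected explicitly here.
	for _, e := range g.Edges() {
		if e.Source() == e.Target() {
			return fmt.Errorf("move statement %v depends on itself", e.Source())
		}
	}

	// Report each cycle with all of its participating statements, which
	// reads better than graph.Validate's generic error.
	for _, cycle := range g.Cycles() {
		return fmt.Errorf("move statements form a cycle: %v", cycle)
	}

	return nil
}
```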
Changing only the index on a nested module will cause all nested moves
to create cycles, since their full addresses will match both the From
and To addresses. When building the dependency graph, check if the
parent is only changing the index of the containing module, and prevent
the backwards edge for the move.
Add a method for checking if the From and To addresses in a move
statement are only changing the indexes of modules relative to the
statement module.
This is needed because move statements nested within the module will be
able to match against both the From and To addresses, causing cycles in
the order of move operations.
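Roughly, the check looks like this sketch, which uses the real
addrs.ModuleInstance type but an invented function name:

```go
package refactoring

import "github.com/hashicorp/terraform/internal/addrs"

// onlyChangesIndexes reports whether from and to name the same chain of
// module calls and differ, if at all, only in their instance keys. Such a
// statement moves instances of a module without renaming anything inside
// it, so nested statements must not be ordered after it.
func onlyChangesIndexes(from, to addrs.ModuleInstance) bool {
	if len(from) != len(to) {
		return false
	}
	for i := range from {
		if from[i].Name != to[i].Name {
			return false
		}
	}
	return true
}
```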
There was an unintended regression in go-getter v1.5.9's GitGetter which
caused us to temporarily fork that particular getter into Terraform to
expedite a fix. However, upstream v1.5.10 now includes a
functionally-equivalent fix and so we can heal that fork by upgrading.
We'd also neglected to update the Module Sources docs when upgrading to
go-getter v1.5.9 originally and so we were missing documentation about the
new "depth" argument to enable shadow cloning, which I've added
retroactively here along with documenting its restriction of only
supporting named refs.
This new go-getter release also introduces a new credentials-passing
method for the Google Cloud Storage getter, and so we must incorporate
that into the Terraform-level documentation about module sources.
This paragraph is trying to say that try only works for dynamic errors,
not for errors that are _not_ based on dynamic decision-making in
expressions.
I'm not sure if this typo was always here or if it was mistakenly "corrected"
at some point, but either way the word "probably" changes the meaning
of this sentence entirely, making it seem like Terraform is hedging
the likelihood of a problem rather than checking exactly for one.
When applying module `moved` statements by iterating through modules in
state, we previously required an exact match from the `moved`
statement's `from` field and the module address. This permitted moving
resources directly inside a module, but did not recurse into module
calls within those moved modules.
This commit moves that exact match requirement so that it only applies
to `moved` statements targeting resources. In turn this allows nested
modules to be moved.
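Illustratively, the relaxed matching rule looks something like this
sketch (the helper name is invented and the step comparison simplified):

```go
package refactoring

import "github.com/hashicorp/terraform/internal/addrs"

// matchesMove reports whether a module instance in state is affected by a
// moved statement's from address. Resource moves still require an exact
// module match, while module moves accept any instance with the from
// address as a prefix, so calls nested inside the moved module move too.
func matchesMove(from, candidate addrs.ModuleInstance, targetsResource bool) bool {
	if targetsResource {
		return candidate.Equal(from)
	}
	if len(candidate) < len(from) {
		return false
	}
	for i := range from {
		if candidate[i] != from[i] {
			return false
		}
	}
	return true
}
```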
Resource dependencies are by nature an unordered collection, but they're
persisted to state as a JSON array (in random order). This makes a mess for
`terraform apply -refresh-only`, which sees the new random order as a change
that requires the user to approve a state update.
(As an additional problem on top of that, the user interface for refresh-only
runs doesn't expect to see that as a type of change, so it says "no changes!
would you like to update to reflect these detected changes?")
This commit changes `ResourceInstanceObject.Encode()` to sort the in-memory
slice of dependencies (lexically, by address) before passing it on to be
compared and persisted. This appears to fix the observed UI issues with a
minimum of logic changes.
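The core of the change is a sort like the following sketch; the types
follow the states and addrs packages, but the helper itself is
illustrative:

```go
package states

import (
	"sort"

	"github.com/hashicorp/terraform/internal/addrs"
)

// sortedDependencies returns a lexically sorted copy of the dependency
// slice, so the order Encode passes on for comparison and persistence is
// deterministic. Sorting a copy keeps Encode free of side effects on the
// in-memory object.
func sortedDependencies(deps []addrs.ConfigResource) []addrs.ConfigResource {
	sorted := make([]addrs.ConfigResource, len(deps))
	copy(sorted, deps)
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].String() < sorted[j].String()
	})
	return sorted
}
```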
As the cloud e2e tests evolved, some common patterns became apparent.
This standardizes and consolidates those patterns into a common test
runner that takes the table tests and runs them in parallel. Some tests
also needed to be converted to use table tests.
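The consolidated runner follows the standard Go subtest pattern, roughly
like this sketch (names invented):

```go
package e2e

import "testing"

// runTestCases runs each table entry as its own parallel subtest, so all
// cloud e2e cases share one runner instead of duplicating the loop.
func runTestCases(t *testing.T, cases map[string]func(t *testing.T)) {
	for name, tc := range cases {
		tc := tc // capture range variable for the parallel closure
		t.Run(name, func(t *testing.T) {
			t.Parallel()
			tc(t)
		})
	}
}
```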
Previously we would only ever add new lock entries or update existing
ones. However, it's possible that over time a module may _cease_ using
a particular provider, at which point we ought to remove it from the lock
file so that operations won't fail when seeing that the provider cache
directory is inconsistent with the lock file.
Now the provider installer (EnsureProviderVersions) will remove any lock
file entries that relate to providers not included in the given
requirements, which therefore makes the resulting lock file properly match
the set of packages the installer wrote into the cache.
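The pruning step amounts to something like this sketch, using plain maps
in place of the real depsfile lock types:

```go
package providerlocks

// pruneStaleLocks deletes lock entries for providers that are no longer
// in the module's requirements, so the lock file matches the set of
// packages the installer wrote into the cache.
func pruneStaleLocks(locked map[string]string, required map[string]bool) {
	for provider := range locked {
		if !required[provider] {
			delete(locked, provider)
		}
	}
}
```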
This does potentially mean that someone could inadvertently defeat the
lock by removing a provider dependency, running "terraform init", then
undoing that removal, and finally running "terraform init" again. However,
that seems relatively unlikely compared to the likelihood of removing
a provider and keeping it removed, and in the event it _did_ happen the
changes to the lock entry for that provider would be visible in the diff
of the provider lock file as usual, and so could be noticed in code
review just as for any other change to dependencies.
When showing a saved plan, we do not need to check the state lineage
against current state, because the plan cannot be applied. This is
relevant when plan and apply specify a `-state` argument to choose a
non-default state file. In this case, the stored prior state in the plan
will not match the default state file, so a lineage check will always
error.