Added a few more test cases for CustomizeDiff and caught another error
in the process. I think this is ready for review now, and possibly for
some real-world testing by porting a few resources that would benefit
from the feature.
It's alive! CustomizeDiff logic now has been inserted into the diff
process. The test_resource_with_custom_diff resource provides some basic
testing and a reference implementation.
There should now be plenty of test coverage for this feature via the
tests added for ResourceDiff, the basic case added to the schemaMap.Diff
tests, and the test resource, but more can be added to cover any
specific cases that come up.
As the ResourceDiff interface is currently specced, the only diff
operation that can function on a non-computed key is ForceNew, and only
if a diff already exists for that key. As such, allowing the user to
clear keys that cannot be operated on afterwards does not make much
sense.
Will re-add this function if it's determined on review that it's needed
after all.
Final set of coverage for ResourceDiff, plus bug fixes to correct
issues the test cases brought up.
Will be dropping ClearAll in the next commit, as it does not make much
sense given how this interface is intended to be used.
This provides a deep copy of a schemaMap, which will be needed for the
diff customization process as ResourceDiff will be able to flag fields
as ForceNew - we don't want to affect the source schema.
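As a rough sketch of the idea only (using toy types rather than the real
helper/schema structures), a deep copy like the following lets
ResourceDiff mutate its own copy of the schema safely:

```
// Illustrative only: a simplified deep copy of a map of schema entries.
// The toy Schema type stands in for helper/schema.Schema; the real copy
// also has to recurse into Elem (which may hold a *Schema or *Resource).
package sketch

type Schema struct {
	Type     string
	Computed bool
	ForceNew bool
	Elem     interface{}
}

type schemaMap map[string]*Schema

// DeepCopy returns a copy whose entries can be modified (e.g. flagged
// ForceNew by ResourceDiff) without touching the source schema.
func (m schemaMap) DeepCopy() schemaMap {
	out := make(schemaMap, len(m))
	for k, v := range m {
		c := *v
		if elem, ok := v.Elem.(*Schema); ok {
			elemCopy := *elem
			c.Elem = &elemCopy
		}
		out[k] = &c
	}
	return out
}
```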
In diffString, removed values are marked if the old value is non-nil
and the new value is nil, regardless of whether the new value is marked
as computed. With the new diff override behaviour, this can lead to
issues where a nil attribute may get inserted into the diff if the new
value has been marked as computed (as the value will be marked as
missing, but computed).
This propagates into finalizeDiff in some respects, because even if
NewComputed is manually set in the diff that gets passed in, it is
ignored and the schema becomes the only source of truth. Since our new
diff behaviour is designed mainly for computed keys, it stands to
reason that there will be times when we want to set NewComputed on a
key and see that change reflected in the diff.
These two small changes make that happen. No test regressions have been
observed from this change, so it seems safe and non-invasive.
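To illustrate the intent only (toy types, not the actual
diffString/finalizeDiff code), the guard boils down to something like
this:

```
// Conceptual sketch: check the computed flag before treating a missing
// new value as a removal, using a toy attribute-diff type.
package sketch

type attrDiff struct {
	Old, New    string
	NewComputed bool
	NewRemoved  bool
}

func markRemoval(d *attrDiff, oldSet, newSet, newIsComputed bool) {
	switch {
	case oldSet && !newSet && newIsComputed:
		// The new value is unknown until apply; surface it as
		// computed rather than recording a removal.
		d.NewComputed = true
	case oldSet && !newSet:
		d.NewRemoved = true
	}
}
```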
The beginning of tests for SetNew. Noticed that removed values are not
logged for computed keys in a diff - not too sure if that is going to be
a problem (possibly not, as GetChange in ResourceData should still
reference state for the old value). I came across this most notably in
the "TestSetNew/complex-ish_set_diff" test, for reference.
More tests are pending to cover other functions and test cases as
defined in #8769.
This ensures that when we hook this into the main diff logic, we can
re-diff just the keys that were modified, so the diff is not re-run on
keys that were not touched, or on keys that were intentionally removed.
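A minimal sketch of the bookkeeping this implies; the names updatedKeys,
removedKeys and diffOne are hypothetical, not the real implementation:

```
// Sketch only: re-diff just the keys that CustomizeDiff touched,
// skipping anything that was intentionally removed from the diff.
package sketch

func reDiffCustomizedKeys(updatedKeys, removedKeys map[string]bool, diffOne func(key string) error) error {
	for key := range updatedKeys {
		if removedKeys[key] {
			continue
		}
		if err := diffOne(key); err != nil {
			return err
		}
	}
	return nil
}
```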
This should complete the feature set of the prototype. This function
removes a specific key from the existing diff, preventing conflicts.
Further functionality might be needed to make this behave as expected,
namely ensuring that we don't re-process the whole diff after the
CustomizeDiffFunc runs.
This new prototype removes the dependence on an underlying
ResourceData, moving several items up so the approach more closely
matches ResourceData. Namely, we create our own MultiLevelFieldReader
that also layers updated diff values on top. New values use a small
re-implementation of the MapFieldWriter/Reader that allows computed
values to be set.
The assumption now is that a second diff will happen after the first
one is done, processing the new values set in the two new reader/writer
levels and updating the diff as necessary.
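Conceptually, reads fall through an ordered stack of levels with the
newly written values on top; a toy lookup (not the real
MultiLevelFieldReader) looks like this:

```
// Toy illustration of the layering idea: later levels (e.g. newly
// written diff values) shadow earlier ones (state, config, the
// original diff).
package sketch

type level map[string]interface{}

func readAcrossLevels(levels []level, key string) (interface{}, bool) {
	for i := len(levels) - 1; i >= 0; i-- {
		if v, ok := levels[i][key]; ok {
			return v, true
		}
	}
	return nil, false
}
```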
This adds a new object, ResourceDiff, to the schema package. This
object, in conjunction with a function defined in CustomizeDiff in the
resource schema, allows for in-flight customization of a Terraform
diff. This helps support use cases where necessary changes to a
resource cannot be detected in config, such as changes to computed
fields (most of this object's utility works on computed fields only).
It also allows for a wholesale wipe of the diff so that diff logic can
be offloaded entirely to an external API, if that better suits a
specific provider.
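For illustration, a provider might wire this up roughly as follows; the
CustomizeDiff signature and the ResourceDiff methods shown (Get,
HasChange, SetNew, ForceNew) reflect the current prototype and may
still change:

```
package example

import "github.com/hashicorp/terraform/helper/schema"

// Rough sketch of a resource using CustomizeDiff; details assumed from
// the prototype and subject to change.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		// Create/Read/Update/Delete omitted for brevity.
		Schema: map[string]*schema.Schema{
			"region":   {Type: schema.TypeString, Required: true},
			"endpoint": {Type: schema.TypeString, Computed: true},
		},
		CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
			// The computed "endpoint" follows "region": when the
			// region changes, show the new endpoint in the diff and
			// force the resource to be replaced.
			if d.HasChange("region") {
				region := d.Get("region").(string)
				if err := d.SetNew("endpoint", "service."+region+".example.com"); err != nil {
					return err
				}
				if err := d.ForceNew("endpoint"); err != nil {
					return err
				}
			}
			return nil
		},
	}
}
```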
As part of this work, many internal diff functions have been moved to
use a special resourceDiffer interface, to allow for shared
functionality between ResourceDiff and ResourceData. This may be
extended to DiffSuppressFunc as well, which would restrict the use of
ResourceData in DiffSuppressFunc to its generally read-only fields.
This work is not yet in its final state - CustomizeDiff is not yet
implemented in the general diff workflow, new functions may be added
(notably Clear() for a single key), and functionality may be altered.
Tests will follow as well.
If registry API discovery fails for a particular host then it's better to
generate an explicit error message for that early -- so we can tell the
user exactly what happened -- rather than assuming a default path and
then failing downstream when we get a 404 from that request.
This is a significant rework of the Modules getting started guide to be
in terms of the Consul module available via the Terraform Registry. This
allows us to introduce the registry as part of the tutorial, and also
gives us some auto-generated documentation to link to as context for
the tutorial.
This new module is designed quite differently from the one we formerly
used, and in particular it doesn't expose any server addresses for the
created Consul cluster -- due to using an auto-scaling group -- and thus
we're forced to use the (arguably-less-compelling) auto-scaling group name
for our output example.
Now includes more complete information on usage of private registries and
updates the module registry API documentation to include the new
version-enumeration endpoint along with describing Terraform's discovery
protocol.
The modules mechanism has changed quite a bit for version 0.11 and so
although simple usage remains broadly compatible there are some
significant changes in the behavior of more complex modules.
Since large parts of this were rewritten anyway, I also took the
opportunity to do some copy-editing to make the prose on this page more
consistent with our usual editorial voice and to wrap the long
lines.
In the 0.10 release we added an opt-in mode where Terraform would prompt
interactively for confirmation during apply. We made this opt-in to give
those who wrap Terraform in automation some time to update their scripts
to explicitly opt out of this behavior where appropriate.
Here we switch the default so that a "terraform apply" with no arguments
will -- if it computes a non-empty diff -- display the diff and wait for
the user to type "yes", in a similar vein to the "terraform destroy"
command.
This makes the commonly-used "terraform apply" a safe workflow for
interactive use, so "terraform plan" is now mainly for use in automation
where a separate planning step is used. The apply command remains
non-interactive when given an explicit plan file.
The previous behavior -- though not recommended -- can be obtained by
explicitly setting the -auto-approve option on the apply command line,
and indeed that is how all of the tests are updated here so that they can
continue to run non-interactively.
While the TLS handshakes are a fairly small overhead compared to
downloading the providers, clients in some situations are failing to
complete the TLS handshake in a timely manner. It's unclear whether this
is because heavily constrained clients are stalling on the major crypto
operations, or because the edge servers are throttling repeated requests
from the same IPs.
This should allow reusing the open TLS connection to the release edge
servers during init.
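A minimal illustration of the general idea, using only the standard
library (not the exact change here): share one http.Client, and thus its
Transport's connection pool, so repeated requests to the release host
reuse the established TLS connection rather than handshaking each time.

```
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"time"
)

// One shared client: its default Transport keeps idle connections
// alive, so repeated requests to the same host reuse the TLS session.
var releaseClient = &http.Client{Timeout: 30 * time.Second}

func fetch(url string) error {
	resp, err := releaseClient.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Drain the body so the underlying connection can go back into
	// the pool and be reused.
	io.Copy(ioutil.Discard, resp.Body)
	fmt.Println(url, resp.Status)
	return nil
}

func main() {
	// e.g. during init: list releases, then download, over the same
	// connection.
	for _, u := range []string{
		"https://releases.hashicorp.com/",
		"https://releases.hashicorp.com/terraform/",
	} {
		if err := fetch(u); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```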
This patch allows `build.sh` to be used with terraform plugins to
easily create cross-platform packages, using the same method as the
terraform Makefile:
```
mkdir scripts
curl https://raw.githubusercontent.com/hashicorp/terraform/master/scripts/build.sh -o scripts/build.sh
TF_RELEASE=1 sh -c "scripts/build.sh" # make bin
```
It's not always easy or convenient for a web application to determine its
own absolute URL to return, so here we pragmatically allow the download
source string from a registry to be a path relative to the download
endpoint.
Since X-Terraform-Get is a go-getter string, not all valid values are
valid URLs and so we sniff for certain relative-path-looking prefixes
in order to decide whether to apply the relative lookup transform.
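A sketch of the intended behavior, with illustrative names rather than
the real registry client code: treat strings that look like relative
paths as URLs relative to the download endpoint, and pass everything
else (full URLs and other go-getter strings) through unchanged.

```
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// looksRelative reports whether an X-Terraform-Get value appears to be
// a relative path; the prefixes sniffed here are illustrative.
func looksRelative(s string) bool {
	return strings.HasPrefix(s, "./") || strings.HasPrefix(s, "../")
}

// resolveDownloadSource resolves a relative source against the download
// endpoint URL; non-relative sources pass through unchanged.
func resolveDownloadSource(endpoint, source string) (string, error) {
	if !looksRelative(source) {
		return source, nil
	}
	base, err := url.Parse(endpoint)
	if err != nil {
		return "", err
	}
	rel, err := url.Parse(source)
	if err != nil {
		return "", err
	}
	return base.ResolveReference(rel).String(), nil
}

func main() {
	got, _ := resolveDownloadSource(
		"https://registry.example.com/v1/modules/ns/name/provider/1.0.0/download",
		"./archive.tar.gz",
	)
	fmt.Println(got)
}
```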
This PR changes manta from being a legacy remote state client to a new
backend type, and also includes creating a simple lock within manta.
This PR also unifies the way the triton client is configured (the
schema) and uses the same environment variables to set the backend up.
It is important to note that if the remote state path does not exist,
the backend will create that path. This means the user doesn't need to
fall into a chicken-and-egg situation of creating the directory in
advance before interacting with it.
Reuse the running consul server for all tests.
Update the lostLockConnection package, since the api client should no
longer lose a lock immediately on network errors.
This is from a commit just after the v1.0.0 release, because it removes
the Porter service dependency for tests. The client api package was not
changed.
Previously we forced all remote state backends to be wrapped in a
BackupState wrapper that generates a local "terraform.tfstate.backup"
file before updating the remote state.
This backup mechanism was motivated by allowing users to recover a
previous state if user error caused an undesirable change such as loss
of the record of one or more resources. However, it also has the downside
of flushing a possibly-sensitive state to local disk in a location where
users may not realize its purpose and accidentally check it into version
control. Those using remote state would generally prefer that state never
be flushed to local disk at all.
The use-case of recovering older states can be dealt with for remote
backends by selecting a backend that has preservation of older versions
as a first-class feature, such as S3 versioning or Terraform Enterprise's
first-class historical state versioning mechanism.
There remains still one case where state can be flushed to local disk: if
a write to the remote backend fails during "terraform apply" then we will
still create the "errored.tfstate" file to allow the user to recover. This
seems like a reasonable compromise because this is done only in an
_exceptional_ case, and the console output makes it very clear that this
file has been created.
Fixes #15339.
Add GetModule for the CLI to initialize from a registry module source.
Storage.GetModule fetches a module using the same detection and
discovery as used by normal module loading. The final copy is still
done by module.GetCopy to remove VCS files.