Merge branch 'hashicorp:main' into main
commit de8810cdd9
@@ -10,7 +10,7 @@ references:
executors:
go:
docker:
- image: docker.mirror.hashicorp.services/cimg/go:1.16
- image: docker.mirror.hashicorp.services/cimg/go:1.17.2
environment:
CONSUL_VERSION: 1.7.2
GOMAXPROCS: 4
@@ -26,7 +26,9 @@ jobs:
steps:
- checkout
- run: go mod verify
- run: make fmtcheck generate
- run: go install honnef.co/go/tools/cmd/staticcheck
- run: go install github.com/nishanths/exhaustive/...
- run: make fmtcheck generate staticcheck exhaustive
- run:
name: verify no code was generated
command: |
@@ -37,6 +39,20 @@ jobs:
git status --porcelain
exit 1
fi
- run:
name: verify go.mod and go.sum are correct
command: |
go mod tidy
git diff --quiet && exit 0
echo "please run 'go mod tidy' to ensure go.mod and go.sum are up to date"
exit 1
- run:
name: verify that our protobuf stubs are up-to-date
command: |
make protobuf
git diff --quiet && exit 0
echo "Run 'make protobuf' to ensure that the protobuf stubs are up-to-date."
exit 1

go-test:
executor:
@@ -1,6 +1,12 @@
# Contributing to Terraform

This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository in [the `terraform-providers` organization](https://github.com/terraform-providers) on GitHub. Instructions for developing each provider are in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).
This repository contains only Terraform core, which includes the command line interface and the main graph engine. Providers are implemented as plugins that each have their own repository linked from the [Terraform Registry index](https://registry.terraform.io/browse/providers). Instructions for developing each provider are usually in the associated README file. For more information, see [the provider development overview](https://www.terraform.io/docs/plugins/provider.html).

---

**Note:** Due to current low staffing on the Terraform Core team at HashiCorp, **we are not routinely reviewing and merging community-submitted pull requests**. We do hope to begin processing them again soon once we're back up to full staffing again, but for the moment we need to ask for patience. Thanks!

**Additional note:** The intent of the prior comment was to provide clarity for the community around what to expect for a small part of the work related to Terraform. This does not affect other PR reviews, such as those for Terraform providers. We expect that the relevant team will be appropriately staffed within the coming weeks, which should allow us to get back to normal community PR review practices. For the broader context and information on HashiCorp’s continued commitment to and investment in Terraform, see [this blog post](https://www.hashicorp.com/blog/terraform-community-contributions).

---
@@ -119,9 +125,6 @@ The following checks run when a PR is opened:

- Contributor License Agreement (CLA): If this is your first contribution to Terraform you will be asked to sign the CLA.
- Tests: tests include unit tests and acceptance tests, and all tests must pass before a PR can be merged.
- Test Coverage Report: We use [codecov](https://codecov.io/) to check both overall test coverage, and patch coverage.

-> **Note:** We are still deciding on the right targets for our code coverage check. A failure in `codecov` does not necessarily mean that your PR will not be approved or merged.

----
@@ -129,7 +132,9 @@ The following checks run when a PR is opened:

This repository contains the source code for Terraform CLI, which is the main component of Terraform that contains the core Terraform engine.

The HashiCorp-maintained Terraform providers are also open source but are not in this repository; instead, they are each in their own repository in [the `terraform-providers` organization](https://github.com/terraform-providers) on GitHub.
Terraform providers are not maintained in this repository; you can find relevant
repository and relevant issue tracker for each provider within the
[Terraform Registry index](https://registry.terraform.io/browse/providers).

This repository also does not include the source code for some other parts of the Terraform product including Terraform Cloud, Terraform Enterprise, and the Terraform Registry. Those components are not open source, though if you have feedback about them (including bug reports) please do feel free to [open a GitHub issue on this repository](https://github.com/hashicorp/terraform/issues/new/choose).
@@ -4,8 +4,8 @@ contact_links:
url: https://support.hashicorp.com/hc/en-us/requests/new
about: For issues and feature requests related to the Terraform Cloud/Enterprise platform, please submit a HashiCorp support request or email tf-cloud@hashicorp.support
- name: Provider-related Feedback and Questions
url: https://github.com/terraform-providers
about: Each provider (e.g. AWS, Azure, GCP, Oracle, K8S, etc.) has its own repository, any provider related issues or questions should be directed to appropriate provider repository.
url: https://registry.terraform.io/browse/providers
about: Each provider (e.g. AWS, Azure, GCP, Oracle, K8S, etc.) has its own repository, any provider related issues or questions should be directed to the appropriate issue tracker linked from the Registry.
- name: Provider Development Feedback and Questions
url: https://github.com/hashicorp/terraform-plugin-sdk/issues/new/choose
about: Plugin SDK has its own repository, any SDK and provider development related issues or questions should be directed there.
@@ -1,9 +1,6 @@
*.dll
*.exe
.DS_Store
example.tf
terraform.tfplan
terraform.tfstate
bin/
modules-dev/
/pkg/
@@ -13,9 +10,6 @@ website/build
website/node_modules
.vagrant/
*.backup
./*.tfstate
.terraform/
*.log
*.bak
*~
.*.swp
@@ -27,9 +21,5 @@ website/node_modules
website/vendor
vendor/

# Test exclusions
!command/testdata/**/*.tfstate
!command/testdata/**/.terraform/

# Coverage
coverage.txt
@@ -1 +1 @@
1.16.4
1.17.2
CHANGELOG.md
@@ -1,17 +1,33 @@
## 1.1.0 (Unreleased)

UPGRADE NOTES:

* Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
* The `terraform graph` command no longer supports `-type=validate` and `-type=eval` options. The validate graph is always the same as the plan graph anyway, and the "eval" graph was just an implementation detail of the `terraform console` command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph modes. (Please note that `terraform graph` is not covered by the Terraform v1.0 compatibility promises, because its behavior inherently exposes Terraform Core implementation details, so we recommend it only for interactive debugging tasks and not for use in automation.)
* `terraform apply` with a previously-saved plan file will now verify that the provider plugin packages used to create the plan fully match the ones used during apply, using the same checksum scheme that Terraform normally uses for the dependency lock file. Previously Terraform was checking consistency of plugins from a plan file using a legacy mechanism which covered only the main plugin executable, not any other files that might be distributed alongside in the plugin package.

    This additional check should not affect typical plugins that conform to the expectation that a plugin package's contents are immutable once released, but may affect a hypothetical in-house plugin that intentionally modifies extra files in its package directory somehow between plan and apply. If you have such a plugin, you'll need to change its approach to store those files in some other location separate from the package directory. This is a minor compatibility break motivated by increasing the assurance that plugins have not been inadvertently or maliciously modified between plan and apply.

NEW FEATURES:

* cli: `terraform add` generates resource configuration templates ([#28874](https://github.com/hashicorp/terraform/issues/28874))
* config: a new `type()` function, only available in `terraform console` ([#28501](https://github.com/hashicorp/terraform/issues/28501))
* `terraform plan` and `terraform apply`: When Terraform plans to destroy a resource instance due to it no longer being declared in the configuration, the proposed plan output will now include a note hinting at what situation prompted that proposal, so you can more easily see what configuration change might avoid the object being destroyed. ([#29637](https://github.com/hashicorp/terraform/pull/29637))
* `terraform plan` and `terraform apply`: When Terraform automatically moves a singleton resource instance to index zero or vice-versa in response to adding or removing `count`, it'll report explicitly that it did so as part of the plan output. ([#29605](https://github.com/hashicorp/terraform/pull/29605))
* `terraform add`: The (currently-experimental) `terraform add` generates a starting point for a particular resource configuration. ([#28874](https://github.com/hashicorp/terraform/issues/28874))
* config: a new `type()` function, available only in `terraform console`. ([#28501](https://github.com/hashicorp/terraform/issues/28501))

ENHANCEMENTS:

* config: Terraform now checks the syntax of and normalizes module source addresses (the `source` argument in `module` blocks) during configuration decoding rather than only at module installation time. This is largely just an internal refactoring, but a visible benefit of this change is that the `terraform init` messages about module downloading will now show the canonical module package address Terraform is downloading from, after interpreting the special shorthands for common cases like GitHub URLs. ([#28854](https://github.com/hashicorp/terraform/issues/28854))
* `terraform plan` and `terraform apply`: Terraform will now report explicitly in the UI if it automatically moves a resource instance to a new address as a result of adding or removing the `count` argument from an existing resource. For example, if you previously had `resource "aws_subnet" "example"` _without_ `count`, you might have `aws_subnet.example` already bound to a remote object in your state. If you add `count = 1` to that resource then Terraform would previously silently rebind the object to `aws_subnet.example[0]` as part of planning, whereas now Terraform will mention that it did so explicitly in the plan description. ([#29605](https://github.com/hashicorp/terraform/issues/29605))
* `terraform workspace delete`: will now allow deleting a workspace whose state contains only data resource instances and output values, without running `terraform destroy` first. Previously the presence of data resources would require using `-force` to override the safety check guarding against accidentally forgetting about remote objects, but a data resource is not responsible for the management of its associated remote object(s) anyway. [GH-29754]
* provisioner/remote-exec and provisioner/file: When using SSH agent authentication mode on Windows, Terraform can now detect and use [the Windows 10 built-in OpenSSH Client](https://devblogs.microsoft.com/powershell/using-the-openssh-beta-in-windows-10-fall-creators-update-and-windows-server-1709/)'s SSH Agent, when available, in addition to the existing support for the third-party solution [Pageant](https://documentation.help/PuTTY/pageant.html) that was already supported. [GH-29747]

BUG FIXES:

* core: Fixed an issue where provider configuration input variables were not properly merging with values in configuration ([#29000](https://github.com/hashicorp/terraform/issues/29000))
* core: Reduce scope of dependencies that may defer reading of data sources when using `depends_on` or directly referencing managed resources [GH-29682]
* cli: Blocks using SchemaConfigModeAttr in the provider SDK can now be represented in the plan json output ([#29522](https://github.com/hashicorp/terraform/issues/29522))
* cli: Prevent applying a stale planfile when there was no previous state [GH-29755]

## Previous Releases
Makefile
@@ -11,16 +11,21 @@ generate:
# Terraform do not involve changing protobuf files and protoc is not a
# go-gettable dependency and so getting it installed can be inconvenient.
#
# If you are working on changes to protobuf interfaces you may either use
# this target or run the individual scripts below directly.
# If you are working on changes to protobuf interfaces, run this Makefile
# target to be sure to regenerate all of the protobuf stubs using the expected
# versions of protoc and the protoc Go plugins.
protobuf:
bash scripts/protobuf-check.sh
bash internal/tfplugin5/generate.sh
bash internal/plans/internal/planproto/generate.sh
go run ./tools/protobuf-compile .

fmtcheck:
@sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'"

staticcheck:
@sh -c "'$(CURDIR)/scripts/staticcheck.sh'"

exhaustive:
@sh -c "'$(CURDIR)/scripts/exhaustive.sh'"

website:
ifeq (,$(wildcard $(GOPATH)/src/$(WEBSITE_REPO)))
echo "$(WEBSITE_REPO) not found in your GOPATH (necessary for layouts and assets), get-ting..."

@@ -47,4 +52,4 @@ endif
# under parallel conditions.
.NOTPARALLEL:

.PHONY: fmtcheck generate protobuf website website-test
.PHONY: fmtcheck generate protobuf website website-test staticcheck
@@ -7,7 +7,7 @@ Terraform
- Tutorials: [HashiCorp's Learn Platform](https://learn.hashicorp.com/terraform)
- Certification Exam: [HashiCorp Certified: Terraform Associate](https://www.hashicorp.com/certification/#hashicorp-certified-terraform-associate)

<img alt="Terraform" src="https://www.terraform.io/assets/images/logo-hashicorp-3f10732f.svg" width="600px">
<img alt="Terraform" src="https://www.datocms-assets.com/2885/1629941242-logo-terraform-main.svg" width="600px">

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
@@ -77,10 +77,10 @@ func initCommands(
configDir = "" // No config dir available (e.g. looking up a home directory failed)
}

dataDir := os.Getenv("TF_DATA_DIR")
wd := WorkingDir(originalWorkingDir, os.Getenv("TF_DATA_DIR"))

meta := command.Meta{
OriginalWorkingDir: originalWorkingDir,
WorkingDir: wd,
Streams: streams,
View: views.NewView(streams).SetRunningInAutomation(inAutomation),

@@ -94,7 +94,6 @@ func initCommands(
RunningInAutomation: inAutomation,
CLIConfigDir: configDir,
PluginCacheDir: config.PluginCacheDir,
OverrideDataDir: dataDir,

ShutdownCh: makeShutdownCh(),
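The hunks above replace the bare `dataDir := os.Getenv("TF_DATA_DIR")` lookup with a `WorkingDir(...)` value that bundles the original working directory with the optional data-dir override. As a rough illustration of that pattern only — the type name, fields, and method below are hypothetical stand-ins, not Terraform's actual implementation — a minimal sketch might look like:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// wdDir is a hypothetical stand-in for the value returned by WorkingDir in
// the diff above: it bundles the process's original working directory with an
// optional override for the data directory.
type wdDir struct {
	originalDir     string
	overrideDataDir string
}

// newWorkingDir mirrors the call shape WorkingDir(originalWorkingDir,
// os.Getenv("TF_DATA_DIR")) shown above; an empty override means "use the default".
func newWorkingDir(originalDir, overrideDataDir string) *wdDir {
	return &wdDir{originalDir: originalDir, overrideDataDir: overrideDataDir}
}

// DataDir prefers the override (e.g. from TF_DATA_DIR) and otherwise falls
// back to a ".terraform" directory under the original working directory.
func (d *wdDir) DataDir() string {
	if d.overrideDataDir != "" {
		return d.overrideDataDir
	}
	return filepath.Join(d.originalDir, ".terraform")
}

func main() {
	cwd, _ := os.Getwd()
	wd := newWorkingDir(cwd, os.Getenv("TF_DATA_DIR"))
	fmt.Println("data dir:", wd.DataDir())
}
```

The point of the refactor, as visible in the diff, is that the working-directory logic travels as one value (`WorkingDir: wd`) into `command.Meta` instead of as separate raw strings.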
@@ -0,0 +1,53 @@
# Releasing a New Version of the Protocol

Terraform's plugin protocol is the contract between Terraform's plugins and
Terraform, and as such releasing a new version requires some coordination
between those pieces. This document is intended to be a checklist to consult
when adding a new major version of the protocol (X in X.Y) to ensure that
everything that needs to be is aware of it.

## New Protobuf File

The protocol is defined in protobuf files that live in the hashicorp/terraform
repository. Adding a new version of the protocol involves creating a new
`.proto` file in that directory. It is recommended that you copy the latest
protocol file, and modify it accordingly.

## New terraform-plugin-go Package

The
[hashicorp/terraform-plugin-go](https://github.com/hashicorp/terraform-plugin-go)
repository serves as the foundation for Terraform's plugin ecosystem. It needs
to know about the new major protocol version. Either open an issue in that repo
to have the Plugin SDK team add the new package, or if you would like to
contribute it yourself, open a PR. It is recommended that you copy the package
for the latest protocol version and modify it accordingly.

## Update the Registry's List of Allowed Versions

The Terraform Registry validates the protocol versions a provider advertises
support for when ingesting providers. Providers will not be able to advertise
support for the new protocol version until it is added to that list.

## Update Terraform's Version Constraints

Terraform only downloads providers that speak protocol versions it is
compatible with from the Registry during `terraform init`. When adding support
for a new protocol, you need to tell Terraform it knows that protocol version.
Modify the `SupportedPluginProtocols` variable in hashicorp/terraform's
`internal/getproviders/registry_client.go` file to include the new protocol.
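To make the intent of that step concrete, here is a minimal, self-contained sketch of the kind of compatibility check the section describes: the CLI comparing the protocol versions a provider release advertises against the set it supports. The names (`supportedPluginProtocols`, `providerIsUsable`) and the plain-slice representation are illustrative assumptions, not the actual contents of `internal/getproviders/registry_client.go`, whose representation may differ.

```go
package main

import "fmt"

// supportedPluginProtocols is an illustrative stand-in for the
// SupportedPluginProtocols value mentioned above: the plugin protocol major
// versions this build of the CLI can speak. Releasing protocol 6 would mean
// adding 6 here (treat this slice as a stand-in; the real variable's
// representation may differ).
var supportedPluginProtocols = []int{5, 6}

// providerIsUsable reports whether any protocol version advertised by a
// provider release overlaps with the versions the CLI supports; during
// `terraform init` a release with no overlap would be skipped.
func providerIsUsable(advertised []int) bool {
	for _, a := range advertised {
		for _, s := range supportedPluginProtocols {
			if a == s {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(providerIsUsable([]int{6})) // true: protocol 6 is now allowed
	fmt.Println(providerIsUsable([]int{4})) // false: too old for this CLI
}
```

Conceptually the same overlap test is what the Registry performs on ingest (previous section), which is why both lists have to be updated before providers built against the new protocol become installable.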
## Test Running a Provider With the Test Framework

Use the provider test framework to test a provider written with the new
protocol. This end-to-end test ensures that providers written with the new
protocol work correctly with the test framework, especially in communicating
the protocol version between the test framework and Terraform.

## Test Retrieving and Running a Provider From the Registry

Publish a provider, either to the public registry or to the staging registry,
and test running `terraform init` and `terraform apply`, along with exercising
any of the new functionality the protocol version introduces. This end-to-end
test ensures that all the pieces needing to be updated before practitioners can
use providers built with the new protocol have been updated.
go.mod
@@ -4,10 +4,7 @@ require (
cloud.google.com/go/storage v1.10.0
github.com/Azure/azure-sdk-for-go v52.5.0+incompatible
github.com/Azure/go-autorest/autorest v0.11.18
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c // indirect
github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d // indirect
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect
github.com/agext/levenshtein v1.2.2
github.com/agext/levenshtein v1.2.3
github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190329064014-6e358769c32a
github.com/aliyun/aliyun-oss-go-sdk v0.0.0-20190103054945-8205d1f41e70
github.com/aliyun/aliyun-tablestore-go-sdk v4.1.2+incompatible
@@ -17,37 +14,31 @@ require (
github.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13
github.com/apparentlymart/go-versions v1.0.1
github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2
github.com/aws/aws-sdk-go v1.37.0
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect
github.com/aws/aws-sdk-go v1.40.25
github.com/bgentry/speakeasy v0.1.0
github.com/bmatcuk/doublestar v1.1.5
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d // indirect
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
github.com/davecgh/go-spew v1.1.1
github.com/dylanmei/iso8601 v0.1.0 // indirect
github.com/dylanmei/winrmtest v0.0.0-20190225150635-99b7fe2fddf1
github.com/go-test/deep v1.0.3
github.com/gofrs/uuid v3.3.0+incompatible // indirect
github.com/golang/mock v1.5.0
github.com/golang/protobuf v1.4.3
github.com/golang/protobuf v1.5.2
github.com/google/go-cmp v0.5.5
github.com/google/uuid v1.2.0
github.com/gophercloud/gophercloud v0.10.1-0.20200424014253-c3bfe50899e5
github.com/gophercloud/utils v0.0.0-20200423144003-7c72efc7435d
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect
github.com/hashicorp/aws-sdk-go-base v0.6.0
github.com/hashicorp/aws-sdk-go-base v0.7.1
github.com/hashicorp/consul/api v1.9.1
github.com/hashicorp/consul/sdk v0.8.0
github.com/hashicorp/errwrap v1.1.0
github.com/hashicorp/go-azure-helpers v0.14.0
github.com/hashicorp/go-checkpoint v0.5.0
github.com/hashicorp/go-cleanhttp v0.5.1
github.com/hashicorp/go-cleanhttp v0.5.2
github.com/hashicorp/go-getter v1.5.2
github.com/hashicorp/go-hclog v0.15.0
github.com/hashicorp/go-msgpack v0.5.4 // indirect
github.com/hashicorp/go-multierror v1.1.1
github.com/hashicorp/go-plugin v1.4.1
github.com/hashicorp/go-plugin v1.4.3
github.com/hashicorp/go-retryablehttp v0.5.2
github.com/hashicorp/go-tfe v0.15.0
github.com/hashicorp/go-uuid v1.0.1
@ -56,67 +47,155 @@ require (
|
|||
github.com/hashicorp/hcl/v2 v2.10.1
|
||||
github.com/hashicorp/terraform-config-inspect v0.0.0-20210209133302-4fd17a0faac2
|
||||
github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect
|
||||
github.com/jmespath/go-jmespath v0.4.0
|
||||
github.com/joyent/triton-go v0.0.0-20180313100802-d8f9c0314926
|
||||
github.com/jtolds/gls v4.2.1+incompatible // indirect
|
||||
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0
|
||||
github.com/lib/pq v1.8.0
|
||||
github.com/likexian/gokit v0.20.15
|
||||
github.com/lib/pq v1.10.3
|
||||
github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82
|
||||
github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786 // indirect
|
||||
github.com/masterzen/winrm v0.0.0-20200615185753-c42b5136ff88
|
||||
github.com/mattn/go-isatty v0.0.12
|
||||
github.com/mattn/go-shellwords v1.0.4
|
||||
github.com/mitchellh/cli v1.1.2
|
||||
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
|
||||
github.com/mitchellh/copystructure v1.0.0
|
||||
github.com/mitchellh/copystructure v1.2.0
|
||||
github.com/mitchellh/go-homedir v1.1.0
|
||||
github.com/mitchellh/go-linereader v0.0.0-20190213213312-1b945b3263eb
|
||||
github.com/mitchellh/go-wordwrap v1.0.0
|
||||
github.com/mitchellh/go-wordwrap v1.0.1
|
||||
github.com/mitchellh/gox v1.0.1
|
||||
github.com/mitchellh/mapstructure v1.1.2
|
||||
github.com/mitchellh/panicwrap v1.0.0
|
||||
github.com/mitchellh/reflectwalk v1.0.1
|
||||
github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect
|
||||
github.com/mitchellh/reflectwalk v1.0.2
|
||||
github.com/nishanths/exhaustive v0.2.3
|
||||
github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db
|
||||
github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23
|
||||
github.com/pkg/errors v0.9.1
|
||||
github.com/posener/complete v1.2.3
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d // indirect
|
||||
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect
|
||||
github.com/spf13/afero v1.2.2
|
||||
github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible
|
||||
github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233
|
||||
github.com/tencentyun/cos-go-sdk-v5 v0.7.29
|
||||
github.com/tombuildsstuff/giovanni v0.15.1
|
||||
github.com/xanzy/ssh-agent v0.2.1
|
||||
github.com/xanzy/ssh-agent v0.3.1
|
||||
github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557
|
||||
github.com/zclconf/go-cty v1.9.0
|
||||
github.com/zclconf/go-cty v1.9.1
|
||||
github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b
|
||||
github.com/zclconf/go-cty-yaml v1.0.2
|
||||
go.etcd.io/etcd v0.5.0-alpha.5.0.20210428180535-15715dcf1ace
|
||||
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2
|
||||
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97
|
||||
golang.org/x/mod v0.4.2
|
||||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110
|
||||
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d
|
||||
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84
|
||||
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57
|
||||
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e
|
||||
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf
|
||||
golang.org/x/text v0.3.5
|
||||
golang.org/x/tools v0.1.0
|
||||
golang.org/x/text v0.3.6
|
||||
golang.org/x/tools v0.1.7
|
||||
google.golang.org/api v0.44.0-impersonate-preview
|
||||
google.golang.org/grpc v1.36.0
|
||||
google.golang.org/protobuf v1.25.0
|
||||
gopkg.in/ini.v1 v1.42.0 // indirect
|
||||
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0
|
||||
google.golang.org/protobuf v1.27.1
|
||||
honnef.co/go/tools v0.3.0-0.dev
|
||||
k8s.io/api v0.0.0-20190620084959-7cf5895f2711
|
||||
k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655
|
||||
k8s.io/client-go v10.0.0+incompatible
|
||||
k8s.io/utils v0.0.0-20200411171748-3d5a2fe318e4
|
||||
)
|
||||
|
||||
require (
|
||||
cloud.google.com/go v0.79.0 // indirect
|
||||
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
|
||||
github.com/Azure/go-autorest/autorest/adal v0.9.13 // indirect
|
||||
github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 // indirect
|
||||
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
|
||||
github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
|
||||
github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
|
||||
github.com/Azure/go-autorest/logger v0.2.1 // indirect
|
||||
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
|
||||
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c // indirect
|
||||
github.com/BurntSushi/toml v0.3.1 // indirect
|
||||
github.com/ChrisTrenkamp/goxpath v0.0.0-20190607011252-c5096ec8773d // indirect
|
||||
github.com/Masterminds/goutils v1.1.0 // indirect
|
||||
github.com/Masterminds/semver v1.5.0 // indirect
|
||||
github.com/Masterminds/sprig v2.22.0+incompatible // indirect
|
||||
github.com/Microsoft/go-winio v0.5.0 // indirect
|
||||
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af // indirect
|
||||
github.com/antchfx/xpath v0.0.0-20190129040759-c8489ed3251e // indirect
|
||||
github.com/antchfx/xquery v0.0.0-20180515051857-ad5b8c7a47b0 // indirect
|
||||
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
|
||||
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da // indirect
|
||||
github.com/armon/go-radix v1.0.0 // indirect
|
||||
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect
|
||||
github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
|
||||
github.com/coreos/go-semver v0.2.0 // indirect
|
||||
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d // indirect
|
||||
github.com/dimchansky/utfbom v1.1.1 // indirect
|
||||
github.com/dylanmei/iso8601 v0.1.0 // indirect
|
||||
github.com/fatih/color v1.9.0 // indirect
|
||||
github.com/form3tech-oss/jwt-go v3.2.2+incompatible // indirect
|
||||
github.com/gofrs/uuid v3.3.0+incompatible // indirect
|
||||
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d // indirect
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e // indirect
|
||||
github.com/google/go-querystring v1.1.0 // indirect
|
||||
github.com/google/gofuzz v1.0.0 // indirect
|
||||
github.com/googleapis/gax-go/v2 v2.0.5 // indirect
|
||||
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d // indirect
|
||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 // indirect
|
||||
github.com/hashicorp/go-immutable-radix v1.0.0 // indirect
|
||||
github.com/hashicorp/go-msgpack v0.5.4 // indirect
|
||||
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
|
||||
github.com/hashicorp/go-safetemp v1.0.0 // indirect
|
||||
github.com/hashicorp/go-slug v0.4.1 // indirect
|
||||
github.com/hashicorp/golang-lru v0.5.1 // indirect
|
||||
github.com/hashicorp/jsonapi v0.0.0-20210518035559-1e50d74c8db3 // indirect
|
||||
github.com/hashicorp/serf v0.9.5 // indirect
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect
|
||||
github.com/huandu/xstrings v1.3.2 // indirect
|
||||
github.com/imdario/mergo v0.3.11 // indirect
|
||||
github.com/json-iterator/go v1.1.7 // indirect
|
||||
github.com/jstemmer/go-junit-report v0.9.1 // indirect
|
||||
github.com/jtolds/gls v4.2.1+incompatible // indirect
|
||||
github.com/klauspost/compress v1.11.2 // indirect
|
||||
github.com/masterzen/simplexml v0.0.0-20190410153822-31eea3082786 // indirect
|
||||
github.com/mattn/go-colorable v0.1.6 // indirect
|
||||
github.com/mitchellh/go-testing-interface v1.0.0 // indirect
|
||||
github.com/mitchellh/iochan v1.0.0 // indirect
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
|
||||
github.com/modern-go/reflect2 v1.0.1 // indirect
|
||||
github.com/mozillazg/go-httpheader v0.3.0 // indirect
|
||||
github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect
|
||||
github.com/oklog/run v1.0.0 // indirect
|
||||
github.com/satori/go.uuid v1.2.0 // indirect
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d // indirect
|
||||
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect
|
||||
github.com/spf13/pflag v1.0.3 // indirect
|
||||
github.com/ulikunitz/xz v0.5.8 // indirect
|
||||
github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect
|
||||
github.com/vmihailenco/tagparser v0.1.1 // indirect
|
||||
go.opencensus.io v0.23.0 // indirect
|
||||
go.uber.org/atomic v1.3.2 // indirect
|
||||
go.uber.org/multierr v1.1.0 // indirect
|
||||
go.uber.org/zap v1.10.0 // indirect
|
||||
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5 // indirect
|
||||
golang.org/x/time v0.0.0-20191024005414-555d28b269f0 // indirect
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6 // indirect
|
||||
gopkg.in/inf.v0 v0.9.0 // indirect
|
||||
gopkg.in/ini.v1 v1.42.0 // indirect
|
||||
gopkg.in/yaml.v2 v2.3.0 // indirect
|
||||
k8s.io/klog v0.4.0 // indirect
|
||||
sigs.k8s.io/yaml v1.1.0 // indirect
|
||||
)
|
||||
|
||||
replace google.golang.org/grpc v1.36.0 => google.golang.org/grpc v1.27.1
|
||||
|
||||
replace github.com/golang/mock v1.5.0 => github.com/golang/mock v1.4.4
|
||||
|
||||
replace k8s.io/client-go => k8s.io/client-go v0.0.0-20190620085101-78d2af792bab
|
||||
|
||||
go 1.14
|
||||
// github.com/dgrijalva/jwt-go is no longer maintained but is an indirect
|
||||
// dependency of the old etcdv2 backend, and so we need to keep this working
|
||||
// until that backend is removed. github.com/golang-jwt/jwt/v3 is a drop-in
|
||||
// replacement that includes a fix for CVE-2020-26160.
|
||||
replace github.com/dgrijalva/jwt-go => github.com/golang-jwt/jwt v3.2.1+incompatible
|
||||
|
||||
go 1.17
|
||||
|
|
go.sum
@ -74,6 +74,7 @@ github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBp
|
|||
github.com/Azure/go-ntlmssp v0.0.0-20180810175552-4a21cbd618b4/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
|
||||
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c h1:/IBSNwUN8+eKzUzbJPqhK839ygXJ82sde8x3ogr6R28=
|
||||
github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c/go.mod h1:chxPXzSsl7ZWRAuOIE23GDNzjWuZquvFlgA8xmpunjU=
|
||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
||||
github.com/ChrisTrenkamp/goxpath v0.0.0-20170922090931-c385f95c6022/go.mod h1:nuWgzSkT5PnyOd+272uUmV0dnAnAn42Mk7PiQC5VzN4=
|
||||
|
@ -85,6 +86,8 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q
|
|||
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
|
||||
github.com/Masterminds/sprig v2.22.0+incompatible h1:z4yfnGrZ7netVz+0EDJ0Wi+5VZCSYp4Z0m2dk6cEM60=
|
||||
github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
|
||||
github.com/Microsoft/go-winio v0.5.0 h1:Elr9Wn+sGKPlkaBvwu4mTrxtmOp3F3yV9qhaHbXGjwU=
|
||||
github.com/Microsoft/go-winio v0.5.0/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
|
||||
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
|
||||
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
|
||||
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
|
||||
|
@ -92,8 +95,9 @@ github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mo
|
|||
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af h1:DBNMBMuMiWYu0b+8KMJuWmfCkcxl09JwdlqwDZZ6U14=
|
||||
github.com/abdullin/seq v0.0.0-20160510034733-d5467c17e7af/go.mod h1:5Jv4cbFiHJMsVxt52+i0Ha45fjshj6wxYr1r19tB9bw=
|
||||
github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/agext/levenshtein v1.2.2 h1:0S/Yg6LYmFJ5stwQeRp6EeOcCbj7xiqQSdNelsXvaqE=
|
||||
github.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
|
||||
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/aliyun/alibaba-cloud-sdk-go v0.0.0-20190329064014-6e358769c32a h1:APorzFpCcv6wtD5vmRWYqNm4N55kbepL7c7kTq9XI6A=
|
||||
|
@ -131,8 +135,8 @@ github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI=
|
|||
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
|
||||
github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM=
|
||||
github.com/aws/aws-sdk-go v1.31.9/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
|
||||
github.com/aws/aws-sdk-go v1.37.0 h1:GzFnhOIsrGyQ69s7VgqtrG2BG8v7X7vwB3Xpbd/DBBk=
|
||||
github.com/aws/aws-sdk-go v1.37.0/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
|
||||
github.com/aws/aws-sdk-go v1.40.25 h1:Depnx7O86HWgOCLD5nMto6F9Ju85Q1QuFDnbpZYQWno=
|
||||
github.com/aws/aws-sdk-go v1.40.25/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q=
|
||||
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f h1:ZNv7On9kyUzm7fvRZumSyy/IUiSC7AzL0I1jKKtwooA=
|
||||
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
|
@ -171,9 +175,6 @@ github.com/davecgh/go-spew v0.0.0-20151105211317-5215b55f46b2/go.mod h1:J7Y8YcW2
|
|||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/dgrijalva/jwt-go v0.0.0-20160705203006-01aeca54ebda/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
|
||||
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
|
||||
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
|
||||
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
|
||||
github.com/dimchansky/utfbom v1.1.1 h1:vV6w1AhK4VMnhBno/TPVCoK9U/LP0PkLCS9tbxHdi/U=
|
||||
github.com/dimchansky/utfbom v1.1.1/go.mod h1:SxdoEBH5qIqFocHMyGOXVAybYJdr71b1Q/j0mACtrfE=
|
||||
|
@ -225,6 +226,8 @@ github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7a
|
|||
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
|
||||
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d h1:3PaI8p3seN09VjbTYC/QWlUZdZ1qS1zGjy7LH2Wt07I=
|
||||
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang-jwt/jwt v3.2.1+incompatible h1:73Z+4BJcrTC+KczS6WvTPvRGOp1WmfEP4Q1lOd9Z/+c=
|
||||
github.com/golang-jwt/jwt v3.2.1+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
|
@ -255,8 +258,10 @@ github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:W
|
|||
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
|
||||
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
|
||||
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
|
||||
github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
|
||||
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
|
||||
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
||||
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
|
||||
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
|
||||
github.com/google/btree v0.0.0-20160524151835-7d79101e329e/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
|
||||
|
@ -273,8 +278,9 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
|
|||
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
|
||||
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=
|
||||
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
|
||||
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
|
||||
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
|
||||
github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
|
||||
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
|
||||
github.com/google/gofuzz v1.0.0 h1:A8PeW59pxE9IoFRqBp37U+mSNaQoZ46F1f0f863XSXw=
|
||||
|
@ -323,8 +329,8 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92Bcuy
|
|||
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.9.5 h1:UImYN5qQ8tuGpGE16ZmjvcTtTw24zw1QAp/SlnNrZhI=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
|
||||
github.com/hashicorp/aws-sdk-go-base v0.6.0 h1:qmUbzM36msbBF59YctwuO5w0M2oNXjlilgKpnEhx1uw=
|
||||
github.com/hashicorp/aws-sdk-go-base v0.6.0/go.mod h1:2fRjWDv3jJBeN6mVWFHV6hFTNeFBx2gpDLQaZNxUVAY=
|
||||
github.com/hashicorp/aws-sdk-go-base v0.7.1 h1:7s/aR3hFn74tYPVihzDyZe7y/+BorN70rr9ZvpV3j3o=
|
||||
github.com/hashicorp/aws-sdk-go-base v0.7.1/go.mod h1:2fRjWDv3jJBeN6mVWFHV6hFTNeFBx2gpDLQaZNxUVAY=
|
||||
github.com/hashicorp/consul/api v1.9.1 h1:SngrdG2L62qqLsUz85qcPhFZ78rPf8tcD5qjMgs6MME=
|
||||
github.com/hashicorp/consul/api v1.9.1/go.mod h1:XjsvQN+RJGWI2TWy1/kqaE16HrR2J/FWgkYjdZQsX9M=
|
||||
github.com/hashicorp/consul/sdk v0.8.0 h1:OJtKBtEjboEZvG6AOUdh4Z1Zbyu0WcxQ0qatRrZHTVU=
|
||||
|
@ -338,8 +344,9 @@ github.com/hashicorp/go-azure-helpers v0.14.0/go.mod h1:kR7+sTDEb9TOp/O80ss1UEJg
|
|||
github.com/hashicorp/go-checkpoint v0.5.0 h1:MFYpPZCnQqQTE18jFwSII6eUQrD/oxMFp3mlgcqk5mU=
|
||||
github.com/hashicorp/go-checkpoint v0.5.0/go.mod h1:7nfLNL10NsxqO4iWuW6tWW0HjZuDrwkBuEQsVcpCOgg=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
|
||||
github.com/hashicorp/go-getter v1.5.2 h1:XDo8LiAcDisiqZdv0TKgz+HtX3WN7zA2JD1R1tjsabE=
|
||||
github.com/hashicorp/go-getter v1.5.2/go.mod h1:orNH3BTYLu/fIxGIdLjLoAJHWMDQ/UKQr5O4m3iBuoo=
|
||||
github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
|
||||
|
@ -355,8 +362,8 @@ github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHh
|
|||
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
|
||||
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
|
||||
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
|
||||
github.com/hashicorp/go-plugin v1.4.1 h1:6UltRQlLN9iZO513VveELp5xyaFxVD2+1OVylE+2E+w=
|
||||
github.com/hashicorp/go-plugin v1.4.1/go.mod h1:5fGEH17QVwTTcR0zV7yhDPLLmFX9YSZ38b18Udy6vYQ=
|
||||
github.com/hashicorp/go-plugin v1.4.3 h1:DXmvivbWD5qdiBts9TpBC7BYL1Aia5sxbRgQB+v6UZM=
|
||||
github.com/hashicorp/go-plugin v1.4.3/go.mod h1:5fGEH17QVwTTcR0zV7yhDPLLmFX9YSZ38b18Udy6vYQ=
|
||||
github.com/hashicorp/go-retryablehttp v0.5.2 h1:AoISa4P4IsW0/m4T6St8Yw38gTl5GtBAgfkhYh1xAz4=
|
||||
github.com/hashicorp/go-retryablehttp v0.5.2/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
|
||||
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
|
||||
|
@ -440,7 +447,6 @@ github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQL
|
|||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.11.2 h1:MiK62aErc3gIiVEtyzKfeOHgW7atJb5g/KNX5m3c2nQ=
|
||||
github.com/klauspost/compress v1.11.2/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
|
@ -452,16 +458,8 @@ github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
|||
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
|
||||
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
|
||||
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
|
||||
github.com/lib/pq v1.8.0 h1:9xohqzkUwzR4Ga4ivdTcawVS89YSDVxXMa3xJX3cGzg=
|
||||
github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
|
||||
github.com/likexian/gokit v0.0.0-20190309162924-0a377eecf7aa/go.mod h1:QdfYv6y6qPA9pbBA2qXtoT8BMKha6UyNbxWGWl/9Jfk=
|
||||
github.com/likexian/gokit v0.0.0-20190418170008-ace88ad0983b/go.mod h1:KKqSnk/VVSW8kEyO2vVCXoanzEutKdlBAPohmGXkxCk=
|
||||
github.com/likexian/gokit v0.0.0-20190501133040-e77ea8b19cdc/go.mod h1:3kvONayqCaj+UgrRZGpgfXzHdMYCAO0KAt4/8n0L57Y=
|
||||
github.com/likexian/gokit v0.20.15 h1:DgtIqqTRFqtbiLJFzuRESwVrxWxfs8OlY6hnPYBa3BM=
|
||||
github.com/likexian/gokit v0.20.15/go.mod h1:kn+nTv3tqh6yhor9BC4Lfiu58SmH8NmQ2PmEl+uM6nU=
|
||||
github.com/likexian/simplejson-go v0.0.0-20190409170913-40473a74d76d/go.mod h1:Typ1BfnATYtZ/+/shXfFYLrovhFyuKvzwrdOnIDHlmg=
|
||||
github.com/likexian/simplejson-go v0.0.0-20190419151922-c1f9f0b4f084/go.mod h1:U4O1vIJvIKwbMZKUJ62lppfdvkCdVd2nfMimHK81eec=
|
||||
github.com/likexian/simplejson-go v0.0.0-20190502021454-d8787b4bfa0b/go.mod h1:3BWwtmKP9cXWwYCr5bkoVDEfLywacOv0s06OBEDpyt8=
|
||||
github.com/lib/pq v1.10.3 h1:v9QZf2Sn6AmjXtQeFpdoq/eaNtYP6IN+7lcrygsIAtg=
|
||||
github.com/lib/pq v1.10.3/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
|
||||
github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82 h1:wnfcqULT+N2seWf6y4yHzmi7GD2kNx4Ute0qArktD48=
|
||||
github.com/lusis/go-artifactory v0.0.0-20160115162124-7e4ce345df82/go.mod h1:y54tfGmO3NKssKveTEFFzH8C/akrSOy/iW9qEAUDV84=
|
||||
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
|
||||
|
@ -495,8 +493,9 @@ github.com/mitchellh/cli v1.1.2 h1:PvH+lL2B7IQ101xQL63Of8yFS2y+aDlsFcsqNc+u/Kw=
|
|||
github.com/mitchellh/cli v1.1.2/go.mod h1:6iaV0fGdElS6dPBx0EApTxHrcWvmJphyh2n8YBLPPZ4=
|
||||
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
|
||||
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
|
||||
github.com/mitchellh/copystructure v1.0.0 h1:Laisrj+bAB6b/yJwB5Bt3ITZhGJdqmxquMKeZ+mmkFQ=
|
||||
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
|
||||
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
|
||||
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
|
||||
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
||||
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
|
||||
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
|
||||
|
@ -506,8 +505,9 @@ github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.
|
|||
github.com/mitchellh/go-testing-interface v1.0.0 h1:fzU/JVNcaqHQEcVFAKeR41fkiLdIPrefOvVG1VZ96U0=
|
||||
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
|
||||
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
|
||||
github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4=
|
||||
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
|
||||
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
|
||||
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
|
||||
github.com/mitchellh/gox v1.0.1 h1:x0jD3dcHk9a9xPSDN6YEL4xL6Qz0dvNYm8yZqui5chI=
|
||||
github.com/mitchellh/gox v1.0.1/go.mod h1:ED6BioOGXMswlXa2zxfh/xdd5QhwYliBFn9V18Ap4z4=
|
||||
github.com/mitchellh/iochan v1.0.0 h1:C+X3KsSTLFVBr/tK1eYN/vs4rJcvsiLU338UhYPJWeY=
|
||||
|
@ -518,8 +518,8 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
|
|||
github.com/mitchellh/panicwrap v1.0.0 h1:67zIyVakCIvcs69A0FGfZjBdPleaonSgGlXRSRlb6fE=
|
||||
github.com/mitchellh/panicwrap v1.0.0/go.mod h1:pKvZHwWrZowLUzftuFq7coarnxbBXU4aQh3N0BJOeeA=
|
||||
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
|
||||
github.com/mitchellh/reflectwalk v1.0.1 h1:FVzMWA5RllMAKIdUSC8mdWo3XtwoecrH79BY70sEEpE=
|
||||
github.com/mitchellh/reflectwalk v1.0.1/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
|
||||
github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
|
||||
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
|
@ -527,11 +527,14 @@ github.com/modern-go/reflect2 v0.0.0-20180320133207-05fbef0ca5da/go.mod h1:bx2lN
|
|||
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
|
||||
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
github.com/mozillazg/go-httpheader v0.2.1 h1:geV7TrjbL8KXSyvghnFm+NyTux/hxwueTSrwhe88TQQ=
|
||||
github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60=
|
||||
github.com/mozillazg/go-httpheader v0.3.0 h1:3brX5z8HTH+0RrNA1362Rc3HsaxyWEKtGY45YrhuINM=
|
||||
github.com/mozillazg/go-httpheader v0.3.0/go.mod h1:PuT8h0pw6efvp8ZeUec1Rs7dwjK08bt6gKSReGMqtdA=
|
||||
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
|
||||
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
|
||||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
|
||||
github.com/nishanths/exhaustive v0.2.3 h1:+ANTMqRNrqwInnP9aszg/0jDo+zbXa4x66U19Bx/oTk=
|
||||
github.com/nishanths/exhaustive v0.2.3/go.mod h1:bhIX678Nx8inLM9PbpvK1yv6oGtoP8BfaIeMzgBNKvc=
|
||||
github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d h1:VhgPp6v9qf9Agr/56bj7Y/xa04UccTW04VP0Qed4vnQ=
|
||||
github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d/go.mod h1:YUTz3bUH2ZwIWBy3CJBeOBEugqcmXREj14T+iG/4k4U=
|
||||
github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw=
|
||||
|
@ -582,8 +585,9 @@ github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg
|
|||
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
|
||||
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
|
||||
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
|
||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||
github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
|
||||
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d h1:zE9ykElWQ6/NYmHa3jpm/yHnI4xSofP+UP6SpjHcSeM=
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
|
||||
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a h1:JSvGDIbmil4Ui/dDdFBExb7/cmkNjyX5F97oglmvCDo=
|
||||
|
@ -608,10 +612,14 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P
|
|||
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
|
||||
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
|
||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible h1:5Td2b0yfaOvw9M9nZ5Oav6Li9bxUNxt4DgxMfIPpsa0=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go v3.0.82+incompatible/go.mod h1:0PfYow01SHPMhKY31xa+EFz2RStxIqj6JFAJS+IkCi4=
|
||||
github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c h1:iRD1CqtWUjgEVEmjwTMbP1DMzz1HRytOsgx/rlw/vNs=
|
||||
github.com/tencentyun/cos-go-sdk-v5 v0.0.0-20190808065407-f07404cefc8c/go.mod h1:wk2XFUg6egk4tSDNZtXeKfe2G6690UVyt163PuUxBZk=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.194/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232 h1:kwsWbh4rEw42ZDe9/812ebhbwNZxlQyZ2sTmxBOKhN4=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.232/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/kms v1.0.194/go.mod h1:yrBKWhChnDqNz1xuXdSbWXG56XawEq0G5j1lg4VwBD4=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233 h1:5Tbi+jyZ2MojC6GK8V6hchwtnkP2IuENUTqSisbYOlA=
|
||||
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/tag v1.0.233/go.mod h1:sX14+NSvMjOhNFaMtP2aDy6Bss8PyFXij21gpY6+DAs=
|
||||
github.com/tencentyun/cos-go-sdk-v5 v0.7.29 h1:uwRBzc70Wgtc5iQQCowqecfRT0OpCXUOZzodZHOOEDs=
|
||||
github.com/tencentyun/cos-go-sdk-v5 v0.7.29/go.mod h1:4E4+bQ2gBVJcgEC9Cufwylio4mXOct2iu05WjgEBx1o=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966 h1:j6JEOq5QWFker+d7mFQYOhjTZonQ7YkLTHm56dbn+yM=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20200427203606-3cfed13b9966/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
|
||||
github.com/tombuildsstuff/giovanni v0.15.1 h1:CVRaLOJ7C/eercCrKIsarfJ4SZoGMdBL9Q2deFDUXco=
|
||||
|
@ -625,8 +633,8 @@ github.com/vmihailenco/msgpack/v4 v4.3.12 h1:07s4sz9IReOgdikxLTKNbBdqDMLsjPKXwvC
|
|||
github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4=
|
||||
github.com/vmihailenco/tagparser v0.1.1 h1:quXMXlA39OCbd2wAdTsGDlK9RkOk6Wuw+x37wVyIuWY=
|
||||
github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI=
|
||||
github.com/xanzy/ssh-agent v0.2.1 h1:TCbipTQL2JiiCprBWx9frJ2eJlCYT00NmctrHxVAr70=
|
||||
github.com/xanzy/ssh-agent v0.2.1/go.mod h1:mLlQY/MoOhWBj+gOGMQkOeiEvkx+8pJSI+0Bx9h2kr4=
|
||||
github.com/xanzy/ssh-agent v0.3.1 h1:AmzO1SSWxw73zxFZPRwaMN1MohDw8UyHnmuxyceTEGo=
|
||||
github.com/xanzy/ssh-agent v0.3.1/go.mod h1:QIE4lCeL7nkC25x+yA3LBIYfwCc1TFziCtG7cBAac6w=
|
||||
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
|
||||
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
|
||||
github.com/xlab/treeprint v0.0.0-20161029104018-1d6e34225557 h1:Jpn2j6wHkC9wJv5iMfJhKqrZJx3TahFx+7sbZ7zQdxs=
|
||||
|
@ -635,12 +643,14 @@ github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
|
|||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
|
||||
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
|
||||
github.com/zclconf/go-cty v1.0.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s=
|
||||
github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s=
|
||||
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
|
||||
github.com/zclconf/go-cty v1.8.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
|
||||
github.com/zclconf/go-cty v1.9.0 h1:IgJxw5b4LPXCPeqFjjhLaNEA8NKXMyaEUdAd399acts=
|
||||
github.com/zclconf/go-cty v1.9.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
|
||||
github.com/zclconf/go-cty v1.9.1 h1:viqrgQwFl5UpSxc046qblj78wZXVDFnSOufaOTER+cc=
|
||||
github.com/zclconf/go-cty v1.9.1/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
|
||||
github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b h1:FosyBZYxY34Wul7O/MSKey3txpPYyCqVO5ZyceuQJEI=
|
||||
github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8=
|
||||
github.com/zclconf/go-cty-yaml v1.0.2 h1:dNyg4QLTrv2IfJpm7Wtxi55ed5gLGOlPrZ6kMd51hY0=
|
||||
|
@ -666,7 +676,6 @@ go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
|||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190222235706-ffb98f73852f/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
|
@ -680,8 +689,8 @@ golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPh
|
|||
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20201016220609-9e8e0b390897/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
|
||||
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 h1:It14KIkyBFYkHkwZ7k45minvA9aorojkyjGk9KJ5B/w=
|
||||
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
|
||||
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97 h1:/UOmuWzQfxxo9UtlXMwuQU8CMgg1eZXqTRwkSQJWKOI=
|
||||
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
|
||||
|
@ -761,8 +770,11 @@ golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwY
|
|||
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 h1:qWPm9rbaAMKs8Bq/9LRpbMqxWRVUAQwMI9fVrssnTfw=
|
||||
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
|
||||
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d h1:20cMwl2fHAzkJMEA+8J4JgqBQcQGzbisXo31MIeenXI=
|
||||
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
|
@ -786,7 +798,6 @@ golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJ
|
|||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
|
||||
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
|
@ -797,7 +808,6 @@ golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5h
|
|||
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190221075227-b4e8571b14e0/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
@ -839,11 +849,17 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57 h1:F5Gozwx4I1xtr/sr/8CFbb57iKi3297KFs0QDbGN60A=
|
||||
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e h1:WUoyKPm6nCo1BnNUvPGnFG3T5DUVem42yDJZZ4CNxMA=
|
||||
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf h1:MZ2shdL+ZM/XzY3ZGOnh4Nlpnxz5GSOhOmtHo3iPU6M=
|
||||
|
@ -856,8 +872,9 @@ golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5f
|
|||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
|
||||
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
|
||||
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/time v0.0.0-20161028155119-f51c12702a4d/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
|
@ -914,8 +931,10 @@ golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4f
|
|||
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/tools v0.1.0 h1:po9/4sTYwZU9lPhi1tOrb4hCv3qrhiQ77LZfGa2OjwY=
|
||||
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
|
||||
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.7 h1:6j8CgantCy3yc8JGBqkDLMKWqZ0RDU2g1HVgacojGWQ=
|
||||
golang.org/x/tools v0.1.7/go.mod h1:LGqMHiF4EqQNHR1JncWGqT5BVaXmza+X+BDGol+dOxo=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
|
@ -1009,6 +1028,8 @@ google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM
|
|||
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
|
||||
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
|
||||
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
|
||||
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0 h1:M1YKkFIboKNieVO5DLUEVzQfGwJD30Nv2jfUgzb5UcE=
|
||||
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
||||
|
@ -1018,8 +1039,11 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
|
|||
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
|
||||
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
|
||||
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
|
||||
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
|
||||
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
|
||||
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
|
@ -1052,6 +1076,8 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
|
|||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
honnef.co/go/tools v0.3.0-0.dev h1:6vzhjcOJu1nJRa2G8QLXf3DPeg601NuerY16vrb01GY=
|
||||
honnef.co/go/tools v0.3.0-0.dev/go.mod h1:lPVVZ2BS5TfnjLyizF7o7hv7j9/L+8cZY2hLyjP9cGY=
|
||||
k8s.io/api v0.0.0-20190620084959-7cf5895f2711 h1:BblVYz/wE5WtBsD/Gvu54KyBUTJMflolzc5I2DTvh50=
|
||||
k8s.io/api v0.0.0-20190620084959-7cf5895f2711/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A=
|
||||
k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA=
@ -37,6 +37,10 @@ func (c ModuleCall) Absolute(moduleAddr ModuleInstance) AbsModuleCall {
	}
}

func (c ModuleCall) Equal(other ModuleCall) bool {
	return c.Name == other.Name
}

// AbsModuleCall is the address of a "module" block relative to the root
// of the configuration.
//
@ -70,6 +74,10 @@ func (c AbsModuleCall) Instance(key InstanceKey) ModuleInstance {
	return ret
}

func (c AbsModuleCall) Equal(other AbsModuleCall) bool {
	return c.Module.Equal(other.Module) && c.Call.Equal(other.Call)
}

type absModuleCallInstanceKey string

func (c AbsModuleCall) UniqueKey() UniqueKey {
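The Equal methods above do a simple component-wise comparison. As an informal, in-package illustration (it reuses the mustParseModuleInstanceStr helper that the tests in this change rely on; the addresses are hypothetical):

    // Hypothetical helper, not part of this change; shows the comparison semantics.
    func exampleAbsModuleCallEqual() (bool, bool) {
        callA := AbsModuleCall{
            Module: mustParseModuleInstanceStr("module.foo[2]"),
            Call:   ModuleCall{Name: "bar"},
        }
        callB := AbsModuleCall{
            Module: mustParseModuleInstanceStr("module.foo[2]"),
            Call:   ModuleCall{Name: "bar"},
        }
        // Equal compares the full calling module instance path and the call name.
        return callA.Equal(callB), callA.Call.Equal(ModuleCall{Name: "baz"}) // true, false
    }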
@ -2,11 +2,33 @@ package addrs

import (
	"fmt"
	"reflect"
	"strings"

	"github.com/hashicorp/terraform/internal/tfdiags"
	"github.com/zclconf/go-cty/cty"
)

// anyKeyImpl is the InstanceKey representation indicating a wildcard, which
// matches all possible keys. This is only used internally for matching
// combinations of address types, where only portions of the path contain key
// information.
type anyKeyImpl rune

func (k anyKeyImpl) instanceKeySigil() {
}

func (k anyKeyImpl) String() string {
	return fmt.Sprintf("[%s]", string(k))
}

func (k anyKeyImpl) Value() cty.Value {
	return cty.StringVal(string(k))
}

// anyKey is the only valid value of anyKeyImpl
var anyKey = anyKeyImpl('*')

// MoveEndpointInModule annotates a MoveEndpoint with the address of the
// module where it was declared, which is the form we use for resolving
// whether move statements chain from or are nested within other move
@ -30,6 +52,27 @@ type MoveEndpointInModule struct {
	relSubject AbsMoveable
}

// ImpliedMoveStatementEndpoint is a special constructor for MoveEndpointInModule
// which is suitable only for constructing "implied" move statements, which
// means that we inferred the statement automatically rather than building it
// from an explicit block in the configuration.
//
// Implied move endpoints, just as for the statements they are embedded in,
// have somewhat-related-but-imprecise source ranges, typically referring to
// some general configuration construct that implied the statement, because
// by definition there is no explicit move endpoint expression in this case.
func ImpliedMoveStatementEndpoint(addr AbsResourceInstance, rng tfdiags.SourceRange) *MoveEndpointInModule {
	// implied move endpoints always belong to the root module, because each
	// one refers to a single resource instance inside a specific module
	// instance, rather than all instances of the module where the resource
	// was declared.
	return &MoveEndpointInModule{
		SourceRange: rng,
		module:      RootModule,
		relSubject:  addr,
	}
}

func (e *MoveEndpointInModule) ObjectKind() MoveEndpointKind {
	return absMoveableEndpointKind(e.relSubject)
}
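A brief, hypothetical in-package sketch of constructing such an implied endpoint (the resource address and the empty source range are placeholders, not values taken from this change):

    // Hypothetical helper; not part of this change.
    func exampleImpliedEndpoint() *MoveEndpointInModule {
        addr := mustParseAbsResourceInstanceStr("test_object.a")
        // Implied endpoints are always recorded against the root module.
        return ImpliedMoveStatementEndpoint(addr, tfdiags.SourceRange{})
    }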
@ -64,6 +107,24 @@ func (e *MoveEndpointInModule) String() string {
	return buf.String()
}

// Equal returns true if the receiver represents the same matching pattern
// as the other given endpoint, ignoring the source location information.
//
// This is not an optimized function and is here primarily to help with
// writing concise assertions in test code.
func (e *MoveEndpointInModule) Equal(other *MoveEndpointInModule) bool {
	if (e == nil) != (other == nil) {
		return false
	}
	if !e.module.Equal(other.module) {
		return false
	}
	// This assumes that all of our possible "movables" are trivially
	// comparable with reflect, which is true for all of them at the time
	// of writing.
	return reflect.DeepEqual(e.relSubject, other.relSubject)
}

// Module returns the address of the module where the receiving address was
// declared.
func (e *MoveEndpointInModule) Module() Module {
@ -149,6 +210,36 @@ func (e *MoveEndpointInModule) ModuleCallTraversals() (Module, []ModuleCall) {
	return e.module, ret
}

// synthModuleInstance constructs a module instance out of the module path and
// any module portion of the relSubject, substituting Module and Call segments
// with ModuleInstanceStep using the anyKey value.
// This is only used internally for comparison of these complete paths, but
// does not represent how the individual parts are handled elsewhere in the
// code.
func (e *MoveEndpointInModule) synthModuleInstance() ModuleInstance {
	var inst ModuleInstance

	for _, mod := range e.module {
		inst = append(inst, ModuleInstanceStep{Name: mod, InstanceKey: anyKey})
	}

	switch sub := e.relSubject.(type) {
	case ModuleInstance:
		inst = append(inst, sub...)
	case AbsModuleCall:
		inst = append(inst, sub.Module...)
		inst = append(inst, ModuleInstanceStep{Name: sub.Call.Name, InstanceKey: anyKey})
	case AbsResource:
		inst = append(inst, sub.Module...)
	case AbsResourceInstance:
		inst = append(inst, sub.Module...)
	default:
		panic(fmt.Sprintf("unhandled relative address type %T", sub))
	}

	return inst
}
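To make the synthesized wildcard path concrete, here is an illustrative in-package sketch; the addresses are hypothetical and the [*] notation in the comments stands for the internal anyKey wildcard rather than real configuration syntax:

    // Hypothetical helper illustrating the synthesized wildcard path.
    func exampleSynthModuleInstance() ModuleInstance {
        e := &MoveEndpointInModule{
            module: Module{"foo"},
            relSubject: AbsModuleCall{
                Module: mustParseModuleInstanceStr("module.bar[2]"),
                Call:   ModuleCall{Name: "baz"},
            },
        }
        // The result is effectively module.foo[*].module.bar[2].module.baz[*]:
        // steps that come from configuration (the declaring module and the call)
        // carry the anyKey wildcard, while steps taken from an instance address
        // keep their real keys.
        return e.synthModuleInstance()
    }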

// SelectsModule returns true if the receiver directly selects either
// the given module or a resource nested directly inside that module.
//
@ -158,55 +249,79 @@ func (e *MoveEndpointInModule) ModuleCallTraversals() (Module, []ModuleCall) {
|
|||
// resource move indicates that we should search each of the resources in
|
||||
// the given module to see if they match.
|
||||
func (e *MoveEndpointInModule) SelectsModule(addr ModuleInstance) bool {
|
||||
// In order to match the given module path should be at least as
|
||||
// long as the path to the module where the move endpoint was defined.
|
||||
if len(addr) < len(e.module) {
|
||||
synthInst := e.synthModuleInstance()
|
||||
|
||||
// In order to match the given module instance, our combined path must be
|
||||
// equal in length.
|
||||
if len(synthInst) != len(addr) {
|
||||
return false
|
||||
}
|
||||
|
||||
containerPart := addr[:len(e.module)]
|
||||
relPart := addr[len(e.module):]
|
||||
|
||||
// The names of all of the steps that align with e.module must match,
|
||||
// though the instance keys are wildcards for this part.
|
||||
for i := range e.module {
|
||||
if containerPart[i].Name != e.module[i] {
|
||||
for i, step := range synthInst {
|
||||
switch step.InstanceKey {
|
||||
case anyKey:
|
||||
// we can match any key as long as the name matches
|
||||
if step.Name != addr[i].Name {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// The remaining module address steps must match both name and key.
|
||||
// The logic for all of these is similar but we will retrieve the
|
||||
// module address differently for each type.
|
||||
var relMatch ModuleInstance
|
||||
switch relAddr := e.relSubject.(type) {
|
||||
case ModuleInstance:
|
||||
relMatch = relAddr
|
||||
case AbsModuleCall:
|
||||
// This one requires a little more fuss because the call effectively
|
||||
// slices in two the final step of the module address.
|
||||
if len(relPart) != len(relAddr.Module)+1 {
|
||||
return false
|
||||
}
|
||||
callPart := relPart[len(relPart)-1]
|
||||
if callPart.Name != relAddr.Call.Name {
|
||||
return false
|
||||
}
|
||||
case AbsResource:
|
||||
relMatch = relAddr.Module
|
||||
case AbsResourceInstance:
|
||||
relMatch = relAddr.Module
|
||||
default:
|
||||
panic(fmt.Sprintf("unhandled relative address type %T", relAddr))
|
||||
if step != addr[i] {
|
||||
return false
|
||||
}
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
if len(relPart) != len(relMatch) {
|
||||
// SelectsResource returns true if the receiver directly selects either
|
||||
// the given resource or one of its instances.
|
||||
func (e *MoveEndpointInModule) SelectsResource(addr AbsResource) bool {
|
||||
// Only a subset of subject types can possibly select a resource, so
|
||||
// we'll take care of those quickly before we do anything more expensive.
|
||||
switch e.relSubject.(type) {
|
||||
case AbsResource, AbsResourceInstance:
|
||||
// okay
|
||||
default:
|
||||
return false // can't possibly match
|
||||
}
|
||||
|
||||
if !e.SelectsModule(addr.Module) {
|
||||
return false
|
||||
}
|
||||
for i := range relMatch {
|
||||
if relPart[i] != relMatch[i] {
|
||||
|
||||
// If we get here then we know the module part matches, so we only need
|
||||
// to worry about the relative resource part.
|
||||
switch relSubject := e.relSubject.(type) {
|
||||
case AbsResource:
|
||||
return addr.Resource.Equal(relSubject.Resource)
|
||||
case AbsResourceInstance:
|
||||
// We intentionally ignore the instance key, because we consider
|
||||
// instances to be part of the resource they belong to.
|
||||
return addr.Resource.Equal(relSubject.Resource.Resource)
|
||||
default:
|
||||
// We should've filtered out all other types above
|
||||
panic(fmt.Sprintf("unsupported relSubject type %T", relSubject))
|
||||
}
|
||||
}
|
||||
|
||||
// moduleInstanceCanMatch indicates that modA can match modB taking into
// account steps with an anyKey InstanceKey as wildcards. The comparison of
// wildcard steps is done symmetrically, because varying portions of either
// instance's path could have been derived from configuration vs evaluation.
// The length of modA must be equal to or shorter than the length of modB.
func moduleInstanceCanMatch(modA, modB ModuleInstance) bool {
	for i, step := range modA {
		switch {
		case step.InstanceKey == anyKey || modB[i].InstanceKey == anyKey:
			// we can match any key as long as the names match
			if step.Name != modB[i].Name {
				return false
			}
		default:
			if step != modB[i] {
				return false
			}
		}
	}
	return true
}
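A small illustrative sketch of the wildcard comparison (in-package; the values are hypothetical):

    // Hypothetical helper illustrating wildcard matching of module instance paths.
    func exampleModuleInstanceCanMatch() (bool, bool) {
        // module.foo[*] (a synthesized wildcard step) can match module.foo["a"],
        // because the names agree and one side carries the anyKey wildcard.
        a := ModuleInstance{{Name: "foo", InstanceKey: anyKey}}
        b := ModuleInstance{{Name: "foo", InstanceKey: StringKey("a")}}

        // Names still have to line up, wildcard or not.
        c := ModuleInstance{{Name: "bar", InstanceKey: anyKey}}

        return moduleInstanceCanMatch(a, b), moduleInstanceCanMatch(c, b) // true, false
    }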
@ -218,7 +333,43 @@ func (e *MoveEndpointInModule) SelectsModule(addr ModuleInstance) bool {
// the receiver is the "to" from one statement and the other given address
// is the "from" of another statement.
func (e *MoveEndpointInModule) CanChainFrom(other *MoveEndpointInModule) bool {
	// TODO: implement
	eMod := e.synthModuleInstance()
	oMod := other.synthModuleInstance()

	// if the complete paths are different lengths, these cannot refer to the
	// same value.
	if len(eMod) != len(oMod) {
		return false
	}
	if !moduleInstanceCanMatch(oMod, eMod) {
		return false
	}

	eSub := e.relSubject
	oSub := other.relSubject

	switch oSub := oSub.(type) {
	case AbsModuleCall, ModuleInstance:
		switch eSub.(type) {
		case AbsModuleCall, ModuleInstance:
			// we already know the complete module path including any final
			// module call name is equal.
			return true
		}

	case AbsResource:
		switch eSub := eSub.(type) {
		case AbsResource:
			return eSub.Resource.Equal(oSub.Resource)
		}

	case AbsResourceInstance:
		switch eSub := eSub.(type) {
		case AbsResourceInstance:
			return eSub.Resource.Equal(oSub.Resource)
		}
	}

	return false
}
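As an informal illustration, mirroring one of the test cases added below (in-package, since it sets unexported fields):

    // Hypothetical helper mirroring a test case from this change.
    func exampleCanChainFrom() bool {
        // "from" endpoint of one statement, written in the root module:
        from := &MoveEndpointInModule{
            relSubject: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
        }
        // "to" endpoint of another statement, written inside module "foo":
        to := &MoveEndpointInModule{
            module:     Module{"foo"},
            relSubject: mustParseAbsResourceInstanceStr("resource.baz"),
        }
        // module.foo[*] can match module.foo[2], and both subjects are resource
        // instances with the same relative address, so the statements chain.
        return to.CanChainFrom(from) // true
    }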
@ -226,7 +377,55 @@ func (e *MoveEndpointInModule) CanChainFrom(other *MoveEndpointInModule) bool {
// contained within one of the objects that the given other address could
// select.
func (e *MoveEndpointInModule) NestedWithin(other *MoveEndpointInModule) bool {
	// TODO: implement
	eMod := e.synthModuleInstance()
	oMod := other.synthModuleInstance()

	// In order to be nested within the given endpoint, the module path must be
	// shorter or equal.
	if len(oMod) > len(eMod) {
		return false
	}

	if !moduleInstanceCanMatch(oMod, eMod) {
		return false
	}

	eSub := e.relSubject
	oSub := other.relSubject

	switch oSub := oSub.(type) {
	case AbsModuleCall:
		switch eSub.(type) {
		case AbsModuleCall:
			// we know the other endpoint selects our module, but if we are
			// also a module call our path must be longer to be nested.
			return len(eMod) > len(oMod)
		}

		return true

	case ModuleInstance:
		switch eSub.(type) {
		case ModuleInstance, AbsModuleCall:
			// a nested module must have a longer path
			return len(eMod) > len(oMod)
		}

		return true

	case AbsResource:
		if len(eMod) != len(oMod) {
			// these resources are from different modules
			return false
		}

		// A resource can only contain a resource instance.
		switch eSub := eSub.(type) {
		case AbsResourceInstance:
			return eSub.Resource.Resource.Equal(oSub.Resource)
		}
	}

	return false
}
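And a sketch of the nesting relationship, again mirroring a test case added below (in-package; values are hypothetical):

    // Hypothetical helper mirroring another test case from this change.
    func exampleNestedWithin() (bool, bool) {
        // A resource instance is nested within its containing resource, even when
        // the two endpoints were declared in different modules.
        inner := &MoveEndpointInModule{
            module:     Module{"foo"},
            relSubject: mustParseAbsResourceInstanceStr("resource.baz"),
        }
        outer := &MoveEndpointInModule{
            relSubject: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
        }
        // NestedWithin is true; CanChainFrom is false because a resource instance
        // and a whole resource are different object kinds.
        return inner.NestedWithin(outer), inner.CanChainFrom(outer) // true, false
    }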
@ -1074,3 +1074,520 @@ func TestAbsResourceMoveDestination(t *testing.T) {
|
|||
)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMoveEndpointChainAndNested(t *testing.T) {
|
||||
tests := []struct {
|
||||
Endpoint, Other AbsMoveable
|
||||
EndpointMod, OtherMod Module
|
||||
CanChainFrom, NestedWithin bool
|
||||
}{
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
CanChainFrom: true,
|
||||
NestedWithin: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
CanChainFrom: false,
|
||||
NestedWithin: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseModuleInstanceStr("module.foo[2].module.bar[2]"),
|
||||
Other: AbsModuleCall{
|
||||
Module: RootModuleInstance,
|
||||
Call: ModuleCall{Name: "foo"},
|
||||
},
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].module.bar.resource.baz").ContainingResource(),
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].module.bar[3].resource.baz[2]"),
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
Other: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Other: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
CanChainFrom: true,
|
||||
NestedWithin: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
Other: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].module.bar.resource.baz"),
|
||||
Other: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
CanChainFrom: false,
|
||||
NestedWithin: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
CanChainFrom: false,
|
||||
NestedWithin: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
CanChainFrom: true,
|
||||
NestedWithin: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz[2]").ContainingResource(),
|
||||
CanChainFrom: false,
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
CanChainFrom: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
CanChainFrom: false,
|
||||
},
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
CanChainFrom: false,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("resource.baz"),
|
||||
EndpointMod: Module{"foo"},
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
Other: mustParseAbsResourceInstanceStr("resource.baz"),
|
||||
OtherMod: Module{"foo"},
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("resource.baz"),
|
||||
EndpointMod: Module{"foo"},
|
||||
Other: mustParseAbsResourceInstanceStr("resource.baz"),
|
||||
OtherMod: Module{"foo"},
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("resource.baz").ContainingResource(),
|
||||
EndpointMod: Module{"foo"},
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseModuleInstanceStr("module.foo[2].module.baz"),
|
||||
Other: mustParseModuleInstanceStr("module.baz"),
|
||||
OtherMod: Module{"foo"},
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Call: ModuleCall{Name: "bing"},
|
||||
},
|
||||
EndpointMod: Module{"foo", "baz"},
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.baz"),
|
||||
Call: ModuleCall{Name: "bing"},
|
||||
},
|
||||
OtherMod: Module{"foo"},
|
||||
CanChainFrom: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("resource.baz"),
|
||||
EndpointMod: Module{"foo"},
|
||||
Other: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz").ContainingResource(),
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("module.foo[2].resource.baz"),
|
||||
Other: mustParseAbsResourceInstanceStr("resource.baz").ContainingResource(),
|
||||
OtherMod: Module{"foo"},
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("resource.baz"),
|
||||
EndpointMod: Module{"foo"},
|
||||
Other: mustParseAbsResourceInstanceStr("resource.baz").ContainingResource(),
|
||||
OtherMod: Module{"foo"},
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: mustParseAbsResourceInstanceStr("resource.baz").ContainingResource(),
|
||||
EndpointMod: Module{"foo"},
|
||||
Other: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Call: ModuleCall{Name: "bang"},
|
||||
},
|
||||
EndpointMod: Module{"foo", "baz", "bing"},
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.baz"),
|
||||
Call: ModuleCall{Name: "bing"},
|
||||
},
|
||||
OtherMod: Module{"foo"},
|
||||
NestedWithin: true,
|
||||
},
|
||||
|
||||
{
|
||||
Endpoint: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.bing"),
|
||||
Call: ModuleCall{Name: "bang"},
|
||||
},
|
||||
EndpointMod: Module{"foo", "baz"},
|
||||
Other: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo.module.baz"),
|
||||
Call: ModuleCall{Name: "bing"},
|
||||
},
|
||||
NestedWithin: true,
|
||||
},
|
||||
}
|
||||
|
||||
for i, test := range tests {
|
||||
t.Run(fmt.Sprintf("[%02d]%s.CanChainFrom(%s)", i, test.Endpoint, test.Other),
|
||||
func(t *testing.T) {
|
||||
endpoint := &MoveEndpointInModule{
|
||||
relSubject: test.Endpoint,
|
||||
module: test.EndpointMod,
|
||||
}
|
||||
|
||||
other := &MoveEndpointInModule{
|
||||
relSubject: test.Other,
|
||||
module: test.OtherMod,
|
||||
}
|
||||
|
||||
if endpoint.CanChainFrom(other) != test.CanChainFrom {
|
||||
t.Errorf("expected %s CanChainFrom %s == %t", endpoint, other, test.CanChainFrom)
|
||||
}
|
||||
|
||||
if endpoint.NestedWithin(other) != test.NestedWithin {
|
||||
t.Errorf("expected %s NestedWithin %s == %t", endpoint, other, test.NestedWithin)
|
||||
}
|
||||
},
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSelectsModule(t *testing.T) {
|
||||
tests := []struct {
|
||||
Endpoint *MoveEndpointInModule
|
||||
Addr ModuleInstance
|
||||
Selects bool
|
||||
}{
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr("module.foo[2].module.bar[1]"),
|
||||
Selects: true,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
module: mustParseModuleInstanceStr("module.foo").Module(),
|
||||
relSubject: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.bar[2]"),
|
||||
Call: ModuleCall{Name: "baz"},
|
||||
},
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr("module.foo[2].module.bar[2].module.baz"),
|
||||
Selects: true,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
module: mustParseModuleInstanceStr("module.foo").Module(),
|
||||
relSubject: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.bar[2]"),
|
||||
Call: ModuleCall{Name: "baz"},
|
||||
},
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr("module.foo[2].module.bar[1].module.baz"),
|
||||
Selects: false,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.bar"),
|
||||
Call: ModuleCall{Name: "baz"},
|
||||
},
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr("module.bar[1].module.baz"),
|
||||
Selects: false,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
module: mustParseModuleInstanceStr("module.foo").Module(),
|
||||
relSubject: mustParseAbsResourceInstanceStr(`module.bar.resource.name["key"]`),
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr(`module.foo[1].module.bar`),
|
||||
Selects: true,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: mustParseModuleInstanceStr(`module.bar.module.baz["key"]`),
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr(`module.bar.module.baz["key"]`),
|
||||
Selects: true,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: mustParseAbsResourceInstanceStr(`module.bar.module.baz["key"].resource.name`).ContainingResource(),
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr(`module.bar.module.baz["key"]`),
|
||||
Selects: true,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
module: mustParseModuleInstanceStr("module.nope").Module(),
|
||||
relSubject: mustParseAbsResourceInstanceStr(`module.bar.resource.name["key"]`),
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr(`module.foo[1].module.bar`),
|
||||
Selects: false,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: mustParseModuleInstanceStr(`module.bar.module.baz["key"]`),
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr(`module.bar.module.baz["nope"]`),
|
||||
Selects: false,
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: mustParseAbsResourceInstanceStr(`module.nope.module.baz["key"].resource.name`).ContainingResource(),
|
||||
},
|
||||
Addr: mustParseModuleInstanceStr(`module.bar.module.baz["key"]`),
|
||||
Selects: false,
|
||||
},
|
||||
}
|
||||
|
||||
for i, test := range tests {
|
||||
t.Run(fmt.Sprintf("[%02d]%s.SelectsModule(%s)", i, test.Endpoint, test.Addr),
|
||||
func(t *testing.T) {
|
||||
if test.Endpoint.SelectsModule(test.Addr) != test.Selects {
|
||||
t.Errorf("expected %s SelectsModule %s == %t", test.Endpoint, test.Addr, test.Selects)
|
||||
}
|
||||
},
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
func TestSelectsResource(t *testing.T) {
|
||||
matchingResource := Resource{
|
||||
Mode: ManagedResourceMode,
|
||||
Type: "foo",
|
||||
Name: "matching",
|
||||
}
|
||||
unmatchingResource := Resource{
|
||||
Mode: ManagedResourceMode,
|
||||
Type: "foo",
|
||||
Name: "unmatching",
|
||||
}
|
||||
childMod := Module{
|
||||
"child",
|
||||
}
|
||||
childModMatchingInst := ModuleInstance{
|
||||
ModuleInstanceStep{Name: "child", InstanceKey: StringKey("matching")},
|
||||
}
|
||||
childModUnmatchingInst := ModuleInstance{
|
||||
ModuleInstanceStep{Name: "child", InstanceKey: StringKey("unmatching")},
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
Endpoint *MoveEndpointInModule
|
||||
Addr AbsResource
|
||||
Selects bool
|
||||
}{
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(nil),
|
||||
Selects: true, // exact match
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: unmatchingResource.Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(nil),
|
||||
Selects: false, // wrong resource name
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: unmatchingResource.Instance(IntKey(1)).Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(nil),
|
||||
Selects: false, // wrong resource name
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Instance(NoKey).Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(nil),
|
||||
Selects: true, // matches one instance
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Instance(IntKey(0)).Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(nil),
|
||||
Selects: true, // matches one instance
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Instance(StringKey("a")).Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(nil),
|
||||
Selects: true, // matches one instance
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
module: childMod,
|
||||
relSubject: matchingResource.Absolute(nil),
|
||||
},
|
||||
Addr: matchingResource.Absolute(childModMatchingInst),
|
||||
Selects: true, // in one of the instances of the module where the statement was written
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Absolute(childModMatchingInst),
|
||||
},
|
||||
Addr: matchingResource.Absolute(childModMatchingInst),
|
||||
Selects: true, // exact match
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Instance(IntKey(2)).Absolute(childModMatchingInst),
|
||||
},
|
||||
Addr: matchingResource.Absolute(childModMatchingInst),
|
||||
Selects: true, // matches one instance
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: matchingResource.Absolute(childModMatchingInst),
|
||||
},
|
||||
Addr: matchingResource.Absolute(childModUnmatchingInst),
|
||||
Selects: false, // the containing module instance doesn't match
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: AbsModuleCall{
|
||||
Module: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
Call: ModuleCall{Name: "bar"},
|
||||
},
|
||||
},
|
||||
Addr: matchingResource.Absolute(mustParseModuleInstanceStr("module.foo[2]")),
|
||||
Selects: false, // a module call can't match a resource
|
||||
},
|
||||
{
|
||||
Endpoint: &MoveEndpointInModule{
|
||||
relSubject: mustParseModuleInstanceStr("module.foo[2]"),
|
||||
},
|
||||
Addr: matchingResource.Absolute(mustParseModuleInstanceStr("module.foo[2]")),
|
||||
Selects: false, // a module instance can't match a resource
|
||||
},
|
||||
}
|
||||
|
||||
for i, test := range tests {
|
||||
t.Run(fmt.Sprintf("[%02d]%s SelectsResource(%s)", i, test.Endpoint, test.Addr),
|
||||
func(t *testing.T) {
|
||||
if got, want := test.Endpoint.SelectsResource(test.Addr), test.Selects; got != want {
|
||||
t.Errorf("wrong result\nReceiver: %s\nArgument: %s\ngot: %t\nwant: %t", test.Endpoint, test.Addr, got, want)
|
||||
}
|
||||
},
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
func mustParseAbsResourceInstanceStr(s string) AbsResourceInstance {
|
||||
r, diags := ParseAbsResourceInstanceStr(s)
|
||||
if diags.HasErrors() {
|
||||
panic(diags.ErrWithWarnings().Error())
|
||||
}
|
||||
return r
|
||||
}
|
||||
|
|
|
@ -196,11 +196,11 @@ func (r AbsResource) absMoveableSigil() {

type absResourceKey string

func (r AbsResource) UniqueKey() UniqueKey {
	return absResourceInstanceKey(r.String())
}
func (r absResourceKey) uniqueKeySigil() {}

func (rk absResourceKey) uniqueKeySigil() {}
func (r AbsResource) UniqueKey() UniqueKey {
	return absResourceKey(r.String())
}

// AbsResourceInstance is an absolute address for a resource instance under a
// given module path.
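The practical effect of returning absResourceKey instead of absResourceInstanceKey shows up when addresses are used as map keys. A hedged in-package sketch (hypothetical values; see the UniqueKey test added below):

    // Hypothetical helper; not part of this change.
    func exampleUniqueKeyDistinct() int {
        res := Resource{Mode: ManagedResourceMode, Type: "a", Name: "b1"}.Absolute(RootModuleInstance)
        inst := res.Instance(NoKey)

        seen := map[UniqueKey]string{}
        seen[res.UniqueKey()] = "resource"
        seen[inst.UniqueKey()] = "instance"
        // With distinct key types the two entries no longer collide, even though
        // res.String() and inst.String() render identically.
        return len(seen) // 2
    }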
@ -215,6 +215,77 @@ func TestAbsResourceInstanceEqual_false(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestAbsResourceUniqueKey(t *testing.T) {
|
||||
resourceAddr1 := Resource{
|
||||
Mode: ManagedResourceMode,
|
||||
Type: "a",
|
||||
Name: "b1",
|
||||
}.Absolute(RootModuleInstance)
|
||||
resourceAddr2 := Resource{
|
||||
Mode: ManagedResourceMode,
|
||||
Type: "a",
|
||||
Name: "b2",
|
||||
}.Absolute(RootModuleInstance)
|
||||
resourceAddr3 := Resource{
|
||||
Mode: ManagedResourceMode,
|
||||
Type: "a",
|
||||
Name: "in_module",
|
||||
}.Absolute(RootModuleInstance.Child("boop", NoKey))
|
||||
|
||||
tests := []struct {
|
||||
Receiver AbsResource
|
||||
Other UniqueKeyer
|
||||
WantEqual bool
|
||||
}{
|
||||
{
|
||||
resourceAddr1,
|
||||
resourceAddr1,
|
||||
true,
|
||||
},
|
||||
{
|
||||
resourceAddr1,
|
||||
resourceAddr2,
|
||||
false,
|
||||
},
|
||||
{
|
||||
resourceAddr1,
|
||||
resourceAddr3,
|
||||
false,
|
||||
},
|
||||
{
|
||||
resourceAddr3,
|
||||
resourceAddr3,
|
||||
true,
|
||||
},
|
||||
{
|
||||
resourceAddr1,
|
||||
resourceAddr1.Instance(NoKey),
|
||||
false, // no-key instance key is distinct from its resource even though they have the same String result
|
||||
},
|
||||
{
|
||||
resourceAddr1,
|
||||
resourceAddr1.Instance(IntKey(1)),
|
||||
false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
t.Run(fmt.Sprintf("%s matches %T %s?", test.Receiver, test.Other, test.Other), func(t *testing.T) {
|
||||
rKey := test.Receiver.UniqueKey()
|
||||
oKey := test.Other.UniqueKey()
|
||||
|
||||
gotEqual := rKey == oKey
|
||||
if gotEqual != test.WantEqual {
|
||||
t.Errorf(
|
||||
"wrong result\nreceiver: %s\nother: %s (%T)\ngot: %t\nwant: %t",
|
||||
test.Receiver, test.Other, test.Other,
|
||||
gotEqual, test.WantEqual,
|
||||
)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigResourceEqual_true(t *testing.T) {
|
||||
resources := []ConfigResource{
|
||||
{
|
||||
|
|
|
@ -17,6 +17,7 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/configs"
|
||||
"github.com/hashicorp/terraform/internal/configs/configload"
|
||||
"github.com/hashicorp/terraform/internal/configs/configschema"
|
||||
"github.com/hashicorp/terraform/internal/depsfile"
|
||||
"github.com/hashicorp/terraform/internal/plans"
|
||||
"github.com/hashicorp/terraform/internal/plans/planfile"
|
||||
"github.com/hashicorp/terraform/internal/states"
|
||||
|
@ -141,9 +142,63 @@ type Enhanced interface {
|
|||
// configurations, variables, and more. Not all backends may support this
|
||||
// so we separate it out into its own optional interface.
|
||||
type Local interface {
|
||||
// Context returns a runnable terraform Context. The operation parameter
|
||||
// doesn't need a Type set but it needs other options set such as Module.
|
||||
Context(*Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics)
|
||||
// LocalRun uses information in the Operation to prepare a set of objects
|
||||
// needed to start running that operation.
|
||||
//
|
||||
// The operation doesn't need a Type set, but it needs various other
|
||||
// options set. This is a rather odd API that tries to treat all
|
||||
// operations as the same when they really aren't; see the local and remote
|
||||
// backend's implementations of this to understand what this actually
|
||||
// does, because this operation has no well-defined contract aside from
|
||||
// "whatever it already does".
|
||||
LocalRun(*Operation) (*LocalRun, statemgr.Full, tfdiags.Diagnostics)
|
||||
}
|
||||
|
||||
// LocalRun represents the assortment of objects that we can collect or
|
||||
// calculate from an Operation object, which we can then use for local
|
||||
// operations.
|
||||
//
|
||||
// The operation methods on terraform.Context (Plan, Apply, Import, etc) each
|
||||
// generate new artifacts which supersede parts of the LocalRun object that
|
||||
// started the operation, so callers should be careful to use those subsequent
|
||||
// artifacts instead of the fields of LocalRun where appropriate. The LocalRun
|
||||
// data intentionally doesn't update as a result of calling methods on Context,
|
||||
// in order to make data flow explicit.
|
||||
//
|
||||
// This type is a weird architectural wart resulting from the overly-general
|
||||
// way our backend API models operations, whereby we behave as if all
|
||||
// Terraform operations have the same inputs and outputs even though they
|
||||
// are actually all rather different. The exact meaning of the fields in
|
||||
// this type therefore vary depending on which OperationType was passed to
|
||||
// Local.Context in order to create an object of this type.
|
||||
type LocalRun struct {
|
||||
// Core is an already-initialized Terraform Core context, ready to be
|
||||
// used to run operations such as Plan and Apply.
|
||||
Core *terraform.Context
|
||||
|
||||
// Config is the configuration we're working with, which typically comes
|
||||
// from either config files directly on local disk (when we're creating
|
||||
// a plan, or similar) or from a snapshot embedded in a plan file
|
||||
// (when we're applying a saved plan).
|
||||
Config *configs.Config
|
||||
|
||||
// InputState is the state that should be used for whatever is the first
|
||||
// method call to a context created with CoreOpts. When creating a plan
|
||||
// this will be the previous run state, but when applying a saved plan
|
||||
// this will be the prior state recorded in that plan.
|
||||
InputState *states.State
|
||||
|
||||
// PlanOpts are options to pass to a Plan or Plan-like operation.
|
||||
//
|
||||
// This is nil when we're applying a saved plan, because the plan itself
|
||||
// contains enough information about its options to apply it.
|
||||
PlanOpts *terraform.PlanOpts
|
||||
|
||||
// Plan is a plan loaded from a saved plan file, if our operation is to
|
||||
// apply that saved plan.
|
||||
//
|
||||
// This is nil when we're not applying a saved plan.
|
||||
Plan *plans.Plan
|
||||
}
|
||||
|
||||
// An operation represents an operation for Terraform to execute.
|
||||
|
@ -183,6 +238,16 @@ type Operation struct {
|
|||
// configuration from ConfigDir.
|
||||
ConfigLoader *configload.Loader
|
||||
|
||||
// DependencyLocks represents the locked dependencies associated with
|
||||
// the configuration directory given in ConfigDir.
|
||||
//
|
||||
// Note that if field PlanFile is set then the plan file should contain
|
||||
// its own dependency locks. The backend is responsible for correctly
|
||||
// selecting between these two sets of locks depending on whether it
|
||||
// will be using ConfigDir or PlanFile to get the configuration for
|
||||
// this operation.
|
||||
DependencyLocks *depsfile.Locks
|
||||
|
||||
// Hooks can be used to perform actions triggered by various events during
|
||||
// the operation's lifecycle.
|
||||
Hooks []terraform.Hook
|
||||
|
@ -195,7 +260,6 @@ type Operation struct {
|
|||
// behavior of the operation.
|
||||
PlanMode plans.Mode
|
||||
AutoApprove bool
|
||||
Parallelism int
|
||||
Targets []addrs.Targetable
|
||||
ForceReplace []addrs.AbsResourceInstance
|
||||
Variables map[string]UnparsedVariableValue
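To show how the pieces of the new LocalRun API are meant to fit together, here is a rough, hedged sketch condensed from the opApply changes further below (written as if from outside the backend package; imports from the backend, states, and plans packages are assumed; runApply is a made-up name, and locking, hooks, views, and cancellation are omitted):

    func runApply(b backend.Local, op *backend.Operation) (*states.State, error) {
        lr, _, diags := b.LocalRun(op)
        if diags.HasErrors() {
            return nil, diags.Err()
        }

        plan := lr.Plan // already populated when applying a saved plan
        if plan == nil {
            // Otherwise plan first, using the config, input state, and plan
            // options carried by the LocalRun.
            plan, diags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts)
            if diags.HasErrors() {
                return nil, diags.Err()
            }
        }

        newState, applyDiags := lr.Core.Apply(plan, lr.Config)
        return newState, applyDiags.Err()
    }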
|
||||
|
|
|
@ -284,7 +284,7 @@ func (b *Local) Operation(ctx context.Context, op *backend.Operation) (*backend.
|
|||
f = b.opApply
|
||||
default:
|
||||
return nil, fmt.Errorf(
|
||||
"Unsupported operation type: %s\n\n"+
|
||||
"unsupported operation type: %s\n\n"+
|
||||
"This is a bug in Terraform and should be reported. The local backend\n"+
|
||||
"is built-in to Terraform and should always support all operations.",
|
||||
op.Type)
|
||||
|
|
|
@ -5,7 +5,6 @@ import (
|
|||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/hashicorp/errwrap"
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/command/views"
|
||||
"github.com/hashicorp/terraform/internal/plans"
|
||||
|
@ -23,7 +22,7 @@ func (b *Local) opApply(
|
|||
runningOp *backend.RunningOperation) {
|
||||
log.Printf("[INFO] backend/local: starting Apply operation")
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
var diags, moreDiags tfdiags.Diagnostics
|
||||
|
||||
// If we have a nil module at this point, then set it to an empty tree
|
||||
// to avoid any potential crashes.
|
||||
|
@ -43,7 +42,7 @@ func (b *Local) opApply(
|
|||
op.Hooks = append(op.Hooks, stateHook)
|
||||
|
||||
// Get our context
|
||||
tfCtx, _, opState, contextDiags := b.context(op)
|
||||
lr, _, opState, contextDiags := b.localRun(op)
|
||||
diags = diags.Append(contextDiags)
|
||||
if contextDiags.HasErrors() {
|
||||
op.ReportResult(runningOp, diags)
|
||||
|
@ -59,15 +58,26 @@ func (b *Local) opApply(
|
|||
}
|
||||
}()
|
||||
|
||||
runningOp.State = tfCtx.State()
|
||||
// We'll start off with our result being the input state, and replace it
|
||||
// with the result state only if we eventually complete the apply
|
||||
// operation.
|
||||
runningOp.State = lr.InputState
|
||||
|
||||
var plan *plans.Plan
|
||||
// If we weren't given a plan, then we refresh/plan
|
||||
if op.PlanFile == nil {
|
||||
// Perform the plan
|
||||
log.Printf("[INFO] backend/local: apply calling Plan")
|
||||
plan, planDiags := tfCtx.Plan()
|
||||
diags = diags.Append(planDiags)
|
||||
if planDiags.HasErrors() {
|
||||
plan, moreDiags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts)
|
||||
diags = diags.Append(moreDiags)
|
||||
if moreDiags.HasErrors() {
|
||||
op.ReportResult(runningOp, diags)
|
||||
return
|
||||
}
|
||||
|
||||
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
|
||||
diags = diags.Append(moreDiags)
|
||||
if moreDiags.HasErrors() {
|
||||
op.ReportResult(runningOp, diags)
|
||||
return
|
||||
}
|
||||
|
@ -75,7 +85,7 @@ func (b *Local) opApply(
|
|||
trivialPlan := !plan.CanApply()
|
||||
hasUI := op.UIOut != nil && op.UIIn != nil
|
||||
mustConfirm := hasUI && !op.AutoApprove && !trivialPlan
|
||||
op.View.Plan(plan, tfCtx.Schemas())
|
||||
op.View.Plan(plan, schemas)
|
||||
|
||||
if mustConfirm {
|
||||
var desc, query string
|
||||
|
@ -119,7 +129,7 @@ func (b *Local) opApply(
|
|||
Description: desc,
|
||||
})
|
||||
if err != nil {
|
||||
diags = diags.Append(errwrap.Wrapf("Error asking for approval: {{err}}", err))
|
||||
diags = diags.Append(fmt.Errorf("error asking for approval: %w", err))
|
||||
op.ReportResult(runningOp, diags)
|
||||
return
|
||||
}
|
||||
|
@@ -130,16 +140,7 @@ func (b *Local) opApply(
			}
		}
	} else {
		plan, err := op.PlanFile.ReadPlan()
		if err != nil {
			diags = diags.Append(tfdiags.Sourceless(
				tfdiags.Error,
				"Invalid plan file",
				fmt.Sprintf("Failed to read plan from plan file: %s.", err),
			))
			op.ReportResult(runningOp, diags)
			return
		}
		plan = lr.Plan
		for _, change := range plan.Changes.Resources {
			if change.Action != plans.NoOp {
				op.View.PlannedChange(change)
@@ -157,12 +158,10 @@ func (b *Local) opApply(
	go func() {
		defer close(doneCh)
		log.Printf("[INFO] backend/local: apply calling Apply")
		_, applyDiags = tfCtx.Apply()
		// we always want the state, even if apply failed
		applyState = tfCtx.State()
		applyState, applyDiags = lr.Core.Apply(plan, lr.Config)
	}()

	if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) {
	if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) {
		return
	}
	diags = diags.Append(applyDiags)

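The apply itself still runs in a goroutine that closes `doneCh` when it finishes, while `b.opWait` watches for either completion or an interrupt; only the context argument changed from `tfCtx` to `lr.Core`. A self-contained sketch of that wait pattern (illustrative only, not Terraform's implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// opWaitSketch mirrors the shape of b.opWait: report true if the operation
// was cancelled before the worker goroutine closed doneCh.
func opWaitSketch(doneCh <-chan struct{}, stopCtx context.Context) bool {
	select {
	case <-doneCh:
		return false
	case <-stopCtx.Done():
		return true
	}
}

func main() {
	doneCh := make(chan struct{})
	stopCtx, cancel := context.WithCancel(context.Background())
	defer cancel()

	go func() {
		defer close(doneCh) // always signal completion, even on failure
		time.Sleep(50 * time.Millisecond)
	}()

	fmt.Println("cancelled:", opWaitSketch(doneCh, stopCtx)) // cancelled: false
}
```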
@ -11,11 +11,13 @@ import (
|
|||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/internal/addrs"
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/command/arguments"
|
||||
"github.com/hashicorp/terraform/internal/command/clistate"
|
||||
"github.com/hashicorp/terraform/internal/command/views"
|
||||
"github.com/hashicorp/terraform/internal/configs/configschema"
|
||||
"github.com/hashicorp/terraform/internal/depsfile"
|
||||
"github.com/hashicorp/terraform/internal/initwd"
|
||||
"github.com/hashicorp/terraform/internal/plans"
|
||||
"github.com/hashicorp/terraform/internal/providers"
|
||||
|
@ -27,8 +29,7 @@ import (
|
|||
)
|
||||
|
||||
func TestLocal_applyBasic(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", applyFixtureSchema())
|
||||
p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{NewState: cty.ObjectVal(map[string]cty.Value{
|
||||
|
@ -73,8 +74,7 @@ test_instance.foo:
|
|||
}
|
||||
|
||||
func TestLocal_applyEmptyDir(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", &terraform.ProviderSchema{})
|
||||
p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{NewState: cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("yes")})}
|
||||
|
@ -108,8 +108,7 @@ func TestLocal_applyEmptyDir(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_applyEmptyDirDestroy(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", &terraform.ProviderSchema{})
|
||||
p.ApplyResourceChangeResponse = &providers.ApplyResourceChangeResponse{}
|
||||
|
@ -139,8 +138,7 @@ func TestLocal_applyEmptyDirDestroy(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_applyError(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
schema := &terraform.ProviderSchema{
|
||||
ResourceTypes: map[string]*configschema.Block{
|
||||
|
@ -208,8 +206,7 @@ test_instance.foo:
|
|||
}
|
||||
|
||||
func TestLocal_applyBackendFail(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", applyFixtureSchema())
|
||||
|
||||
|
@ -272,8 +269,7 @@ test_instance.foo: (tainted)
|
|||
}
|
||||
|
||||
func TestLocal_applyRefreshFalse(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState())
|
||||
|
@ -325,12 +321,18 @@ func testOperationApply(t *testing.T, configDir string) (*backend.Operation, fun
|
|||
streams, done := terminal.StreamsForTesting(t)
|
||||
view := views.NewOperation(arguments.ViewHuman, false, views.NewView(streams))
|
||||
|
||||
// Many of our tests use an overridden "test" provider that's just in-memory
|
||||
// inside the test process, not a separate plugin on disk.
|
||||
depLocks := depsfile.NewLocks()
|
||||
depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test"))
|
||||
|
||||
return &backend.Operation{
|
||||
Type: backend.OperationTypeApply,
|
||||
ConfigDir: configDir,
|
||||
ConfigLoader: configLoader,
|
||||
StateLocker: clistate.NewNoopLocker(),
|
||||
View: view,
|
||||
DependencyLocks: depLocks,
|
||||
}, configCleanup, done
|
||||
}
|
||||
|
||||
|
|
|
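The test helper above now seeds `DependencyLocks` because `localRun` verifies lock-file consistency before building the run (see the `VerifyDependencySelections` hunks later in this diff); marking the in-memory `test` provider as overridden exempts it from that check, as the comment in the hunk notes. A fragment of the pattern, copied from calls that appear in this patch; it uses Terraform-internal packages, so it is illustrative rather than independently buildable:

```go
// The "test" provider lives inside the test process, not on disk, so it
// cannot appear in a real dependency lock file; recording it as overridden
// keeps the new lock verification from rejecting the operation.
depLocks := depsfile.NewLocks()
depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test"))

op := &backend.Operation{
	Type:            backend.OperationTypeApply,
	ConfigDir:       configDir,
	ConfigLoader:    configLoader,
	StateLocker:     clistate.NewNoopLocker(),
	View:            view,
	DependencyLocks: depLocks,
}
```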
@ -5,8 +5,8 @@ import (
|
|||
"fmt"
|
||||
"log"
|
||||
"sort"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/errwrap"
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/configs"
|
||||
"github.com/hashicorp/terraform/internal/configs/configload"
|
||||
|
@@ -18,25 +18,29 @@ import (
)

// backend.Local implementation.
func (b *Local) Context(op *backend.Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) {
func (b *Local) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) {
	// Make sure the type is invalid. We use this as a way to know not
	// to ask for input/validate.
	// to ask for input/validate. We're modifying this through a pointer,
	// so we're mutating an object that belongs to the caller here, which
	// seems bad but we're preserving it for now until we have time to
	// properly design this API, vs. just preserving whatever it currently
	// happens to do.
	op.Type = backend.OperationTypeInvalid

	op.StateLocker = op.StateLocker.WithContext(context.Background())

	ctx, _, stateMgr, diags := b.context(op)
	return ctx, stateMgr, diags
	lr, _, stateMgr, diags := b.localRun(op)
	return lr, stateMgr, diags
}

func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.Snapshot, statemgr.Full, tfdiags.Diagnostics) {
func (b *Local) localRun(op *backend.Operation) (*backend.LocalRun, *configload.Snapshot, statemgr.Full, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics

	// Get the latest state.
	log.Printf("[TRACE] backend/local: requesting state manager for workspace %q", op.Workspace)
	s, err := b.StateMgr(op.Workspace)
	if err != nil {
		diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
		diags = diags.Append(fmt.Errorf("error loading state: %w", err))
		return nil, nil, nil, diags
	}
	log.Printf("[TRACE] backend/local: requesting state lock for workspace %q", op.Workspace)

@ -54,35 +58,20 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
|
|||
|
||||
log.Printf("[TRACE] backend/local: reading remote state for workspace %q", op.Workspace)
|
||||
if err := s.RefreshState(); err != nil {
|
||||
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
|
||||
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
|
||||
return nil, nil, nil, diags
|
||||
}
|
||||
|
||||
ret := &backend.LocalRun{}
|
||||
|
||||
// Initialize our context options
|
||||
var opts terraform.ContextOpts
|
||||
var coreOpts terraform.ContextOpts
|
||||
if v := b.ContextOpts; v != nil {
|
||||
opts = *v
|
||||
coreOpts = *v
|
||||
}
|
||||
coreOpts.UIInput = op.UIIn
|
||||
coreOpts.Hooks = op.Hooks
|
||||
|
||||
// Copy set options from the operation
|
||||
opts.PlanMode = op.PlanMode
|
||||
opts.Targets = op.Targets
|
||||
opts.ForceReplace = op.ForceReplace
|
||||
opts.UIInput = op.UIIn
|
||||
opts.Hooks = op.Hooks
|
||||
|
||||
opts.SkipRefresh = op.Type != backend.OperationTypeRefresh && !op.PlanRefresh
|
||||
if opts.SkipRefresh {
|
||||
log.Printf("[DEBUG] backend/local: skipping refresh of managed resources")
|
||||
}
|
||||
|
||||
// Load the latest state. If we enter contextFromPlanFile below then the
|
||||
// state snapshot in the plan file must match this, or else it'll return
|
||||
// error diagnostics.
|
||||
log.Printf("[TRACE] backend/local: retrieving local state snapshot for workspace %q", op.Workspace)
|
||||
opts.State = s.State()
|
||||
|
||||
var tfCtx *terraform.Context
|
||||
var ctxDiags tfdiags.Diagnostics
|
||||
var configSnap *configload.Snapshot
|
||||
if op.PlanFile != nil {
|
||||
|
@ -94,8 +83,8 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
|
|||
m := sm.StateSnapshotMeta()
|
||||
stateMeta = &m
|
||||
}
|
||||
log.Printf("[TRACE] backend/local: building context from plan file")
|
||||
tfCtx, configSnap, ctxDiags = b.contextFromPlanFile(op.PlanFile, opts, stateMeta)
|
||||
log.Printf("[TRACE] backend/local: populating backend.LocalRun from plan file")
|
||||
ret, configSnap, ctxDiags = b.localRunForPlanFile(op, op.PlanFile, ret, &coreOpts, stateMeta)
|
||||
if ctxDiags.HasErrors() {
|
||||
diags = diags.Append(ctxDiags)
|
||||
return nil, nil, nil, diags
|
||||
|
@ -105,14 +94,13 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
|
|||
// available if we need to generate diagnostic message snippets.
|
||||
op.ConfigLoader.ImportSourcesFromSnapshot(configSnap)
|
||||
} else {
|
||||
log.Printf("[TRACE] backend/local: building context for current working directory")
|
||||
tfCtx, configSnap, ctxDiags = b.contextDirect(op, opts)
|
||||
log.Printf("[TRACE] backend/local: populating backend.LocalRun for current working directory")
|
||||
ret, configSnap, ctxDiags = b.localRunDirect(op, ret, &coreOpts, s)
|
||||
}
|
||||
diags = diags.Append(ctxDiags)
|
||||
if diags.HasErrors() {
|
||||
return nil, nil, nil, diags
|
||||
}
|
||||
log.Printf("[TRACE] backend/local: finished building terraform.Context")
|
||||
|
||||
// If we have an operation, then we automatically do the input/validate
|
||||
// here since every option requires this.
|
||||
|
@ -122,7 +110,7 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
|
|||
mode := terraform.InputModeProvider
|
||||
|
||||
log.Printf("[TRACE] backend/local: requesting interactive input, if necessary")
|
||||
inputDiags := tfCtx.Input(mode)
|
||||
inputDiags := ret.Core.Input(ret.Config, mode)
|
||||
diags = diags.Append(inputDiags)
|
||||
if inputDiags.HasErrors() {
|
||||
return nil, nil, nil, diags
|
||||
|
@ -132,15 +120,15 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
|
|||
// If validation is enabled, validate
|
||||
if b.OpValidation {
|
||||
log.Printf("[TRACE] backend/local: running validation operation")
|
||||
validateDiags := tfCtx.Validate()
|
||||
validateDiags := ret.Core.Validate(ret.Config)
|
||||
diags = diags.Append(validateDiags)
|
||||
}
|
||||
}
|
||||
|
||||
return tfCtx, configSnap, s, diags
|
||||
return ret, configSnap, s, diags
|
||||
}
|
||||
|
||||
func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts) (*terraform.Context, *configload.Snapshot, tfdiags.Diagnostics) {
|
||||
func (b *Local) localRunDirect(op *backend.Operation, run *backend.LocalRun, coreOpts *terraform.ContextOpts, s statemgr.Full) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
// Load the configuration using the caller-provided configuration loader.
|
||||
|
@ -149,7 +137,33 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
|
|||
if configDiags.HasErrors() {
|
||||
return nil, nil, diags
|
||||
}
|
||||
opts.Config = config
|
||||
run.Config = config
|
||||
|
||||
if errs := config.VerifyDependencySelections(op.DependencyLocks); len(errs) > 0 {
|
||||
var buf strings.Builder
|
||||
for _, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s", err.Error())
|
||||
}
|
||||
var suggestion string
|
||||
switch {
|
||||
case op.DependencyLocks == nil:
|
||||
// If we get here then it suggests that there's a caller that we
|
||||
// didn't yet update to populate DependencyLocks, which is a bug.
|
||||
suggestion = "This run has no dependency lock information provided at all, which is a bug in Terraform; please report it!"
|
||||
case op.DependencyLocks.Empty():
|
||||
suggestion = "To make the initial dependency selections that will initialize the dependency lock file, run:\n terraform init"
|
||||
default:
|
||||
suggestion = "To update the locked dependency selections to match a changed configuration, run:\n terraform init -upgrade"
|
||||
}
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Inconsistent dependency lock file",
|
||||
fmt.Sprintf(
|
||||
"The following dependency selections recorded in the lock file are inconsistent with the current configuration:%s\n\n%s",
|
||||
buf.String(), suggestion,
|
||||
),
|
||||
))
|
||||
}
|
||||
|
||||
var rawVariables map[string]backend.UnparsedVariableValue
|
||||
if op.AllowUnsetVariables {
|
||||
|
@ -163,7 +177,7 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
|
|||
// values through interactive prompts.
|
||||
// TODO: Need to route the operation context through into here, so that
|
||||
// the interactive prompts can be sensitive to its timeouts/etc.
|
||||
rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, opts.UIInput)
|
||||
rawVariables = b.interactiveCollectVariables(context.TODO(), op.Variables, config.Module.Variables, op.UIIn)
|
||||
}
|
||||
|
||||
variables, varDiags := backend.ParseVariableValues(rawVariables, config.Module.Variables)
|
||||
|
@@ -171,14 +185,30 @@ func (b *Local) contextDirect(op *backend.Operation, opts terraform.ContextOpts)
	if diags.HasErrors() {
		return nil, nil, diags
	}
	opts.Variables = variables

	tfCtx, ctxDiags := terraform.NewContext(&opts)
	diags = diags.Append(ctxDiags)
	return tfCtx, configSnap, diags
	planOpts := &terraform.PlanOpts{
		Mode:         op.PlanMode,
		Targets:      op.Targets,
		ForceReplace: op.ForceReplace,
		SetVariables: variables,
		SkipRefresh:  op.Type != backend.OperationTypeRefresh && !op.PlanRefresh,
	}
	run.PlanOpts = planOpts

	// For a "direct" local run, the input state is the most recently stored
	// snapshot, from the previous run.
	run.InputState = s.State()

	tfCtx, moreDiags := terraform.NewContext(coreOpts)
	diags = diags.Append(moreDiags)
	if moreDiags.HasErrors() {
		return nil, nil, diags
	}
	run.Core = tfCtx
	return run, configSnap, diags
}

func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*terraform.Context, *configload.Snapshot, tfdiags.Diagnostics) {
|
||||
func (b *Local) localRunForPlanFile(op *backend.Operation, pf *planfile.Reader, run *backend.LocalRun, coreOpts *terraform.ContextOpts, currentStateMeta *statemgr.SnapshotMeta) (*backend.LocalRun, *configload.Snapshot, tfdiags.Diagnostics) {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
const errSummary = "Invalid plan file"
|
||||
|
@ -201,7 +231,47 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
|
|||
if configDiags.HasErrors() {
|
||||
return nil, snap, diags
|
||||
}
|
||||
opts.Config = config
|
||||
run.Config = config
|
||||
|
||||
// NOTE: We're intentionally comparing the current locks with the
|
||||
// configuration snapshot, rather than the lock snapshot in the plan file,
|
||||
// because it's the current locks which dictate our plugin selections
|
||||
// in coreOpts below. However, we'll also separately check that the
|
||||
// plan file has identical locked plugins below, and thus we're effectively
|
||||
// checking consistency with both here.
|
||||
if errs := config.VerifyDependencySelections(op.DependencyLocks); len(errs) > 0 {
|
||||
var buf strings.Builder
|
||||
for _, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s", err.Error())
|
||||
}
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Inconsistent dependency lock file",
|
||||
fmt.Sprintf(
|
||||
"The following dependency selections recorded in the lock file are inconsistent with the configuration in the saved plan:%s\n\nA saved plan can be applied only to the same configuration it was created from. Create a new plan from the updated configuration.",
|
||||
buf.String(),
|
||||
),
|
||||
))
|
||||
}
|
||||
|
||||
	// This check is an important complement to the check above: the locked
	// dependencies in the configuration must match the configuration, and
	// the locked dependencies in the plan must match the locked dependencies
	// in the configuration, and so transitively we ensure that the locked
	// dependencies in the plan match the configuration too. However, this
	// additionally catches any inconsistency between the two sets of locks
	// even if they both happen to be valid per the current configuration,
	// which is one of several ways we try to catch the mistake of applying
	// a saved plan file in a different place than where we created it.
	depLocksFromPlan, moreDiags := pf.ReadDependencyLocks()
	diags = diags.Append(moreDiags)
	if depLocksFromPlan != nil && !op.DependencyLocks.Equal(depLocksFromPlan) {
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"Inconsistent dependency lock file",
			"The given plan file was created with a different set of external dependency selections than the current configuration. A saved plan can be applied only to the same configuration it was created from.\n\nCreate a new plan from the updated configuration.",
		))
	}

	// A plan file also contains a snapshot of the prior state the changes
	// are intended to apply to.

@@ -220,8 +290,22 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
	// has changed since the plan was created. (All of the "real-world"
	// state manager implementations support this, but simpler test backends
	// may not.)
	if currentStateMeta.Lineage != "" && priorStateFile.Lineage != "" {
		if priorStateFile.Serial != currentStateMeta.Serial || priorStateFile.Lineage != currentStateMeta.Lineage {

	// Because the plan always contains a state, even if it is empty, the
	// first plan to be applied will have empty snapshot metadata. In this
	// case we compare only the serial in order to provide a more correct
	// error.
	firstPlan := priorStateFile.Lineage == "" && priorStateFile.Serial == 0

	switch {
	case !firstPlan && priorStateFile.Lineage != currentStateMeta.Lineage:
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"Saved plan does not match the given state",
			"The given plan file can not be applied because it was created from a different state lineage.",
		))

	case priorStateFile.Serial != currentStateMeta.Serial:
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"Saved plan is stale",
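The staleness check above is easier to follow with the decision separated from the diagnostics plumbing. A self-contained restatement of that logic (the helper below is illustrative and not part of the patch; field meanings follow the hunk):

```go
package main

import "fmt"

// stalePlanProblem mirrors the switch introduced above: the very first plan
// carries an empty prior-state lineage and serial 0, so only the serial is
// compared for it; otherwise a lineage mismatch means the plan belongs to a
// different state entirely, and a serial mismatch means the state has moved
// on since the plan was created.
func stalePlanProblem(planLineage string, planSerial uint64, curLineage string, curSerial uint64) string {
	firstPlan := planLineage == "" && planSerial == 0
	switch {
	case !firstPlan && planLineage != curLineage:
		return "Saved plan does not match the given state"
	case planSerial != curSerial:
		return "Saved plan is stale"
	default:
		return ""
	}
}

func main() {
	fmt.Println(stalePlanProblem("", 0, "2b41d2e5", 0))         // "" — first plan, serials agree
	fmt.Println(stalePlanProblem("2b41d2e5", 4, "2b41d2e5", 7)) // Saved plan is stale
	fmt.Println(stalePlanProblem("9f86d081", 7, "2b41d2e5", 7)) // Saved plan does not match the given state
}
```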
@ -229,12 +313,10 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
|
|||
))
|
||||
}
|
||||
}
|
||||
}
|
||||
// The caller already wrote the "current state" here, but we're overriding
|
||||
// it here with the prior state. These two should actually be identical in
|
||||
// normal use, particularly if we validated the state meta above, but
|
||||
// we do this here anyway to ensure consistent behavior.
|
||||
opts.State = priorStateFile.State
|
||||
// When we're applying a saved plan, the input state is the "prior state"
|
||||
// recorded in the plan, which incorporates the result of all of the
|
||||
// refreshing we did while building the plan.
|
||||
run.InputState = priorStateFile.State
|
||||
|
||||
plan, err := pf.ReadPlan()
|
||||
if err != nil {
|
||||
|
@ -245,33 +327,18 @@ func (b *Local) contextFromPlanFile(pf *planfile.Reader, opts terraform.ContextO
|
|||
))
|
||||
return nil, snap, diags
|
||||
}
|
||||
// When we're applying a saved plan, we populate Plan instead of PlanOpts,
|
||||
// because a plan object incorporates the subset of data from PlanOps that
|
||||
// we need to apply the plan.
|
||||
run.Plan = plan
|
||||
|
||||
variables := terraform.InputValues{}
|
||||
for name, dyVal := range plan.VariableValues {
|
||||
val, err := dyVal.Decode(cty.DynamicPseudoType)
|
||||
if err != nil {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
errSummary,
|
||||
fmt.Sprintf("Invalid value for variable %q recorded in plan file: %s.", name, err),
|
||||
))
|
||||
continue
|
||||
tfCtx, moreDiags := terraform.NewContext(coreOpts)
|
||||
diags = diags.Append(moreDiags)
|
||||
if moreDiags.HasErrors() {
|
||||
return nil, nil, diags
|
||||
}
|
||||
|
||||
variables[name] = &terraform.InputValue{
|
||||
Value: val,
|
||||
SourceType: terraform.ValueFromPlan,
|
||||
}
|
||||
}
|
||||
opts.Variables = variables
|
||||
opts.Changes = plan.Changes
|
||||
opts.Targets = plan.TargetAddrs
|
||||
opts.ForceReplace = plan.ForceReplaceAddrs
|
||||
opts.ProviderSHA256s = plan.ProviderSHA256s
|
||||
|
||||
tfCtx, ctxDiags := terraform.NewContext(&opts)
|
||||
diags = diags.Append(ctxDiags)
|
||||
return tfCtx, snap, diags
|
||||
run.Core = tfCtx
|
||||
return run, snap, diags
|
||||
}
|
||||
|
||||
// interactiveCollectVariables attempts to complete the given existing
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
package local
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
@ -10,19 +11,21 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/command/clistate"
|
||||
"github.com/hashicorp/terraform/internal/command/views"
|
||||
"github.com/hashicorp/terraform/internal/configs/configload"
|
||||
"github.com/hashicorp/terraform/internal/configs/configschema"
|
||||
"github.com/hashicorp/terraform/internal/initwd"
|
||||
"github.com/hashicorp/terraform/internal/plans"
|
||||
"github.com/hashicorp/terraform/internal/plans/planfile"
|
||||
"github.com/hashicorp/terraform/internal/states"
|
||||
"github.com/hashicorp/terraform/internal/states/statefile"
|
||||
"github.com/hashicorp/terraform/internal/states/statemgr"
|
||||
"github.com/hashicorp/terraform/internal/terminal"
|
||||
"github.com/hashicorp/terraform/internal/tfdiags"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
func TestLocalContext(t *testing.T) {
|
||||
func TestLocalRun(t *testing.T) {
|
||||
configDir := "./testdata/empty"
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
_, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir)
|
||||
defer configCleanup()
|
||||
|
@ -38,19 +41,22 @@ func TestLocalContext(t *testing.T) {
|
|||
StateLocker: stateLocker,
|
||||
}
|
||||
|
||||
_, _, diags := b.Context(op)
|
||||
_, _, diags := b.LocalRun(op)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("unexpected error: %s", diags.Err().Error())
|
||||
}
|
||||
|
||||
// Context() retains a lock on success
|
||||
// LocalRun() retains a lock on success
|
||||
assertBackendStateLocked(t, b)
|
||||
}
|
||||
|
||||
func TestLocalContext_error(t *testing.T) {
|
||||
configDir := "./testdata/apply"
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
func TestLocalRun_error(t *testing.T) {
|
||||
configDir := "./testdata/invalid"
|
||||
b := TestLocal(t)
|
||||
|
||||
// This backend will return an error when asked to RefreshState, which
|
||||
// should then cause LocalRun to return with the state unlocked.
|
||||
b.Backend = backendWithStateStorageThatFailsRefresh{}
|
||||
|
||||
_, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir)
|
||||
defer configCleanup()
|
||||
|
@ -66,19 +72,18 @@ func TestLocalContext_error(t *testing.T) {
|
|||
StateLocker: stateLocker,
|
||||
}
|
||||
|
||||
_, _, diags := b.Context(op)
|
||||
_, _, diags := b.LocalRun(op)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("unexpected success")
|
||||
}
|
||||
|
||||
// Context() unlocks the state on failure
|
||||
// LocalRun() unlocks the state on failure
|
||||
assertBackendStateUnlocked(t, b)
|
||||
}
|
||||
|
||||
func TestLocalContext_stalePlan(t *testing.T) {
|
||||
func TestLocalRun_stalePlan(t *testing.T) {
|
||||
configDir := "./testdata/apply"
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
_, configLoader, configCleanup := initwd.MustLoadConfigForTests(t, configDir)
|
||||
defer configCleanup()
|
||||
|
@ -124,10 +129,16 @@ func TestLocalContext_stalePlan(t *testing.T) {
|
|||
stateFile := statefile.New(plan.PriorState, "boop", 2)
|
||||
|
||||
// Roundtrip through serialization as expected by the operation
|
||||
outDir := testTempDir(t)
|
||||
outDir := t.TempDir()
|
||||
defer os.RemoveAll(outDir)
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
if err := planfile.Create(planPath, configload.NewEmptySnapshot(), prevStateFile, stateFile, plan); err != nil {
|
||||
planfileArgs := planfile.CreateArgs{
|
||||
ConfigSnapshot: configload.NewEmptySnapshot(),
|
||||
PreviousRunStateFile: prevStateFile,
|
||||
StateFile: stateFile,
|
||||
Plan: plan,
|
||||
}
|
||||
if err := planfile.Create(planPath, planfileArgs); err != nil {
|
||||
t.Fatalf("unexpected error writing planfile: %s", err)
|
||||
}
|
||||
planFile, err := planfile.Open(planPath)
|
||||
|
@ -147,11 +158,76 @@ func TestLocalContext_stalePlan(t *testing.T) {
|
|||
StateLocker: stateLocker,
|
||||
}
|
||||
|
||||
_, _, diags := b.Context(op)
|
||||
_, _, diags := b.LocalRun(op)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("unexpected success")
|
||||
}
|
||||
|
||||
// Context() unlocks the state on failure
|
||||
// LocalRun() unlocks the state on failure
|
||||
assertBackendStateUnlocked(t, b)
|
||||
}
|
||||
|
||||
type backendWithStateStorageThatFailsRefresh struct {
|
||||
}
|
||||
|
||||
var _ backend.Backend = backendWithStateStorageThatFailsRefresh{}
|
||||
|
||||
func (b backendWithStateStorageThatFailsRefresh) StateMgr(workspace string) (statemgr.Full, error) {
|
||||
return &stateStorageThatFailsRefresh{}, nil
|
||||
}
|
||||
|
||||
func (b backendWithStateStorageThatFailsRefresh) ConfigSchema() *configschema.Block {
|
||||
return &configschema.Block{}
|
||||
}
|
||||
|
||||
func (b backendWithStateStorageThatFailsRefresh) PrepareConfig(in cty.Value) (cty.Value, tfdiags.Diagnostics) {
|
||||
return in, nil
|
||||
}
|
||||
|
||||
func (b backendWithStateStorageThatFailsRefresh) Configure(cty.Value) tfdiags.Diagnostics {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (b backendWithStateStorageThatFailsRefresh) DeleteWorkspace(name string) error {
|
||||
return fmt.Errorf("unimplemented")
|
||||
}
|
||||
|
||||
func (b backendWithStateStorageThatFailsRefresh) Workspaces() ([]string, error) {
|
||||
return []string{"default"}, nil
|
||||
}
|
||||
|
||||
type stateStorageThatFailsRefresh struct {
|
||||
locked bool
|
||||
}
|
||||
|
||||
func (s *stateStorageThatFailsRefresh) Lock(info *statemgr.LockInfo) (string, error) {
|
||||
if s.locked {
|
||||
return "", fmt.Errorf("already locked")
|
||||
}
|
||||
s.locked = true
|
||||
return "locked", nil
|
||||
}
|
||||
|
||||
func (s *stateStorageThatFailsRefresh) Unlock(id string) error {
|
||||
if !s.locked {
|
||||
return fmt.Errorf("not locked")
|
||||
}
|
||||
s.locked = false
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *stateStorageThatFailsRefresh) State() *states.State {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *stateStorageThatFailsRefresh) WriteState(*states.State) error {
|
||||
return fmt.Errorf("unimplemented")
|
||||
}
|
||||
|
||||
func (s *stateStorageThatFailsRefresh) RefreshState() error {
|
||||
return fmt.Errorf("intentionally failing for testing purposes")
|
||||
}
|
||||
|
||||
func (s *stateStorageThatFailsRefresh) PersistState() error {
|
||||
return fmt.Errorf("unimplemented")
|
||||
}
|
||||
|
|
|
@ -54,7 +54,7 @@ func (b *Local) opPlan(
|
|||
}
|
||||
|
||||
// Get our context
|
||||
tfCtx, configSnap, opState, ctxDiags := b.context(op)
|
||||
lr, configSnap, opState, ctxDiags := b.localRun(op)
|
||||
diags = diags.Append(ctxDiags)
|
||||
if ctxDiags.HasErrors() {
|
||||
op.ReportResult(runningOp, diags)
|
||||
|
@ -70,7 +70,9 @@ func (b *Local) opPlan(
|
|||
}
|
||||
}()
|
||||
|
||||
runningOp.State = tfCtx.State()
|
||||
// Since planning doesn't immediately change the persisted state, the
|
||||
// resulting state is always just the input state.
|
||||
runningOp.State = lr.InputState
|
||||
|
||||
// Perform the plan in a goroutine so we can be interrupted
|
||||
var plan *plans.Plan
|
||||
|
@ -79,10 +81,10 @@ func (b *Local) opPlan(
|
|||
go func() {
|
||||
defer close(doneCh)
|
||||
log.Printf("[INFO] backend/local: plan calling Plan")
|
||||
plan, planDiags = tfCtx.Plan()
|
||||
plan, planDiags = lr.Core.Plan(lr.Config, lr.InputState, lr.PlanOpts)
|
||||
}()
|
||||
|
||||
if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) {
|
||||
if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) {
|
||||
// If we get in here then the operation was cancelled, which is always
|
||||
// considered to be a failure.
|
||||
log.Printf("[INFO] backend/local: plan operation was force-cancelled by interrupt")
|
||||
|
@ -131,7 +133,13 @@ func (b *Local) opPlan(
|
|||
}
|
||||
|
||||
log.Printf("[INFO] backend/local: writing plan output to: %s", path)
|
||||
err := planfile.Create(path, configSnap, prevStateFile, plannedStateFile, plan)
|
||||
err := planfile.Create(path, planfile.CreateArgs{
|
||||
ConfigSnapshot: configSnap,
|
||||
PreviousRunStateFile: prevStateFile,
|
||||
StateFile: plannedStateFile,
|
||||
Plan: plan,
|
||||
DependencyLocks: op.DependencyLocks,
|
||||
})
|
||||
if err != nil {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
|
@ -144,7 +152,13 @@ func (b *Local) opPlan(
|
|||
}
|
||||
|
||||
// Render the plan
|
||||
op.View.Plan(plan, tfCtx.Schemas())
|
||||
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
|
||||
diags = diags.Append(moreDiags)
|
||||
if moreDiags.HasErrors() {
|
||||
op.ReportResult(runningOp, diags)
|
||||
return
|
||||
}
|
||||
op.View.Plan(plan, schemas)
|
||||
|
||||
// If we've accumulated any warnings along the way then we'll show them
|
||||
// here just before we show the summary and next steps. If we encountered
|
||||
|
|
|
@ -13,6 +13,7 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/command/clistate"
|
||||
"github.com/hashicorp/terraform/internal/command/views"
|
||||
"github.com/hashicorp/terraform/internal/configs/configschema"
|
||||
"github.com/hashicorp/terraform/internal/depsfile"
|
||||
"github.com/hashicorp/terraform/internal/initwd"
|
||||
"github.com/hashicorp/terraform/internal/plans"
|
||||
"github.com/hashicorp/terraform/internal/plans/planfile"
|
||||
|
@ -23,8 +24,7 @@ import (
|
|||
)
|
||||
|
||||
func TestLocal_planBasic(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
|
||||
|
@ -53,8 +53,7 @@ func TestLocal_planBasic(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_planInAutomation(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
|
||||
const msg = `You didn't use the -out option`
|
||||
|
@ -85,8 +84,7 @@ func TestLocal_planInAutomation(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_planNoConfig(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
TestLocalProvider(t, b, "test", &terraform.ProviderSchema{})
|
||||
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/empty")
|
||||
|
@ -116,12 +114,26 @@ func TestLocal_planNoConfig(t *testing.T) {
|
|||
// This test validates the state locking behavior when the inner call to
// Context() fails
func TestLocal_plan_context_error(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
// This is an intentionally-invalid value to make terraform.NewContext fail
|
||||
// when b.Operation calls it.
|
||||
// NOTE: This test was originally using a provider initialization failure
|
||||
// as its forced error condition, but terraform.NewContext is no longer
|
||||
// responsible for checking that. Invalid parallelism is the last situation
|
||||
// where terraform.NewContext can return error diagnostics, and arguably
|
||||
// we should be validating this argument at the UI layer anyway, so perhaps
|
||||
// in future we'll make terraform.NewContext never return errors and then
|
||||
// this test will become redundant, because its purpose is specifically
|
||||
// to test that we properly unlock the state if terraform.NewContext
|
||||
// returns an error.
|
||||
if b.ContextOpts == nil {
|
||||
b.ContextOpts = &terraform.ContextOpts{}
|
||||
}
|
||||
b.ContextOpts.Parallelism = -1
|
||||
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
|
||||
defer configCleanup()
|
||||
op.PlanRefresh = true
|
||||
|
||||
// we coerce a failure in Context() by omitting the provider schema
|
||||
run, err := b.Operation(context.Background(), op)
|
||||
|
@ -136,14 +148,13 @@ func TestLocal_plan_context_error(t *testing.T) {
|
|||
// the backend should be unlocked after a run
|
||||
assertBackendStateUnlocked(t, b)
|
||||
|
||||
if got, want := done(t).Stderr(), "Error: Could not load plugin"; !strings.Contains(got, want) {
|
||||
if got, want := done(t).Stderr(), "Error: Invalid parallelism value"; !strings.Contains(got, want) {
|
||||
t.Fatalf("unexpected error output:\n%s\nwant: %s", got, want)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLocal_planOutputsChanged(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
testStateFile(t, b.StatePath, states.BuildState(func(ss *states.SyncState) {
|
||||
ss.SetOutputValue(addrs.AbsOutputValue{
|
||||
Module: addrs.RootModuleInstance,
|
||||
|
@ -174,7 +185,7 @@ func TestLocal_planOutputsChanged(t *testing.T) {
|
|||
// unknown" situation because that's already common for printing out
|
||||
// resource changes and we already have many tests for that.
|
||||
}))
|
||||
outDir := testTempDir(t)
|
||||
outDir := t.TempDir()
|
||||
defer os.RemoveAll(outDir)
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan-outputs-changed")
|
||||
|
@ -224,15 +235,14 @@ state, without changing any real infrastructure.
|
|||
|
||||
// Module outputs should not cause the plan to be rendered
|
||||
func TestLocal_planModuleOutputsChanged(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
testStateFile(t, b.StatePath, states.BuildState(func(ss *states.SyncState) {
|
||||
ss.SetOutputValue(addrs.AbsOutputValue{
|
||||
Module: addrs.RootModuleInstance.Child("mod", addrs.NoKey),
|
||||
OutputValue: addrs.OutputValue{Name: "changed"},
|
||||
}, cty.StringVal("before"), false)
|
||||
}))
|
||||
outDir := testTempDir(t)
|
||||
outDir := t.TempDir()
|
||||
defer os.RemoveAll(outDir)
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan-module-outputs-changed")
|
||||
|
@ -271,12 +281,10 @@ No changes. Your infrastructure matches the configuration.
|
|||
}
|
||||
|
||||
func TestLocal_planTainted(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState_tainted())
|
||||
outDir := testTempDir(t)
|
||||
defer os.RemoveAll(outDir)
|
||||
outDir := t.TempDir()
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
|
||||
defer configCleanup()
|
||||
|
@ -329,8 +337,7 @@ Plan: 1 to add, 0 to change, 1 to destroy.`
|
|||
}
|
||||
|
||||
func TestLocal_planDeposedOnly(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, states.BuildState(func(ss *states.SyncState) {
|
||||
ss.SetResourceInstanceDeposed(
|
||||
|
@ -356,8 +363,7 @@ func TestLocal_planDeposedOnly(t *testing.T) {
|
|||
},
|
||||
)
|
||||
}))
|
||||
outDir := testTempDir(t)
|
||||
defer os.RemoveAll(outDir)
|
||||
outDir := t.TempDir()
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
|
||||
defer configCleanup()
|
||||
|
@ -444,12 +450,11 @@ Plan: 1 to add, 0 to change, 1 to destroy.`
|
|||
}
|
||||
|
||||
func TestLocal_planTainted_createBeforeDestroy(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState_tainted())
|
||||
outDir := testTempDir(t)
|
||||
defer os.RemoveAll(outDir)
|
||||
outDir := t.TempDir()
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan-cbd")
|
||||
defer configCleanup()
|
||||
|
@ -502,8 +507,7 @@ Plan: 1 to add, 0 to change, 1 to destroy.`
|
|||
}
|
||||
|
||||
func TestLocal_planRefreshFalse(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState())
|
||||
|
@ -534,14 +538,12 @@ func TestLocal_planRefreshFalse(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_planDestroy(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState())
|
||||
|
||||
outDir := testTempDir(t)
|
||||
defer os.RemoveAll(outDir)
|
||||
outDir := t.TempDir()
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
|
||||
|
@ -588,14 +590,12 @@ func TestLocal_planDestroy(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_planDestroy_withDataSources(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState_withDataSource())
|
||||
|
||||
outDir := testTempDir(t)
|
||||
defer os.RemoveAll(outDir)
|
||||
outDir := t.TempDir()
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/destroy-with-ds")
|
||||
|
@ -665,13 +665,11 @@ func getAddrs(resources []*plans.ResourceInstanceChangeSrc) []string {
|
|||
}
|
||||
|
||||
func TestLocal_planOutPathNoChange(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState())
|
||||
|
||||
outDir := testTempDir(t)
|
||||
defer os.RemoveAll(outDir)
|
||||
outDir := t.TempDir()
|
||||
planPath := filepath.Join(outDir, "plan.tfplan")
|
||||
|
||||
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
|
||||
|
@ -719,12 +717,18 @@ func testOperationPlan(t *testing.T, configDir string) (*backend.Operation, func
|
|||
streams, done := terminal.StreamsForTesting(t)
|
||||
view := views.NewOperation(arguments.ViewHuman, false, views.NewView(streams))
|
||||
|
||||
// Many of our tests use an overridden "test" provider that's just in-memory
|
||||
// inside the test process, not a separate plugin on disk.
|
||||
depLocks := depsfile.NewLocks()
|
||||
depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test"))
|
||||
|
||||
return &backend.Operation{
|
||||
Type: backend.OperationTypePlan,
|
||||
ConfigDir: configDir,
|
||||
ConfigLoader: configLoader,
|
||||
StateLocker: clistate.NewNoopLocker(),
|
||||
View: view,
|
||||
DependencyLocks: depLocks,
|
||||
}, configCleanup, done
|
||||
}
|
||||
|
||||
|
@ -737,7 +741,7 @@ func testPlanState() *states.State {
|
|||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_instance",
|
||||
Name: "foo",
|
||||
}.Instance(addrs.IntKey(0)),
|
||||
}.Instance(addrs.NoKey),
|
||||
&states.ResourceInstanceObjectSrc{
|
||||
Status: states.ObjectReady,
|
||||
AttrsJSON: []byte(`{
|
||||
|
|
|
@ -6,7 +6,6 @@ import (
|
|||
"log"
|
||||
"os"
|
||||
|
||||
"github.com/hashicorp/errwrap"
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/states"
|
||||
"github.com/hashicorp/terraform/internal/states/statemgr"
|
||||
|
@ -45,7 +44,7 @@ func (b *Local) opRefresh(
|
|||
op.PlanRefresh = true
|
||||
|
||||
// Get our context
|
||||
tfCtx, _, opState, contextDiags := b.context(op)
|
||||
lr, _, opState, contextDiags := b.localRun(op)
|
||||
diags = diags.Append(contextDiags)
|
||||
if contextDiags.HasErrors() {
|
||||
op.ReportResult(runningOp, diags)
|
||||
|
@ -62,13 +61,14 @@ func (b *Local) opRefresh(
|
|||
}
|
||||
}()
|
||||
|
||||
// Set our state
|
||||
runningOp.State = opState.State()
|
||||
if !runningOp.State.HasResources() {
|
||||
// If we succeed then we'll overwrite this with the resulting state below,
|
||||
// but otherwise the resulting state is just the input state.
|
||||
runningOp.State = lr.InputState
|
||||
if !runningOp.State.HasManagedResourceInstanceObjects() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Warning,
|
||||
"Empty or non-existent state",
|
||||
"There are currently no resources tracked in the state, so there is nothing to refresh.",
|
||||
"There are currently no remote objects tracked in the state, so there is nothing to refresh.",
|
||||
))
|
||||
}
|
||||
|
||||
|
@ -78,11 +78,11 @@ func (b *Local) opRefresh(
|
|||
doneCh := make(chan struct{})
|
||||
go func() {
|
||||
defer close(doneCh)
|
||||
newState, refreshDiags = tfCtx.Refresh()
|
||||
newState, refreshDiags = lr.Core.Refresh(lr.Config, lr.InputState, lr.PlanOpts)
|
||||
log.Printf("[INFO] backend/local: refresh calling Refresh")
|
||||
}()
|
||||
|
||||
if b.opWait(doneCh, stopCtx, cancelCtx, tfCtx, opState, op.View) {
|
||||
if b.opWait(doneCh, stopCtx, cancelCtx, lr.Core, opState, op.View) {
|
||||
return
|
||||
}
|
||||
|
||||
|
@ -96,7 +96,7 @@ func (b *Local) opRefresh(
|
|||
|
||||
err := statemgr.WriteAndPersist(opState, newState)
|
||||
if err != nil {
|
||||
diags = diags.Append(errwrap.Wrapf("Failed to write state: {{err}}", err))
|
||||
diags = diags.Append(fmt.Errorf("failed to write state: %w", err))
|
||||
op.ReportResult(runningOp, diags)
|
||||
return
|
||||
}
|
||||
|
|
|
@ -12,6 +12,7 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/command/clistate"
|
||||
"github.com/hashicorp/terraform/internal/command/views"
|
||||
"github.com/hashicorp/terraform/internal/configs/configschema"
|
||||
"github.com/hashicorp/terraform/internal/depsfile"
|
||||
"github.com/hashicorp/terraform/internal/initwd"
|
||||
"github.com/hashicorp/terraform/internal/providers"
|
||||
"github.com/hashicorp/terraform/internal/states"
|
||||
|
@ -22,8 +23,7 @@ import (
|
|||
)
|
||||
|
||||
func TestLocal_refresh(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", refreshFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testRefreshState())
|
||||
|
@ -58,8 +58,7 @@ test_instance.foo:
|
|||
}
|
||||
|
||||
func TestLocal_refreshInput(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
schema := &terraform.ProviderSchema{
|
||||
Provider: &configschema.Block{
|
||||
|
@ -121,8 +120,7 @@ test_instance.foo:
|
|||
}
|
||||
|
||||
func TestLocal_refreshValidate(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
p := TestLocalProvider(t, b, "test", refreshFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testRefreshState())
|
||||
p.ReadResourceFn = nil
|
||||
|
@ -151,8 +149,7 @@ test_instance.foo:
|
|||
}
|
||||
|
||||
func TestLocal_refreshValidateProviderConfigured(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
schema := &terraform.ProviderSchema{
|
||||
Provider: &configschema.Block{
|
||||
|
@ -204,8 +201,7 @@ test_instance.foo:
|
|||
// This test validates the state locking behavior when the inner call to
// Context() fails
func TestLocal_refresh_context_error(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
testStateFile(t, b.StatePath, testRefreshState())
|
||||
op, configCleanup, done := testOperationRefresh(t, "./testdata/apply")
|
||||
defer configCleanup()
|
||||
|
@ -225,8 +221,7 @@ func TestLocal_refresh_context_error(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestLocal_refreshEmptyState(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
b := TestLocal(t)
|
||||
|
||||
p := TestLocalProvider(t, b, "test", refreshFixtureSchema())
|
||||
testStateFile(t, b.StatePath, states.NewState())
|
||||
|
@ -266,12 +261,18 @@ func testOperationRefresh(t *testing.T, configDir string) (*backend.Operation, f
|
|||
streams, done := terminal.StreamsForTesting(t)
|
||||
view := views.NewOperation(arguments.ViewHuman, false, views.NewView(streams))
|
||||
|
||||
// Many of our tests use an overridden "test" provider that's just in-memory
|
||||
// inside the test process, not a separate plugin on disk.
|
||||
depLocks := depsfile.NewLocks()
|
||||
depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/test"))
|
||||
|
||||
return &backend.Operation{
|
||||
Type: backend.OperationTypeRefresh,
|
||||
ConfigDir: configDir,
|
||||
ConfigLoader: configLoader,
|
||||
StateLocker: clistate.NewNoopLocker(),
|
||||
View: view,
|
||||
DependencyLocks: depLocks,
|
||||
}, configCleanup, done
|
||||
}
|
||||
|
||||
|
|
|
@ -178,9 +178,9 @@ type testDelegateBackend struct {
|
|||
deleteErr bool
|
||||
}
|
||||
|
||||
var errTestDelegateState = errors.New("State called")
|
||||
var errTestDelegateStates = errors.New("States called")
|
||||
var errTestDelegateDeleteState = errors.New("Delete called")
|
||||
var errTestDelegateState = errors.New("state called")
|
||||
var errTestDelegateStates = errors.New("states called")
|
||||
var errTestDelegateDeleteState = errors.New("delete called")
|
||||
|
||||
func (b *testDelegateBackend) StateMgr(name string) (statemgr.Full, error) {
|
||||
if b.stateErr {
|
||||
|
|
|
@ -0,0 +1,6 @@
|
|||
# This configuration is intended to be loadable (valid syntax, etc) but to
|
||||
# fail terraform.Context.Validate.
|
||||
|
||||
locals {
|
||||
a = local.nonexist
|
||||
}
|
|
@ -1,8 +1,6 @@
|
|||
package local
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
|
@@ -22,9 +20,12 @@ import (
//
// No operations will be called on the returned value, so you can still set
// public fields without any locks.
func TestLocal(t *testing.T) (*Local, func()) {
func TestLocal(t *testing.T) *Local {
	t.Helper()
	tempDir := testTempDir(t)
	tempDir, err := filepath.EvalSymlinks(t.TempDir())
	if err != nil {
		t.Fatal(err)
	}

	local := New()
	local.StatePath = filepath.Join(tempDir, "state.tfstate")

@@ -33,13 +34,7 @@ func TestLocal(t *testing.T) (*Local, func()) {
	local.StateWorkspaceDir = filepath.Join(tempDir, "state.tfstate.d")
	local.ContextOpts = &terraform.ContextOpts{}

	cleanup := func() {
		if err := os.RemoveAll(tempDir); err != nil {
			t.Fatal("error cleanup up test:", err)
		}
	}

	return local, cleanup
	return local
}

// TestLocalProvider modifies the ContextOpts of the *Local parameter to

@@ -189,15 +184,6 @@ func (b *TestLocalNoDefaultState) StateMgr(name string) (statemgr.Full, error) {
	return b.Local.StateMgr(name)
}

func testTempDir(t *testing.T) string {
	d, err := ioutil.TempDir("", "tf")
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	return d
}

func testStateFile(t *testing.T, path string, s *states.State) {
	stateFile := statemgr.NewFilesystem(path)
	stateFile.WriteState(s)

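These testing hunks delete the hand-rolled `testTempDir` helper and its cleanup closure in favor of `t.TempDir()`, which creates a per-test directory and removes it automatically via `t.Cleanup`. A self-contained example of the same pattern (not from this patch):

```go
package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestWritesStateFile(t *testing.T) {
	// t.TempDir registers its own removal, so no defer os.RemoveAll is needed
	// and a failed test cannot leave stray directories behind.
	dir := t.TempDir()

	statePath := filepath.Join(dir, "state.tfstate")
	if err := os.WriteFile(statePath, []byte(`{"version": 4}`), 0o600); err != nil {
		t.Fatal(err)
	}

	if _, err := os.Stat(statePath); err != nil {
		t.Fatalf("state file was not written: %s", err)
	}
}
```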
@ -11,7 +11,6 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/states"
|
||||
"github.com/hashicorp/terraform/internal/states/remote"
|
||||
"github.com/hashicorp/terraform/internal/states/statemgr"
|
||||
"github.com/likexian/gokit/assert"
|
||||
)
|
||||
|
||||
// Define file suffix
|
||||
|
@ -88,7 +87,15 @@ func (b *Backend) StateMgr(name string) (statemgr.Full, error) {
|
|||
return nil, err
|
||||
}
|
||||
|
||||
if !assert.IsContains(ws, name) {
|
||||
exists := false
|
||||
for _, candidate := range ws {
|
||||
if candidate == name {
|
||||
exists = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !exists {
|
||||
log.Printf("[DEBUG] workspace %v not exists", name)
|
||||
|
||||
// take a lock on this state while we write it
|
||||
|
|
|
@ -9,7 +9,6 @@ import (
|
|||
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/states/remote"
|
||||
"github.com/likexian/gokit/assert"
|
||||
)
|
||||
|
||||
const (
|
||||
|
@@ -38,12 +37,18 @@ func TestStateFile(t *testing.T) {
	}

	for _, c := range cases {
		t.Run(fmt.Sprintf("%s %s %s", c.prefix, c.key, c.stateName), func(t *testing.T) {
			b := &Backend{
				prefix: c.prefix,
				key:    c.key,
			}
			assert.Equal(t, b.stateFile(c.stateName), c.wantStateFile)
			assert.Equal(t, b.lockFile(c.stateName), c.wantLockFile)
			if got, want := b.stateFile(c.stateName), c.wantStateFile; got != want {
				t.Errorf("wrong state file name\ngot: %s\nwant: %s", got, want)
			}
			if got, want := b.lockFile(c.stateName), c.wantLockFile; got != want {
				t.Errorf("wrong lock file name\ngot: %s\nwant: %s", got, want)
			}
		})
	}
}

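The table test above replaces the third-party `assert` helpers with plain comparisons, removing a dependency and making failures print both values. A minimal self-contained version of the idiom; the `stateFileName` helper is hypothetical and only stands in for the real `b.stateFile`:

```go
package example

import "testing"

// stateFileName is a hypothetical stand-in for the backend's stateFile method.
func stateFileName(prefix, key, workspace string) string {
	if workspace == "default" {
		return prefix + key
	}
	return prefix + workspace + "/" + key
}

func TestStateFileName(t *testing.T) {
	if got, want := stateFileName("env:/", "terraform.tfstate", "dev"), "env:/dev/terraform.tfstate"; got != want {
		t.Errorf("wrong state file name\ngot: %s\nwant: %s", got, want)
	}
}
```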
@ -56,10 +61,14 @@ func TestRemoteClient(t *testing.T) {
|
|||
defer teardownBackend(t, be)
|
||||
|
||||
ss, err := be.StateMgr(backend.DefaultStateName)
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
rs, ok := ss.(*remote.State)
|
||||
assert.True(t, ok)
|
||||
if !ok {
|
||||
t.Fatalf("wrong state manager type\ngot: %T\nwant: %T", ss, rs)
|
||||
}
|
||||
|
||||
remote.TestClient(t, rs.Client)
|
||||
}
|
||||
|
@ -74,10 +83,14 @@ func TestRemoteClientWithPrefix(t *testing.T) {
|
|||
defer teardownBackend(t, be)
|
||||
|
||||
ss, err := be.StateMgr(backend.DefaultStateName)
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
rs, ok := ss.(*remote.State)
|
||||
assert.True(t, ok)
|
||||
if !ok {
|
||||
t.Fatalf("wrong state manager type\ngot: %T\nwant: %T", ss, rs)
|
||||
}
|
||||
|
||||
remote.TestClient(t, rs.Client)
|
||||
}
|
||||
|
@ -91,10 +104,14 @@ func TestRemoteClientWithEncryption(t *testing.T) {
|
|||
defer teardownBackend(t, be)
|
||||
|
||||
ss, err := be.StateMgr(backend.DefaultStateName)
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
rs, ok := ss.(*remote.State)
|
||||
assert.True(t, ok)
|
||||
if !ok {
|
||||
t.Fatalf("wrong state manager type\ngot: %T\nwant: %T", ss, rs)
|
||||
}
|
||||
|
||||
remote.TestClient(t, rs.Client)
|
||||
}
|
||||
|
@ -122,10 +139,14 @@ func TestRemoteLocks(t *testing.T) {
|
|||
}
|
||||
|
||||
c0, err := remoteClient()
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
c1, err := remoteClient()
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
remote.TestRemoteLocks(t, c0, c1)
|
||||
}
|
||||
|
@ -203,10 +224,14 @@ func setupBackend(t *testing.T, bucket, prefix, key string, encrypt bool) backen
|
|||
be := b.(*Backend)
|
||||
|
||||
c, err := be.client("tencentcloud")
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
err = c.putBucket()
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
return b
|
||||
}
|
||||
|
@ -215,10 +240,14 @@ func teardownBackend(t *testing.T, b backend.Backend) {
|
|||
t.Helper()
|
||||
|
||||
c, err := b.(*Backend).client("tencentcloud")
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
|
||||
err = c.deleteBucket(true)
|
||||
assert.Nil(t, err)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
func bucketName(t *testing.T) string {
|
||||
|
|
|
@ -4,6 +4,7 @@ package gcs
|
|||
import (
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"strings"
|
||||
|
@ -141,6 +142,10 @@ func (b *Backend) configure(ctx context.Context) error {
|
|||
return fmt.Errorf("Error loading credentials: %s", err)
|
||||
}
|
||||
|
||||
if !json.Valid([]byte(contents)) {
|
||||
return fmt.Errorf("the string provided in credentials is neither valid json nor a valid file path")
|
||||
}
|
||||
|
||||
credOptions = append(credOptions, option.WithCredentialsJSON([]byte(contents)))
|
||||
}
|
||||
|
||||
|
|
|
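The new guard rejects a `credentials` value that is neither a readable file nor inline JSON before it reaches the Google client. `json.Valid` checks syntax only, which is all that is needed to distinguish the two cases; a self-contained illustration (not from this patch):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	inline := []byte(`{"type": "service_account", "project_id": "example"}`)
	pathLike := []byte(`/home/user/creds.json`) // looks like a path, not JSON

	fmt.Println(json.Valid(inline))   // true: treat as inline credentials
	fmt.Println(json.Valid(pathLike)) // false: report a clear configuration error
}
```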
@ -91,6 +91,8 @@ type Remote struct {
|
|||
}
|
||||
|
||||
var _ backend.Backend = (*Remote)(nil)
|
||||
var _ backend.Enhanced = (*Remote)(nil)
|
||||
var _ backend.Local = (*Remote)(nil)
|
||||
|
||||
// New creates a new initialized remote backend.
|
||||
func New(services *disco.Disco) *Remote {
|
||||
|
@ -708,6 +710,7 @@ func (b *Remote) Operation(ctx context.Context, op *backend.Operation) (*backend
|
|||
// Record that we're forced to run operations locally to allow the
|
||||
// command package UI to operate correctly
|
||||
b.forceLocal = true
|
||||
log.Printf("[DEBUG] Remote backend is delegating %s to the local backend", op.Type)
|
||||
return b.local.Operation(ctx, op)
|
||||
}
|
||||
|
||||
|
@ -886,7 +889,7 @@ func (b *Remote) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.D
|
|||
|
||||
// If the workspace has remote operations disabled, the remote Terraform
|
||||
// version is effectively meaningless, so we'll skip version verification.
|
||||
if workspace.Operations == false {
|
||||
if !workspace.Operations {
|
||||
return nil
|
||||
}
|
||||
|
||||
|
@@ -915,9 +918,9 @@ func (b *Remote) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.D
	// are aware of are:
	//
	// - 0.14.0 is guaranteed to be compatible with versions up to but not
	//   including 1.1.0
	v110 := version.Must(version.NewSemver("1.1.0"))
	if tfversion.SemVer.LessThan(v110) && remoteVersion.LessThan(v110) {
	//   including 1.2.0
	v120 := version.Must(version.NewSemver("1.2.0"))
	if tfversion.SemVer.LessThan(v120) && remoteVersion.LessThan(v120) {
		return diags
	}
	// - Any new Terraform state version will require at least minor patch

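The hunk above widens the remote/local state compatibility window from 1.1.0 to 1.2.0 using semver comparisons from HashiCorp's go-version library, which the surrounding code already uses as `version`. A small self-contained illustration of the check it relies on (not from this patch):

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

func main() {
	boundary := version.Must(version.NewSemver("1.2.0"))

	local := version.Must(version.NewSemver("1.1.3"))
	remote := version.Must(version.NewSemver("1.0.11"))

	// Both sides below the boundary are treated as state-compatible,
	// mirroring VerifyWorkspaceTerraformVersion above.
	fmt.Println(local.LessThan(boundary) && remote.LessThan(boundary)) // true
}
```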
@ -961,22 +964,6 @@ func (b *Remote) IsLocalOperations() bool {
|
|||
return b.forceLocal
|
||||
}
|
||||
|
||||
// Colorize returns the Colorize structure that can be used for colorizing
|
||||
// output. This is guaranteed to always return a non-nil value and so useful
|
||||
// as a helper to wrap any potentially colored strings.
|
||||
//
|
||||
// TODO SvH: Rename this back to Colorize as soon as we can pass -no-color.
|
||||
func (b *Remote) cliColorize() *colorstring.Colorize {
|
||||
if b.CLIColor != nil {
|
||||
return b.CLIColor
|
||||
}
|
||||
|
||||
return &colorstring.Colorize{
|
||||
Colors: colorstring.DefaultColors,
|
||||
Disable: true,
|
||||
}
|
||||
}
|
||||
|
||||
func generalError(msg string, err error) error {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
|
|
|
@@ -42,7 +42,7 @@ func (b *Remote) opApply(stopCtx, cancelCtx context.Context, op *backend.Operati
return nil, diags.Err()
}

if op.Parallelism != defaultParallelism {
if b.ContextOpts != nil && b.ContextOpts.Parallelism != defaultParallelism {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Custom parallelism values are currently not supported",
@@ -17,6 +17,7 @@ import (
"github.com/hashicorp/terraform/internal/command/arguments"
"github.com/hashicorp/terraform/internal/command/clistate"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/depsfile"
"github.com/hashicorp/terraform/internal/initwd"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/plans/planfile"

@@ -43,14 +44,19 @@ func testOperationApplyWithTimeout(t *testing.T, configDir string, timeout time.
stateLockerView := views.NewStateLocker(arguments.ViewHuman, view)
operationView := views.NewOperation(arguments.ViewHuman, false, view)

// Many of our tests use an overridden "null" provider that's just in-memory
// inside the test process, not a separate plugin on disk.
depLocks := depsfile.NewLocks()
depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/null"))

return &backend.Operation{
ConfigDir: configDir,
ConfigLoader: configLoader,
Parallelism: defaultParallelism,
PlanRefresh: true,
StateLocker: clistate.NewLocker(timeout, stateLockerView),
Type: backend.OperationTypeApply,
View: operationView,
DependencyLocks: depLocks,
}, configCleanup, done
}

@@ -223,7 +229,10 @@ func TestRemote_applyWithParallelism(t *testing.T) {
op, configCleanup, done := testOperationApply(t, "./testdata/apply")
defer configCleanup()

op.Parallelism = 3
if b.ContextOpts == nil {
b.ContextOpts = &terraform.ContextOpts{}
}
b.ContextOpts.Parallelism = 3
op.Workspace = backend.DefaultStateName

run, err := b.Operation(context.Background(), op)

@@ -999,7 +1008,7 @@ func TestRemote_applyForceLocal(t *testing.T) {
if output := done(t).Stdout(); !strings.Contains(output, "1 to add, 0 to change, 0 to destroy") {
t.Fatalf("expected plan summary in output: %s", output)
}
if !run.State.HasResources() {
if !run.State.HasManagedResourceInstanceObjects() {
t.Fatalf("expected resources in state")
}
}

@@ -1062,7 +1071,7 @@ func TestRemote_applyWorkspaceWithoutOperations(t *testing.T) {
if output := done(t).Stdout(); !strings.Contains(output, "1 to add, 0 to change, 0 to destroy") {
t.Fatalf("expected plan summary in output: %s", output)
}
if !run.State.HasResources() {
if !run.State.HasManagedResourceInstanceObjects() {
t.Fatalf("expected resources in state")
}
}

@@ -1594,7 +1603,7 @@ func TestRemote_applyVersionCheck(t *testing.T) {
}

// RUN: prepare the apply operation and run it
op, configCleanup, done := testOperationApply(t, "./testdata/apply")
op, configCleanup, _ := testOperationApply(t, "./testdata/apply")
defer configCleanup()

streams, done := terminal.StreamsForTesting(t)

@@ -1637,7 +1646,7 @@ func TestRemote_applyVersionCheck(t *testing.T) {
output := b.CLI.(*cli.MockUi).OutputWriter.String()
hasRemote := strings.Contains(output, "Running apply in the remote backend")
hasSummary := strings.Contains(output, "1 added, 0 changed, 0 destroyed")
hasResources := run.State.HasResources()
hasResources := run.State.HasManagedResourceInstanceObjects()
if !tc.forceLocal && tc.hasOperations {
if !hasRemote {
t.Errorf("missing remote backend header in output: %s", output)
@@ -6,7 +6,6 @@ import (
"log"
"strings"

"github.com/hashicorp/errwrap"
tfe "github.com/hashicorp/go-tfe"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"

@@ -18,9 +17,15 @@ import (
"github.com/zclconf/go-cty/cty"
)

// Context implements backend.Enhanced.
func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Full, tfdiags.Diagnostics) {
// Context implements backend.Local.
func (b *Remote) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
ret := &backend.LocalRun{
PlanOpts: &terraform.PlanOpts{
Mode: op.PlanMode,
Targets: op.Targets,
},
}

op.StateLocker = op.StateLocker.WithContext(context.Background())

@@ -31,7 +36,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
log.Printf("[TRACE] backend/remote: requesting state manager for workspace %q", remoteWorkspaceName)
stateMgr, err := b.StateMgr(op.Workspace)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
return nil, nil, diags
}

@@ -50,7 +55,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu

log.Printf("[TRACE] backend/remote: reading remote state for workspace %q", remoteWorkspaceName)
if err := stateMgr.RefreshState(); err != nil {
diags = diags.Append(errwrap.Wrapf("Error loading state: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading state: %w", err))
return nil, nil, diags
}

@@ -61,15 +66,13 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
}

// Copy set options from the operation
opts.PlanMode = op.PlanMode
opts.Targets = op.Targets
opts.UIInput = op.UIIn

// Load the latest state. If we enter contextFromPlanFile below then the
// state snapshot in the plan file must match this, or else it'll return
// error diagnostics.
log.Printf("[TRACE] backend/remote: retrieving remote state snapshot for workspace %q", remoteWorkspaceName)
opts.State = stateMgr.State()
ret.InputState = stateMgr.State()

log.Printf("[TRACE] backend/remote: loading configuration for the current working directory")
config, configDiags := op.ConfigLoader.LoadConfig(op.ConfigDir)

@@ -77,21 +80,21 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
if configDiags.HasErrors() {
return nil, nil, diags
}
opts.Config = config
ret.Config = config

// The underlying API expects us to use the opaque workspace id to request
// variables, so we'll need to look that up using our organization name
// and workspace name.
remoteWorkspaceID, err := b.getRemoteWorkspaceID(context.Background(), op.Workspace)
if err != nil {
diags = diags.Append(errwrap.Wrapf("Error finding remote workspace: {{err}}", err))
diags = diags.Append(fmt.Errorf("error finding remote workspace: %w", err))
return nil, nil, diags
}

log.Printf("[TRACE] backend/remote: retrieving variables from workspace %s/%s (%s)", remoteWorkspaceName, b.organization, remoteWorkspaceID)
tfeVariables, err := b.client.Variables.List(context.Background(), remoteWorkspaceID, tfe.VariableListOptions{})
if err != nil && err != tfe.ErrResourceNotFound {
diags = diags.Append(errwrap.Wrapf("Error loading variables: {{err}}", err))
diags = diags.Append(fmt.Errorf("error loading variables: %w", err))
return nil, nil, diags
}

@@ -100,7 +103,7 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
// more lax about them, stubbing out any unset ones as unknown.
// This gives us enough information to produce a consistent context,
// but not enough information to run a real operation (plan, apply, etc)
opts.Variables = stubAllVariables(op.Variables, config.Module.Variables)
ret.PlanOpts.SetVariables = stubAllVariables(op.Variables, config.Module.Variables)
} else {
if tfeVariables != nil {
if op.Variables == nil {

@@ -121,16 +124,17 @@ func (b *Remote) Context(op *backend.Operation) (*terraform.Context, statemgr.Fu
if diags.HasErrors() {
return nil, nil, diags
}
opts.Variables = variables
ret.PlanOpts.SetVariables = variables
}
}

tfCtx, ctxDiags := terraform.NewContext(&opts)
diags = diags.Append(ctxDiags)
ret.Core = tfCtx

log.Printf("[TRACE] backend/remote: finished building terraform.Context")

return tfCtx, stateMgr, diags
return ret, stateMgr, diags
}

func (b *Remote) getRemoteWorkspaceName(localWorkspaceName string) string {

@@ -204,7 +204,7 @@ func TestRemoteContextWithVars(t *testing.T) {
}
b.client.Variables.Create(context.TODO(), workspaceID, *v)

_, _, diags := b.Context(op)
_, _, diags := b.LocalRun(op)

if test.WantError != "" {
if !diags.HasErrors() {
@@ -38,7 +38,7 @@ func (b *Remote) opPlan(stopCtx, cancelCtx context.Context, op *backend.Operatio
return nil, diags.Err()
}

if op.Parallelism != defaultParallelism {
if b.ContextOpts != nil && b.ContextOpts.Parallelism != defaultParallelism {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Custom parallelism values are currently not supported",

@@ -16,6 +16,7 @@ import (
"github.com/hashicorp/terraform/internal/command/arguments"
"github.com/hashicorp/terraform/internal/command/clistate"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/depsfile"
"github.com/hashicorp/terraform/internal/initwd"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/plans/planfile"

@@ -41,14 +42,19 @@ func testOperationPlanWithTimeout(t *testing.T, configDir string, timeout time.D
stateLockerView := views.NewStateLocker(arguments.ViewHuman, view)
operationView := views.NewOperation(arguments.ViewHuman, false, view)

// Many of our tests use an overridden "null" provider that's just in-memory
// inside the test process, not a separate plugin on disk.
depLocks := depsfile.NewLocks()
depLocks.SetProviderOverridden(addrs.MustParseProviderSourceString("registry.terraform.io/hashicorp/null"))

return &backend.Operation{
ConfigDir: configDir,
ConfigLoader: configLoader,
Parallelism: defaultParallelism,
PlanRefresh: true,
StateLocker: clistate.NewLocker(timeout, stateLockerView),
Type: backend.OperationTypePlan,
View: operationView,
DependencyLocks: depLocks,
}, configCleanup, done
}

@@ -198,7 +204,10 @@ func TestRemote_planWithParallelism(t *testing.T) {
op, configCleanup, done := testOperationPlan(t, "./testdata/plan")
defer configCleanup()

op.Parallelism = 3
if b.ContextOpts == nil {
b.ContextOpts = &terraform.ContextOpts{}
}
b.ContextOpts.Parallelism = 3
op.Workspace = backend.DefaultStateName

run, err := b.Operation(context.Background(), op)

@@ -566,7 +566,8 @@ func TestRemote_VerifyWorkspaceTerraformVersion(t *testing.T) {
{"0.14.0", "0.13.5", false, false},
{"0.14.0", "0.14.1", true, false},
{"0.14.0", "1.0.99", true, false},
{"0.14.0", "1.1.0", true, true},
{"0.14.0", "1.1.0", true, false},
{"0.14.0", "1.2.0", true, true},
{"1.2.0", "1.2.99", true, false},
{"1.2.0", "1.3.0", true, true},
{"0.15.0", "latest", true, false},
@@ -105,7 +105,7 @@ func TestBackendStates(t *testing.T, b Backend) {
if err := foo.RefreshState(); err != nil {
t.Fatalf("bad: %s", err)
}
if v := foo.State(); v.HasResources() {
if v := foo.State(); v.HasManagedResourceInstanceObjects() {
t.Fatalf("should be empty: %s", v)
}

@@ -116,7 +116,7 @@ func TestBackendStates(t *testing.T, b Backend) {
if err := bar.RefreshState(); err != nil {
t.Fatalf("bad: %s", err)
}
if v := bar.State(); v.HasResources() {
if v := bar.State(); v.HasManagedResourceInstanceObjects() {
t.Fatalf("should be empty: %s", v)
}

@@ -168,7 +168,7 @@ func TestBackendStates(t *testing.T, b Backend) {
t.Fatal("error refreshing foo:", err)
}
fooState = foo.State()
if fooState.HasResources() {
if fooState.HasManagedResourceInstanceObjects() {
t.Fatal("after writing a resource to bar, foo now has resources too")
}

@@ -181,7 +181,7 @@ func TestBackendStates(t *testing.T, b Backend) {
t.Fatal("error refreshing foo:", err)
}
fooState = foo.State()
if fooState.HasResources() {
if fooState.HasManagedResourceInstanceObjects() {
t.Fatal("after writing a resource to bar and re-reading foo, foo now has resources too")
}

@@ -194,7 +194,7 @@ func TestBackendStates(t *testing.T, b Backend) {
t.Fatal("error refreshing bar:", err)
}
barState = bar.State()
if !barState.HasResources() {
if !barState.HasManagedResourceInstanceObjects() {
t.Fatal("after writing a resource instance object to bar and re-reading it, the object has vanished")
}
}

@@ -237,7 +237,7 @@ func TestBackendStates(t *testing.T, b Backend) {
if err := foo.RefreshState(); err != nil {
t.Fatalf("bad: %s", err)
}
if v := foo.State(); v.HasResources() {
if v := foo.State(); v.HasManagedResourceInstanceObjects() {
t.Fatalf("should be empty: %s", v)
}
// and delete it again
@@ -26,6 +26,7 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
"declared1": {
Name: "declared1",
Type: cty.String,
ConstraintType: cty.String,
ParsingMode: configs.VariableParseLiteral,
DeclRange: hcl.Range{
Filename: "fake.tf",

@@ -36,6 +37,7 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
"missing1": {
Name: "missing1",
Type: cty.String,
ConstraintType: cty.String,
ParsingMode: configs.VariableParseLiteral,
DeclRange: hcl.Range{
Filename: "fake.tf",

@@ -46,6 +48,7 @@ func TestParseVariableValuesUndeclared(t *testing.T) {
"missing2": {
Name: "missing1",
Type: cty.String,
ConstraintType: cty.String,
ParsingMode: configs.VariableParseLiteral,
Default: cty.StringVal("default for missing2"),
DeclRange: hcl.Range{
@@ -3,6 +3,7 @@ package command
import (
"fmt"
"os"
"path/filepath"
"strings"

"github.com/hashicorp/hcl/v2"

@@ -33,6 +34,43 @@ func (c *AddCommand) Run(rawArgs []string) int {
return 1
}

// In case the output configuration path is specified, we should ensure the
// target resource address doesn't exist in the module tree indicated by
// the existing configuration files.
if args.OutPath != "" {
// Ensure the directory to the path exists and is accessible.
outDir := filepath.Dir(args.OutPath)
if _, err := os.Stat(outDir); os.IsNotExist(err) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"The out path doesn't exist or is not accessible",
err.Error(),
))
view.Diagnostics(diags)
return 1
}

config, loadDiags := c.loadConfig(outDir)
diags = diags.Append(loadDiags)
if diags.HasErrors() {
view.Diagnostics(diags)
return 1
}

if config != nil && config.Module != nil {
if rs, ok := config.Module.ManagedResources[args.Addr.ContainingResource().Config().String()]; ok {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Resource already in configuration",
Detail: fmt.Sprintf("The resource %s is already in this configuration at %s. Resource names must be unique per type in each module.", args.Addr, rs.DeclRange),
Subject: &rs.DeclRange,
})
c.View.Diagnostics(diags)
return 1
}
}
}

// Check for user-supplied plugin path
var err error
if c.pluginPath, err = c.loadPluginPath(); err != nil {

@@ -99,43 +137,41 @@ func (c *AddCommand) Run(rawArgs []string) int {
}

// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
view.Diagnostics(diags)
return 1
}

// Successfully creating the context can result in a lock, so ensure we release it
defer func() {
diags := opReq.StateLocker.Unlock()
if diags.HasErrors() {
c.showDiagnostics(diags)
}
}()

// load the configuration to verify that the resource address doesn't
// already exist in the config.
var module *configs.Module
if args.Addr.Module.IsRoot() {
module = ctx.Config().Module
module = lr.Config.Module
} else {
// This is weird, but users can potentially specify non-existant module names
cfg := ctx.Config().Root.Descendent(args.Addr.Module.Module())
cfg := lr.Config.Root.Descendent(args.Addr.Module.Module())
if cfg != nil {
module = cfg.Module
}
}

if module == nil {
// It's fine if the module doesn't actually exist; we don't need to check if the resource exists.
} else {
if rs, ok := module.ManagedResources[args.Addr.ContainingResource().Config().String()]; ok {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Resource already in configuration",
Detail: fmt.Sprintf("The resource %s is already in this configuration at %s. Resource names must be unique per type in each module.", args.Addr, rs.DeclRange),
Subject: &rs.DeclRange,
})
c.View.Diagnostics(diags)
// Get the schemas from the context
schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
diags = diags.Append(moreDiags)
if moreDiags.HasErrors() {
view.Diagnostics(diags)
return 1
}
}

// Get the schemas from the context
schemas := ctx.Schemas()

// Determine the correct provider config address. The provider-related
// variables may get updated below

@@ -146,7 +182,6 @@ func (c *AddCommand) Run(rawArgs []string) int {
// If we are getting the values from state, get the AbsProviderConfig
// directly from state as well.
var resource *states.Resource
var moreDiags tfdiags.Diagnostics
if args.FromState {
resource, moreDiags = c.getResource(b, args.Addr.ContainingResource())
if moreDiags.HasErrors() {

@@ -248,11 +283,10 @@ func (c *AddCommand) Run(rawArgs []string) int {
}

diags = diags.Append(view.Resource(args.Addr, schema, localProviderConfig, stateVal))
if diags.HasErrors() {
c.View.Diagnostics(diags)
if diags.HasErrors() {
return 1
}

return 0
}

@@ -260,21 +294,27 @@ func (c *AddCommand) Help() string {
helpText := `
Usage: terraform [global options] add [options] ADDRESS

Generates a blank resource template. With no additional options,
the template will be displayed in the terminal.
Generates a blank resource template. With no additional options, Terraform
will write the result to standard output.

Options:

-from-state=true Fill the template with values from an existing resource.
Defaults to false.
-from-state Fill the template with values from an existing resource
instance tracked in the state. By default, Terraform will
emit only placeholder values based on the resource type.

-out=string Write the template to a file. If the file already
exists, the template will be appended to the file.
-out=string Write the template to a file, instead of to standard
output.

-optional=true Include optional attributes. Defaults to false.
-optional Include optional arguments. By default, the result will
include only required arguments.

-provider=provider Override the configured provider for the resource. Conflicts
with -from-state
-provider=provider Override the provider configuration for the resource,
using the absolute provider configuration address syntax.

This is incompatible with -from-state, because in that
case Terraform will use the provider configuration already
selected in the state.
`
return strings.TrimSpace(helpText)
}
@@ -3,6 +3,7 @@ package command
import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"

@@ -59,7 +60,14 @@ func TestAdd_basic(t *testing.T) {
fmt.Println(output.Stderr())
t.Fatalf("wrong exit status. Got %d, want 0", code)
}
expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
value = null # REQUIRED string
}
`

@@ -85,7 +93,67 @@ func TestAdd_basic(t *testing.T) {
fmt.Println(output.Stderr())
t.Fatalf("wrong exit status. Got %d, want 0", code)
}
expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
value = null # REQUIRED string
}
`
result, err := os.ReadFile(outPath)
if err != nil {
t.Fatalf("error reading result file %s: %s", outPath, err.Error())
}
// While the entire directory will get removed once the whole test suite
// is done, we remove this lest it gets in the way of another (not yet
// written) test.
os.Remove(outPath)

if !cmp.Equal(expected, string(result)) {
t.Fatalf("wrong output:\n%s", cmp.Diff(expected, string(result)))
}
})

t.Run("basic to existing file", func(t *testing.T) {
view, done := testView(t)
c := &AddCommand{
Meta: Meta{
testingOverrides: overrides,
View: view,
},
}
outPath := "add.tf"
args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new"}
c.Run(args)
args = []string{fmt.Sprintf("-out=%s", outPath), "test_instance.new2"}
code := c.Run(args)
output := done(t)
if code != 0 {
fmt.Println(output.Stderr())
t.Fatalf("wrong exit status. Got %d, want 0", code)
}
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
value = null # REQUIRED string
}
# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new2" {
value = null # REQUIRED string
}
`

@@ -117,7 +185,14 @@ func TestAdd_basic(t *testing.T) {
t.Fatalf("wrong exit status. Got %d, want 0", code)
}
output := done(t)
expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
ami = null # OPTIONAL string
id = null # OPTIONAL string
value = null # REQUIRED string

@@ -145,8 +220,16 @@ func TestAdd_basic(t *testing.T) {
}

// The provider happycorp/test has a localname "othertest" in the provider configuration.
expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
provider = othertest.alias

value = null # REQUIRED string
}
`

@@ -164,7 +247,8 @@ func TestAdd_basic(t *testing.T) {
View: view,
},
}
args := []string{"test_instance.exists"}
outPath := "add.tf"
args := []string{fmt.Sprintf("-out=%s", outPath), "test_instance.exists"}
code := c.Run(args)
if code != 1 {
t.Fatalf("wrong exit status. Got %d, want 0", code)

@@ -176,6 +260,38 @@ func TestAdd_basic(t *testing.T) {
}
})

t.Run("output existing resource to stdout", func(t *testing.T) {
view, done := testView(t)
c := &AddCommand{
Meta: Meta{
testingOverrides: overrides,
View: view,
},
}
args := []string{"test_instance.exists"}
code := c.Run(args)
output := done(t)
if code != 0 {
fmt.Println(output.Stderr())
t.Fatalf("wrong exit status. Got %d, want 0", code)
}
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "exists" {
value = null # REQUIRED string
}
`

if !cmp.Equal(output.Stdout(), expected) {
t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout()))
}
})

t.Run("provider not in configuration", func(t *testing.T) {
view, done := testView(t)
c := &AddCommand{

@@ -317,7 +433,14 @@ func TestAdd(t *testing.T) {
t.Fatalf("wrong exit status. Got %d, want 0", code)
}

expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
ami = null # OPTIONAL string
disks = [{ # OPTIONAL list of object
mount_point = null # REQUIRED string

@@ -354,7 +477,14 @@ func TestAdd(t *testing.T) {
t.Fatalf("wrong exit status. Got %d, want 0", code)
}

expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
value = null # REQUIRED string
network_interface { # REQUIRED block
}

@@ -382,7 +512,14 @@ func TestAdd(t *testing.T) {
t.Fatalf("wrong exit status. Got %d, want 0", code)
}

expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
id = null # REQUIRED string
}
`

@@ -410,7 +547,14 @@ func TestAdd(t *testing.T) {
t.Fatalf("wrong exit status. Got %d, want 0", code)
}

expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
id = null # REQUIRED string
}
`

@@ -511,7 +655,14 @@ func TestAdd_from_state(t *testing.T) {
t.Fatalf("wrong exit status. Got %d, want 0", code)
}

expected := `resource "test_instance" "new" {
expected := `# NOTE: The "terraform add" command is currently experimental and offers only a
# starting point for your resource configuration, with some limitations.
#
# The behavior of this command may change in future based on feedback, possibly
# in incompatible ways. We don't recommend building automation around this
# command at this time. If you have feedback about this command, please open
# a feature request issue in the Terraform GitHub repository.
resource "test_instance" "new" {
ami = "ami-123456"
disks = [
{

@@ -528,4 +679,7 @@ func TestAdd_from_state(t *testing.T) {
t.Fatalf("wrong output:\n%s", cmp.Diff(expected, output.Stdout()))
}

if _, err := os.Stat(filepath.Join(td, ".terraform.tfstate.lock.info")); !os.IsNotExist(err) {
t.Fatal("state left locked after add")
}
}
@@ -45,8 +45,7 @@ func (c *ApplyCommand) Run(rawArgs []string) int {

// Instantiate the view, even if there are flag errors, so that we render
// diagnostics according to the desired view
var view views.Apply
view = views.NewApply(args.ViewType, c.Destroy, c.View)
view := views.NewApply(args.ViewType, c.Destroy, c.View)

if diags.HasErrors() {
view.Diagnostics(diags)

@@ -710,7 +710,6 @@ func TestApply_plan(t *testing.T) {
}

func TestApply_plan_backup(t *testing.T) {
planPath := applyFixturePlanFile(t)
statePath := testTempFile(t)
backupPath := testTempFile(t)

@@ -724,11 +723,17 @@ func TestApply_plan_backup(t *testing.T) {
}

// create a state file that needs to be backed up
err := statemgr.NewFilesystem(statePath).WriteState(states.NewState())
fs := statemgr.NewFilesystem(statePath)
fs.StateSnapshotMeta()
err := fs.WriteState(states.NewState())
if err != nil {
t.Fatal(err)
}

// the plan file must contain the metadata from the prior state to be
// backed up
planPath := applyFixturePlanFileMatchState(t, fs.StateSnapshotMeta())

args := []string{
"-state", statePath,
"-backup", backupPath,

@@ -1779,6 +1784,7 @@ func TestApply_terraformEnvNonDefault(t *testing.T) {
},
}
if code := newCmd.Run([]string{"test"}); code != 0 {
t.Fatal("error creating workspace")
}
}

@@ -1792,6 +1798,7 @@ func TestApply_terraformEnvNonDefault(t *testing.T) {
},
}
if code := selCmd.Run(args); code != 0 {
t.Fatal("error switching workspace")
}
}

@@ -2278,6 +2285,13 @@ func applyFixtureProvider() *terraform.MockProvider {
// a single change to create the test_instance.foo that is included in the
// "apply" test fixture, returning the location of that plan file.
func applyFixturePlanFile(t *testing.T) string {
return applyFixturePlanFileMatchState(t, statemgr.SnapshotMeta{})
}

// applyFixturePlanFileMatchState creates a planfile like applyFixturePlanFile,
// but inserts the state meta information if that plan must match a preexisting
// state.
func applyFixturePlanFileMatchState(t *testing.T, stateMeta statemgr.SnapshotMeta) string {
_, snap := testModuleWithSnapshot(t, "apply")
plannedVal := cty.ObjectVal(map[string]cty.Value{
"id": cty.UnknownVal(cty.String),

@@ -2308,11 +2322,12 @@ func applyFixturePlanFile(t *testing.T) string {
After: plannedValRaw,
},
})
return testPlanFile(
return testPlanFileMatchState(
t,
snap,
states.NewState(),
plan,
stateMeta,
)
}
@@ -1,3 +1,4 @@
//go:build !windows
// +build !windows

package cliconfig

@@ -1,3 +1,4 @@
//go:build windows
// +build windows

package cliconfig

@@ -46,7 +46,7 @@ func (c *Config) CredentialsSource(helperPlugins pluginDiscovery.PluginMetaSet)
for givenType, givenConfig := range c.CredentialsHelpers {
available := helperPlugins.WithName(givenType)
if available.Count() == 0 {
log.Printf("[ERROR] Unable to find credentials helper %q; ignoring", helperType)
log.Printf("[ERROR] Unable to find credentials helper %q; ignoring", givenType)
break
}

@@ -1,3 +1,4 @@
//go:build !windows
// +build !windows

package clistate

@@ -1,3 +1,4 @@
//go:build windows
// +build windows

package clistate
@@ -24,10 +24,12 @@ import (
backendInit "github.com/hashicorp/terraform/internal/backend/init"
backendLocal "github.com/hashicorp/terraform/internal/backend/local"
"github.com/hashicorp/terraform/internal/command/views"
"github.com/hashicorp/terraform/internal/command/workdir"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/configs/configload"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/copy"
"github.com/hashicorp/terraform/internal/depsfile"
"github.com/hashicorp/terraform/internal/getproviders"
"github.com/hashicorp/terraform/internal/initwd"
legacy "github.com/hashicorp/terraform/internal/legacy/terraform"

@@ -108,6 +110,59 @@ func tempDir(t *testing.T) string {
return dir
}

// tempWorkingDir constructs a workdir.Dir object referring to a newly-created
// temporary directory, and returns that object along with a cleanup function
// to call once the calling test is complete.
//
// Although workdir.Dir is built to support arbitrary base directories, the
// not-yet-migrated behaviors in command.Meta tend to expect the root module
// directory to be the real process working directory, and so if you intend
// to use the result inside a command.Meta object you must use a pattern
// similar to the following when initializing your test:
//
// wd, cleanup := tempWorkingDir(t)
// defer cleanup()
// defer testChdir(t, wd.RootModuleDir())()
//
// Note that testChdir modifies global state for the test process, and so a
// test using this pattern must never call t.Parallel().
func tempWorkingDir(t *testing.T) (*workdir.Dir, func() error) {
t.Helper()

dirPath, err := os.MkdirTemp("", "tf-command-test-")
if err != nil {
t.Fatal(err)
}
done := func() error {
return os.RemoveAll(dirPath)
}
t.Logf("temporary directory %s", dirPath)

return workdir.NewDir(dirPath), done
}

// tempWorkingDirFixture is like tempWorkingDir but it also copies the content
// from a fixture directory into the temporary directory before returning it.
//
// The same caveats about working directory apply as for testWorkingDir. See
// the testWorkingDir commentary for an example of how to use this function
// along with testChdir to meet the expectations of command.Meta legacy
// functionality.
func tempWorkingDirFixture(t *testing.T, fixtureName string) *workdir.Dir {
t.Helper()

dirPath := testTempDir(t)
t.Logf("temporary directory %s with fixture %q", dirPath, fixtureName)

fixturePath := testFixturePath(fixtureName)
testCopyDir(t, fixturePath, dirPath)
// NOTE: Unfortunately because testCopyDir immediately aborts the test
// on failure, a failure to copy will prevent us from cleaning up the
// temporary directory. Oh well. :(

return workdir.NewDir(dirPath)
}

func testFixturePath(name string) string {
return filepath.Join(fixtureDir, name)
}

@@ -174,21 +229,33 @@ func testPlan(t *testing.T) *plans.Plan {
}

func testPlanFile(t *testing.T, configSnap *configload.Snapshot, state *states.State, plan *plans.Plan) string {
return testPlanFileMatchState(t, configSnap, state, plan, statemgr.SnapshotMeta{})
}

func testPlanFileMatchState(t *testing.T, configSnap *configload.Snapshot, state *states.State, plan *plans.Plan, stateMeta statemgr.SnapshotMeta) string {
t.Helper()

stateFile := &statefile.File{
Lineage: "",
Lineage: stateMeta.Lineage,
Serial: stateMeta.Serial,
State: state,
TerraformVersion: version.SemVer,
}
prevStateFile := &statefile.File{
Lineage: "",
Lineage: stateMeta.Lineage,
Serial: stateMeta.Serial,
State: state, // we just assume no changes detected during refresh
TerraformVersion: version.SemVer,
}

path := testTempFile(t)
err := planfile.Create(path, configSnap, prevStateFile, stateFile, plan)
err := planfile.Create(path, planfile.CreateArgs{
ConfigSnapshot: configSnap,
PreviousRunStateFile: prevStateFile,
StateFile: stateFile,
Plan: plan,
DependencyLocks: depsfile.NewLocks(),
})
if err != nil {
t.Fatalf("failed to create temporary plan file: %s", err)
}

@@ -490,13 +557,8 @@ func testTempFile(t *testing.T) string {

func testTempDir(t *testing.T) string {
t.Helper()

d, err := ioutil.TempDir(testingDir, "tf")
if err != nil {
t.Fatalf("err: %s", err)
}

d, err = filepath.EvalSymlinks(d)
d := t.TempDir()
d, err := filepath.EvalSymlinks(d)
if err != nil {
t.Fatal(err)
}

@@ -667,8 +729,15 @@ func testInputMap(t *testing.T, answers map[string]string) func() {

// Return the cleanup
return func() {
var unusedAnswers = testInputResponseMap

// First, clean up!
test = true
testInputResponseMap = nil

if len(unusedAnswers) > 0 {
t.Fatalf("expected no unused answers provided to command.testInputMap, got: %v", unusedAnswers)
}
}
}

@@ -853,8 +922,10 @@ func testLockState(sourceDir, path string) (func(), error) {
}

// testCopyDir recursively copies a directory tree, attempting to preserve
// permissions. Source directory must exist, destination directory must *not*
// exist. Symlinks are ignored and skipped.
// permissions. Source directory must exist, destination directory may exist
// but will be created if not; it should typically be a temporary directory,
// and thus already created using os.MkdirTemp or similar.
// Symlinks are ignored and skipped.
func testCopyDir(t *testing.T, src, dst string) {
t.Helper()

@@ -873,9 +944,6 @@ func testCopyDir(t *testing.T, src, dst string) {
if err != nil && !os.IsNotExist(err) {
t.Fatal(err)
}
if err == nil {
t.Fatal("destination already exists")
}

err = os.MkdirAll(dst, si.Mode())
if err != nil {
@@ -9,6 +9,7 @@ import (
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/helper/wrappedstreams"
"github.com/hashicorp/terraform/internal/repl"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"

"github.com/mitchellh/cli"

@@ -95,7 +96,7 @@ func (c *ConsoleCommand) Run(args []string) int {
}

// Get the context
ctx, _, ctxDiags := local.Context(opReq)
lr, _, ctxDiags := local.LocalRun(opReq)
diags = diags.Append(ctxDiags)
if ctxDiags.HasErrors() {
c.showDiagnostics(diags)

@@ -116,10 +117,18 @@ func (c *ConsoleCommand) Run(args []string) int {
ErrorWriter: wrappedstreams.Stderr(),
}

evalOpts := &terraform.EvalOpts{}
if lr.PlanOpts != nil {
// the LocalRun type is built primarily to support the main operations,
// so the variable values end up in the "PlanOpts" even though we're
// not actually making a plan.
evalOpts.SetVariables = lr.PlanOpts.SetVariables
}

// Before we can evaluate expressions, we must compute and populate any
// derived values (input variables, local values, output values)
// that are not stored in the persistent state.
scope, scopeDiags := ctx.Eval(addrs.RootModuleInstance)
scope, scopeDiags := lr.Core.Eval(lr.Config, lr.InputState, addrs.RootModuleInstance, evalOpts)
diags = diags.Append(scopeDiags)
if scope == nil {
// scope is nil if there are errors so bad that we can't even build a scope.

@@ -1,3 +1,4 @@
//go:build !solaris
// +build !solaris

// The readline library we use doesn't currently support solaris so

@@ -1,3 +1,4 @@
//go:build solaris
// +build solaris

package command
@@ -0,0 +1,221 @@
package e2etest

import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"

"github.com/hashicorp/terraform/internal/e2e"
"github.com/hashicorp/terraform/internal/getproviders"
)

// TestProviderTampering tests various ways that the provider plugins in the
// local cache directory might be modified after an initial "terraform init",
// which other Terraform commands which use those plugins should catch and
// report early.
func TestProviderTampering(t *testing.T) {
// General setup: we'll do a one-off init of a test directory as our
// starting point, and then we'll clone that result for each test so
// that we can save the cost of a repeated re-init with the same
// provider.
t.Parallel()

// This test reaches out to releases.hashicorp.com to download the
// null provider, so it can only run if network access is allowed.
skipIfCannotAccessNetwork(t)

fixturePath := filepath.Join("testdata", "provider-tampering-base")
tf := e2e.NewBinary(terraformBin, fixturePath)
defer tf.Close()

stdout, stderr, err := tf.Run("init")
if err != nil {
t.Fatalf("unexpected init error: %s\nstderr:\n%s", err, stderr)
}
if !strings.Contains(stdout, "Installing hashicorp/null v") {
t.Errorf("null provider download message is missing from init output:\n%s", stdout)
t.Logf("(this can happen if you have a copy of the plugin in one of the global plugin search dirs)")
}

seedDir := tf.WorkDir()
const providerVersion = "3.1.0" // must match the version in the fixture config
pluginDir := ".terraform/providers/registry.terraform.io/hashicorp/null/" + providerVersion + "/" + getproviders.CurrentPlatform.String()
pluginExe := pluginDir + "/terraform-provider-null_v" + providerVersion + "_x5"
if getproviders.CurrentPlatform.OS == "windows" {
pluginExe += ".exe" // ugh
}

t.Run("cache dir totally gone", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

err := os.RemoveAll(filepath.Join(workDir, ".terraform"))
if err != nil {
t.Fatal(err)
}

_, stderr, err := tf.Run("plan")
if err == nil {
t.Fatalf("unexpected plan success\nstdout:\n%s", stdout)
}
if want := `registry.terraform.io/hashicorp/null: there is no package for registry.terraform.io/hashicorp/null 3.1.0 cached in .terraform/providers`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
if want := `terraform init`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
t.Run("null plugin package modified before plan", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

err := ioutil.WriteFile(filepath.Join(workDir, pluginExe), []byte("tamper"), 0600)
if err != nil {
t.Fatal(err)
}

stdout, stderr, err := tf.Run("plan")
if err == nil {
t.Fatalf("unexpected plan success\nstdout:\n%s", stdout)
}
if want := `registry.terraform.io/hashicorp/null: the cached package for registry.terraform.io/hashicorp/null 3.1.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
if want := `terraform init`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
t.Run("version constraint changed in config before plan", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

err := ioutil.WriteFile(filepath.Join(workDir, "provider-tampering-base.tf"), []byte(`
terraform {
required_providers {
null = {
source = "hashicorp/null"
version = "1.0.0"
}
}
}
`), 0600)
if err != nil {
t.Fatal(err)
}

stdout, stderr, err := tf.Run("plan")
if err == nil {
t.Fatalf("unexpected plan success\nstdout:\n%s", stdout)
}
if want := `provider registry.terraform.io/hashicorp/null: locked version selection 3.1.0 doesn't match the updated version constraints "1.0.0"`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
if want := `terraform init -upgrade`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
t.Run("lock file modified before plan", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

// NOTE: We're just emptying out the lock file here because that's
// good enough for what we're trying to assert. The leaf codepath
// that generates this family of errors has some different variations
// of this error message for otehr sorts of inconsistency, but those
// are tested more thoroughly over in the "configs" package, which is
// ultimately responsible for that logic.
err := ioutil.WriteFile(filepath.Join(workDir, ".terraform.lock.hcl"), []byte(``), 0600)
if err != nil {
t.Fatal(err)
}

stdout, stderr, err := tf.Run("plan")
if err == nil {
t.Fatalf("unexpected plan success\nstdout:\n%s", stdout)
}
if want := `provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
if want := `terraform init`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
t.Run("lock file modified after plan", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

_, stderr, err := tf.Run("plan", "-out", "tfplan")
if err != nil {
t.Fatalf("unexpected plan failure\nstderr:\n%s", stderr)
}

err = os.Remove(filepath.Join(workDir, ".terraform.lock.hcl"))
if err != nil {
t.Fatal(err)
}

stdout, stderr, err := tf.Run("apply", "tfplan")
if err == nil {
t.Fatalf("unexpected apply success\nstdout:\n%s", stdout)
}
if want := `provider registry.terraform.io/hashicorp/null: required by this configuration but no version is selected`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
if want := `Create a new plan from the updated configuration.`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
t.Run("plugin cache dir entirely removed after plan", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

_, stderr, err := tf.Run("plan", "-out", "tfplan")
if err != nil {
t.Fatalf("unexpected plan failure\nstderr:\n%s", stderr)
}

err = os.RemoveAll(filepath.Join(workDir, ".terraform"))
if err != nil {
t.Fatal(err)
}

stdout, stderr, err := tf.Run("apply", "tfplan")
if err == nil {
t.Fatalf("unexpected apply success\nstdout:\n%s", stdout)
}
if want := `registry.terraform.io/hashicorp/null: there is no package for registry.terraform.io/hashicorp/null 3.1.0 cached in .terraform/providers`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
t.Run("null plugin package modified after plan", func(t *testing.T) {
tf := e2e.NewBinary(terraformBin, seedDir)
defer tf.Close()
workDir := tf.WorkDir()

_, stderr, err := tf.Run("plan", "-out", "tfplan")
if err != nil {
t.Fatalf("unexpected plan failure\nstderr:\n%s", stderr)
}

err = ioutil.WriteFile(filepath.Join(workDir, pluginExe), []byte("tamper"), 0600)
if err != nil {
t.Fatal(err)
}

stdout, stderr, err := tf.Run("apply", "tfplan")
if err == nil {
t.Fatalf("unexpected apply success\nstdout:\n%s", stdout)
}
if want := `registry.terraform.io/hashicorp/null: the cached package for registry.terraform.io/hashicorp/null 3.1.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file`; !strings.Contains(stderr, want) {
t.Errorf("missing expected error message\nwant substring: %s\ngot:\n%s", want, stderr)
}
})
}
internal/command/e2etest/testdata/provider-tampering-base/provider-tampering-base.tf (new file, 12 lines, vendored)

@@ -0,0 +1,12 @@
terraform {
required_providers {
null = {
# Our version is intentionally fixed so that we have a fixed
# test case here, though we might have to update this in future
# if e.g. Terraform stops supporting plugin protocol 5, or if
# the null provider is yanked from the registry for some reason.
source = "hashicorp/null"
version = "3.1.0"
}
}
}
@ -20,6 +20,21 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/states"
|
||||
)
|
||||
|
||||
// DiffLanguage controls the description of the resource change reasons.
|
||||
type DiffLanguage rune
|
||||
|
||||
//go:generate go run golang.org/x/tools/cmd/stringer -type=DiffLanguage diff.go
|
||||
|
||||
const (
|
||||
// DiffLanguageProposedChange indicates that the change is one which is
|
||||
// planned to be applied.
|
||||
DiffLanguageProposedChange DiffLanguage = 'P'
|
||||
|
||||
// DiffLanguageDetectedDrift indicates that the change is detected drift
|
||||
// from the configuration.
|
||||
DiffLanguageDetectedDrift DiffLanguage = 'D'
|
||||
)
|
||||
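// Illustration (not part of this change): callers pick the wording by passing
// one of the constants above into ResourceChange, whose new signature in this
// diff takes the language as its final argument. A hypothetical caller that is
// rendering detected drift rather than a proposed change would do:
//
//	lang := DiffLanguageDetectedDrift
//	out := ResourceChange(changeSrc, schema, colorize, lang)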
|
||||
// ResourceChange returns a string representation of a change to a particular
|
||||
// resource, for inclusion in user-facing plan output.
|
||||
//
|
||||
|
@ -33,6 +48,7 @@ func ResourceChange(
|
|||
change *plans.ResourceInstanceChangeSrc,
|
||||
schema *configschema.Block,
|
||||
color *colorstring.Colorize,
|
||||
language DiffLanguage,
|
||||
) string {
|
||||
addr := change.Addr
|
||||
var buf bytes.Buffer
|
||||
|
@ -52,33 +68,86 @@ func ResourceChange(
|
|||
|
||||
switch change.Action {
|
||||
case plans.Create:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be created", dispAddr)))
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be created"), dispAddr))
|
||||
case plans.Read:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be read during apply\n # (config refers to values not yet known)", dispAddr)))
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be read during apply\n # (config refers to values not yet known)"), dispAddr))
|
||||
case plans.Update:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be updated in-place", dispAddr)))
|
||||
switch language {
|
||||
case DiffLanguageProposedChange:
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be updated in-place"), dispAddr))
|
||||
case DiffLanguageDetectedDrift:
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] has changed"), dispAddr))
|
||||
default:
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] update (unknown reason %s)"), dispAddr, language))
|
||||
}
|
||||
case plans.CreateThenDelete, plans.DeleteThenCreate:
|
||||
switch change.ActionReason {
|
||||
case plans.ResourceInstanceReplaceBecauseTainted:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] is tainted, so must be [bold][red]replaced", dispAddr)))
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] is tainted, so must be [bold][red]replaced"), dispAddr))
|
||||
case plans.ResourceInstanceReplaceByRequest:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]replaced[reset], as requested", dispAddr)))
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be [bold][red]replaced[reset], as requested"), dispAddr))
|
||||
default:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] must be [bold][red]replaced", dispAddr)))
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] must be [bold][red]replaced"), dispAddr))
|
||||
}
|
||||
case plans.Delete:
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] will be [bold][red]destroyed", dispAddr)))
|
||||
switch language {
|
||||
case DiffLanguageProposedChange:
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] will be [bold][red]destroyed"), dispAddr))
|
||||
case DiffLanguageDetectedDrift:
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] has been deleted"), dispAddr))
|
||||
default:
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] delete (unknown reason %s)"), dispAddr, language))
|
||||
}
|
||||
// We can sometimes give some additional detail about why we're
|
||||
// proposing to delete. We show this as additional notes, rather than
|
||||
// as additional wording in the main action statement, in an attempt
|
||||
// to make the "will be destroyed" message prominent and consistent
|
||||
// in all cases, for easier scanning of this often-risky action.
|
||||
switch change.ActionReason {
|
||||
case plans.ResourceInstanceDeleteBecauseNoResourceConfig:
|
||||
buf.WriteString(fmt.Sprintf("\n # (because %s is not in configuration)", addr.Resource.Resource))
|
||||
case plans.ResourceInstanceDeleteBecauseNoModule:
|
||||
buf.WriteString(fmt.Sprintf("\n # (because %s is not in configuration)", addr.Module))
|
||||
case plans.ResourceInstanceDeleteBecauseWrongRepetition:
|
||||
// We have some different variations of this one
|
||||
switch addr.Resource.Key.(type) {
|
||||
case nil:
|
||||
buf.WriteString("\n # (because resource uses count or for_each)")
|
||||
case addrs.IntKey:
|
||||
buf.WriteString("\n # (because resource does not use count)")
|
||||
case addrs.StringKey:
|
||||
buf.WriteString("\n # (because resource does not use for_each)")
|
||||
}
|
||||
case plans.ResourceInstanceDeleteBecauseCountIndex:
|
||||
buf.WriteString(fmt.Sprintf("\n # (because index %s is out of range for count)", addr.Resource.Key))
|
||||
case plans.ResourceInstanceDeleteBecauseEachKey:
|
||||
buf.WriteString(fmt.Sprintf("\n # (because key %s is not in for_each map)", addr.Resource.Key))
|
||||
}
|
||||
if change.DeposedKey != states.NotDeposed {
|
||||
// Some extra context about this unusual situation.
|
||||
buf.WriteString(color.Color(fmt.Sprint("\n # (left over from a partially-failed replacement of this instance)")))
|
||||
buf.WriteString(color.Color("\n # (left over from a partially-failed replacement of this instance)"))
|
||||
}
|
||||
case plans.NoOp:
|
||||
if change.Moved() {
|
||||
buf.WriteString(fmt.Sprintf(color.Color("[bold] # %s[reset] has moved to [bold]%s[reset]"), change.PrevRunAddr.String(), dispAddr))
|
||||
break
|
||||
}
|
||||
fallthrough
|
||||
default:
|
||||
// should never happen, since the above is exhaustive
|
||||
buf.WriteString(fmt.Sprintf("%s has an action the plan renderer doesn't support (this is a bug)", dispAddr))
|
||||
}
|
||||
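// A note on the repeated rewrite above from color.Color(fmt.Sprintf(f, v)) to
// fmt.Sprintf(color.Color(f), v) (an interpretation of the change, not stated
// in the diff): colorstring rewrites markers such as [bold] and [reset]
// anywhere in its input, so colorizing only the constant format string and
// then interpolating keeps any such sequences that happen to appear in
// interpolated values, like resource addresses, from being treated as color
// codes.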
buf.WriteString(color.Color("[reset]\n"))
|
||||
|
||||
if change.Moved() && change.Action != plans.NoOp {
|
||||
buf.WriteString(fmt.Sprintf(color.Color(" # [reset](moved from %s)\n"), change.PrevRunAddr.String()))
|
||||
}
|
||||
|
||||
if change.Moved() && change.Action == plans.NoOp {
|
||||
buf.WriteString(" ")
|
||||
} else {
|
||||
buf.WriteString(color.Color(DiffActionSymbol(change.Action)) + " ")
|
||||
}
|
||||
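// For orientation (taken from the expected outputs in the tests added later in
// this diff): a pure move with no other change renders as
//
//	# test_instance.previous has moved to test_instance.example
//
// while a move combined with another action keeps the usual action headline
// and adds the annotation
//
//	# (moved from test_instance.previous)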
|
||||
switch addr.Resource.Resource.Mode {
|
||||
case addrs.ManagedResourceMode:
|
||||
|
@ -140,147 +209,6 @@ func ResourceChange(
|
|||
return buf.String()
|
||||
}
|
||||
|
||||
// ResourceInstanceDrift returns a string representation of a change to a
|
||||
// particular resource instance that was made outside of Terraform, for
|
||||
// reporting a change that has already happened rather than one that is planned.
|
||||
//
|
||||
// If the two resource instances have equal current objects then the result
|
||||
// will be an empty string to indicate that there is no drift to render.
|
||||
//
|
||||
// The resource schema must be provided along with the change so that the
|
||||
// formatted change can reflect the configuration structure for the associated
|
||||
// resource.
|
||||
//
|
||||
// If "color" is non-nil, it will be used to color the result. Otherwise,
|
||||
// no color codes will be included.
|
||||
func ResourceInstanceDrift(
|
||||
addr addrs.AbsResourceInstance,
|
||||
before, after *states.ResourceInstance,
|
||||
schema *configschema.Block,
|
||||
color *colorstring.Colorize,
|
||||
) string {
|
||||
var buf bytes.Buffer
|
||||
|
||||
if color == nil {
|
||||
color = &colorstring.Colorize{
|
||||
Colors: colorstring.DefaultColors,
|
||||
Disable: true,
|
||||
Reset: false,
|
||||
}
|
||||
}
|
||||
|
||||
dispAddr := addr.String()
|
||||
action := plans.Update
|
||||
|
||||
switch {
|
||||
case before == nil || before.Current == nil:
|
||||
// before should never be nil, but before.Current can be if the
|
||||
// instance was deposed. There is nothing to render for a deposed
|
||||
// instance, since we intend to remove it.
|
||||
return ""
|
||||
|
||||
case after == nil || after.Current == nil:
|
||||
// The object was deleted
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has been deleted", dispAddr)))
|
||||
action = plans.Delete
|
||||
default:
|
||||
// The object was changed
|
||||
buf.WriteString(color.Color(fmt.Sprintf("[bold] # %s[reset] has been changed", dispAddr)))
|
||||
}
|
||||
|
||||
buf.WriteString(color.Color("[reset]\n"))
|
||||
|
||||
buf.WriteString(color.Color(DiffActionSymbol(action)) + " ")
|
||||
|
||||
switch addr.Resource.Resource.Mode {
|
||||
case addrs.ManagedResourceMode:
|
||||
buf.WriteString(fmt.Sprintf(
|
||||
"resource %q %q",
|
||||
addr.Resource.Resource.Type,
|
||||
addr.Resource.Resource.Name,
|
||||
))
|
||||
case addrs.DataResourceMode:
|
||||
buf.WriteString(fmt.Sprintf(
|
||||
"data %q %q ",
|
||||
addr.Resource.Resource.Type,
|
||||
addr.Resource.Resource.Name,
|
||||
))
|
||||
default:
|
||||
// should never happen, since the above is exhaustive
|
||||
buf.WriteString(addr.String())
|
||||
}
|
||||
|
||||
buf.WriteString(" {")
|
||||
|
||||
p := blockBodyDiffPrinter{
|
||||
buf: &buf,
|
||||
color: color,
|
||||
action: action,
|
||||
}
|
||||
|
||||
// Most commonly-used resources have nested blocks that result in us
|
||||
// going at least three traversals deep while we recurse here, so we'll
|
||||
// start with that much capacity and then grow as needed for deeper
|
||||
// structures.
|
||||
path := make(cty.Path, 0, 3)
|
||||
|
||||
ty := schema.ImpliedType()
|
||||
|
||||
var err error
|
||||
var oldObj, newObj *states.ResourceInstanceObject
|
||||
oldObj, err = before.Current.Decode(ty)
|
||||
if err != nil {
|
||||
// We shouldn't encounter errors here because Terraform Core should've
|
||||
// made sure that the previous run object conforms to the current
|
||||
// schema by having the provider upgrade it, but we'll be robust here
|
||||
// in case there are some edges we didn't find yet.
|
||||
return fmt.Sprintf(" # %s previous run state doesn't conform to current schema; this is a Terraform bug\n # %s\n", addr, err)
|
||||
}
|
||||
if after != nil && after.Current != nil {
|
||||
newObj, err = after.Current.Decode(ty)
|
||||
if err != nil {
|
||||
// We shouldn't encounter errors here because Terraform Core should've
|
||||
// made sure that the prior state object conforms to the current
|
||||
// schema by having the provider upgrade it, even if we skipped
|
||||
// refreshing on this run, but we'll be robust here in case there are
|
||||
// some edges we didn't find yet.
|
||||
return fmt.Sprintf(" # %s refreshed state doesn't conform to current schema; this is a Terraform bug\n # %s\n", addr, err)
|
||||
}
|
||||
}
|
||||
|
||||
oldVal := oldObj.Value
|
||||
var newVal cty.Value
|
||||
if newObj != nil {
|
||||
newVal = newObj.Value
|
||||
} else {
|
||||
newVal = cty.NullVal(ty)
|
||||
}
|
||||
|
||||
if newVal.RawEquals(oldVal) {
|
||||
// Nothing to show, then.
|
||||
return ""
|
||||
}
|
||||
|
||||
// We currently have an opt-out that permits the legacy SDK to return values
|
||||
// that defy our usual conventions around handling of nesting blocks. To
|
||||
// keep the rendering code from needing to handle all of these, we'll
|
||||
// normalize first.
|
||||
// (Ideally we'd do this as part of the SDK opt-out implementation in core,
|
||||
// but we've added it here for now to reduce risk of unexpected impacts
|
||||
// on other code in core.)
|
||||
oldVal = objchange.NormalizeObjectFromLegacySDK(oldVal, schema)
|
||||
newVal = objchange.NormalizeObjectFromLegacySDK(newVal, schema)
|
||||
|
||||
result := p.writeBlockBodyDiff(schema, oldVal, newVal, 6, path)
|
||||
if result.bodyWritten {
|
||||
buf.WriteString("\n")
|
||||
buf.WriteString(strings.Repeat(" ", 4))
|
||||
}
|
||||
buf.WriteString("}\n")
|
||||
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
// OutputChanges returns a string representation of a set of changes to output
|
||||
// values for inclusion in user-facing plan output.
|
||||
//
|
||||
|
@ -387,7 +315,7 @@ func (p *blockBodyDiffPrinter) writeBlockBodyDiff(schema *configschema.Block, ol
|
|||
}
|
||||
p.buf.WriteString("\n")
|
||||
p.buf.WriteString(strings.Repeat(" ", indent+2))
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", result.skippedBlocks, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), result.skippedBlocks, noun))
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -606,14 +534,24 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff(
|
|||
p.buf.WriteString(strings.Repeat(" ", indent+2))
|
||||
p.buf.WriteString("]")
|
||||
|
||||
if !new.IsKnown() {
|
||||
p.buf.WriteString(" -> (known after apply)")
|
||||
}
|
||||
|
||||
case configschema.NestingSet:
|
||||
oldItems := ctyCollectionValues(old)
|
||||
newItems := ctyCollectionValues(new)
|
||||
|
||||
var all cty.Value
|
||||
if len(oldItems)+len(newItems) > 0 {
|
||||
allItems := make([]cty.Value, 0, len(oldItems)+len(newItems))
|
||||
allItems = append(allItems, oldItems...)
|
||||
allItems = append(allItems, newItems...)
|
||||
all := cty.SetVal(allItems)
|
||||
|
||||
all = cty.SetVal(allItems)
|
||||
} else {
|
||||
all = cty.SetValEmpty(old.Type().ElementType())
|
||||
}
|
||||
|
||||
p.buf.WriteString(" = [")
|
||||
|
||||
|
@ -625,11 +563,18 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff(
|
|||
case !val.IsKnown():
|
||||
action = plans.Update
|
||||
newValue = val
|
||||
case !old.HasElement(val).True():
|
||||
case !new.IsKnown():
|
||||
action = plans.Delete
|
||||
// the value must have come from the old set
|
||||
oldValue = val
|
||||
// Mark the new val as null, but the entire set will be
|
||||
// displayed as "(known after apply)"
|
||||
newValue = cty.NullVal(val.Type())
|
||||
case old.IsNull() || !old.HasElement(val).True():
|
||||
action = plans.Create
|
||||
oldValue = cty.NullVal(val.Type())
|
||||
newValue = val
|
||||
case !new.HasElement(val).True():
|
||||
case new.IsNull() || !new.HasElement(val).True():
|
||||
action = plans.Delete
|
||||
oldValue = val
|
||||
newValue = cty.NullVal(val.Type())
|
||||
|
@ -659,6 +604,10 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff(
|
|||
p.buf.WriteString(strings.Repeat(" ", indent+2))
|
||||
p.buf.WriteString("]")
|
||||
|
||||
if !new.IsKnown() {
|
||||
p.buf.WriteString(" -> (known after apply)")
|
||||
}
|
||||
|
||||
case configschema.NestingMap:
|
||||
// For the sake of handling nested blocks, we'll treat a null map
|
||||
// the same as an empty map since the config language doesn't
|
||||
|
@ -667,7 +616,12 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff(
|
|||
new = ctyNullBlockMapAsEmpty(new)
|
||||
|
||||
oldItems := old.AsValueMap()
|
||||
newItems := new.AsValueMap()
|
||||
|
||||
newItems := map[string]cty.Value{}
|
||||
|
||||
if new.IsKnown() {
|
||||
newItems = new.AsValueMap()
|
||||
}
|
||||
|
||||
allKeys := make(map[string]bool)
|
||||
for k := range oldItems {
|
||||
|
@ -689,6 +643,7 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff(
|
|||
for _, k := range allKeysOrder {
|
||||
var action plans.Action
|
||||
oldValue := oldItems[k]
|
||||
|
||||
newValue := newItems[k]
|
||||
switch {
|
||||
case oldValue == cty.NilVal:
|
||||
|
@ -724,9 +679,10 @@ func (p *blockBodyDiffPrinter) writeNestedAttrDiff(
|
|||
p.writeSkippedElems(unchanged, indent+4)
|
||||
p.buf.WriteString(strings.Repeat(" ", indent+2))
|
||||
p.buf.WriteString("}")
|
||||
if !new.IsKnown() {
|
||||
p.buf.WriteString(" -> (known after apply)")
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
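// A minimal sketch of the go-cty behaviour that the unknown-collection
// handling added above relies on (an assumption drawn from the calls used in
// this file; the snippet itself is not part of the diff). An unknown set or
// map reports IsKnown() == false, and element lookups against it are
// themselves unknown rather than true or false, which is why the printer
// branches on !new.IsKnown() before inspecting elements and then suffixes the
// whole collection with "(known after apply)":
//
//	unknown := cty.UnknownVal(cty.Set(cty.String))
//	unknown.IsKnown()                                // false
//	unknown.HasElement(cty.StringVal("a")).IsKnown() // false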
|
||||
func (p *blockBodyDiffPrinter) writeNestedBlockDiffs(name string, blockS *configschema.NestedBlock, old, new cty.Value, blankBefore bool, indent int, path cty.Path) int {
|
||||
|
@ -1379,7 +1335,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa
|
|||
if suppressedElements == 1 {
|
||||
noun = "element"
|
||||
}
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", suppressedElements, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), suppressedElements, noun))
|
||||
p.buf.WriteString("\n")
|
||||
}
|
||||
|
||||
|
@ -1440,7 +1396,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa
|
|||
if hidden == 1 {
|
||||
noun = "element"
|
||||
}
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", hidden, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), hidden, noun))
|
||||
p.buf.WriteString("\n")
|
||||
}
|
||||
|
||||
|
@ -1582,7 +1538,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa
|
|||
if suppressedElements == 1 {
|
||||
noun = "element"
|
||||
}
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", suppressedElements, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), suppressedElements, noun))
|
||||
p.buf.WriteString("\n")
|
||||
}
|
||||
|
||||
|
@ -1674,7 +1630,7 @@ func (p *blockBodyDiffPrinter) writeValueDiff(old, new cty.Value, indent int, pa
|
|||
if suppressedElements == 1 {
|
||||
noun = "element"
|
||||
}
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", suppressedElements, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), suppressedElements, noun))
|
||||
p.buf.WriteString("\n")
|
||||
}
|
||||
|
||||
|
@ -1747,7 +1703,7 @@ func (p *blockBodyDiffPrinter) writeSensitivityWarning(old, new cty.Value, inden
|
|||
|
||||
if new.HasMark(marks.Sensitive) && !old.HasMark(marks.Sensitive) {
|
||||
p.buf.WriteString(strings.Repeat(" ", indent))
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("# [yellow]Warning:[reset] this %s will be marked as sensitive and will not\n", diffType)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("# [yellow]Warning:[reset] this %s will be marked as sensitive and will not\n"), diffType))
|
||||
p.buf.WriteString(strings.Repeat(" ", indent))
|
||||
p.buf.WriteString(fmt.Sprintf("# display in UI output after applying this change.%s\n", valueUnchangedSuffix))
|
||||
}
|
||||
|
@ -1755,7 +1711,7 @@ func (p *blockBodyDiffPrinter) writeSensitivityWarning(old, new cty.Value, inden
|
|||
// Note if changing this attribute will change its sensitivity
|
||||
if old.HasMark(marks.Sensitive) && !new.HasMark(marks.Sensitive) {
|
||||
p.buf.WriteString(strings.Repeat(" ", indent))
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("# [yellow]Warning:[reset] this %s will no longer be marked as sensitive\n", diffType)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("# [yellow]Warning:[reset] this %s will no longer be marked as sensitive\n"), diffType))
|
||||
p.buf.WriteString(strings.Repeat(" ", indent))
|
||||
p.buf.WriteString(fmt.Sprintf("# after applying this change.%s\n", valueUnchangedSuffix))
|
||||
}
|
||||
|
@ -2017,7 +1973,7 @@ func (p *blockBodyDiffPrinter) writeSkippedAttr(skipped, indent int) {
|
|||
}
|
||||
p.buf.WriteString("\n")
|
||||
p.buf.WriteString(strings.Repeat(" ", indent))
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", skipped, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), skipped, noun))
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -2028,7 +1984,7 @@ func (p *blockBodyDiffPrinter) writeSkippedElems(skipped, indent int) {
|
|||
noun = "element"
|
||||
}
|
||||
p.buf.WriteString(strings.Repeat(" ", indent))
|
||||
p.buf.WriteString(p.color.Color(fmt.Sprintf("[dark_gray]# (%d unchanged %s hidden)[reset]", skipped, noun)))
|
||||
p.buf.WriteString(fmt.Sprintf(p.color.Color("[dark_gray]# (%d unchanged %s hidden)[reset]"), skipped, noun))
|
||||
p.buf.WriteString("\n")
|
||||
}
|
||||
}
|
||||
|
|
|
@ -2633,6 +2633,56 @@ func TestResourceChange_nestedList(t *testing.T) {
|
|||
~ attr = "y" -> "z"
|
||||
}
|
||||
}
|
||||
`,
|
||||
},
|
||||
"in-place update - unknown": {
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-BEFORE"),
|
||||
"disks": cty.ListVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"mount_point": cty.StringVal("/var/diska"),
|
||||
"size": cty.StringVal("50GB"),
|
||||
}),
|
||||
}),
|
||||
"root_block_device": cty.ListVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-AFTER"),
|
||||
"disks": cty.UnknownVal(cty.List(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
}))),
|
||||
"root_block_device": cty.ListVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
Schema: testSchemaPlus(configschema.NestingList),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
~ resource "test_instance" "example" {
|
||||
~ ami = "ami-BEFORE" -> "ami-AFTER"
|
||||
~ disks = [
|
||||
~ {
|
||||
- mount_point = "/var/diska" -> null
|
||||
- size = "50GB" -> null
|
||||
},
|
||||
] -> (known after apply)
|
||||
id = "i-02ae66f368e8518a9"
|
||||
|
||||
# (1 unchanged block hidden)
|
||||
}
|
||||
`,
|
||||
},
|
||||
}
|
||||
|
@ -2861,6 +2911,148 @@ func TestResourceChange_nestedSet(t *testing.T) {
|
|||
- volume_type = "gp2" -> null
|
||||
}
|
||||
}
|
||||
`,
|
||||
},
|
||||
"in-place update - empty nested sets": {
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-BEFORE"),
|
||||
"disks": cty.NullVal(cty.Set(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
}))),
|
||||
"root_block_device": cty.SetValEmpty(cty.Object(map[string]cty.Type{
|
||||
"volume_type": cty.String,
|
||||
})),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-AFTER"),
|
||||
"disks": cty.SetValEmpty(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
})),
|
||||
"root_block_device": cty.SetValEmpty(cty.Object(map[string]cty.Type{
|
||||
"volume_type": cty.String,
|
||||
})),
|
||||
}),
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
Schema: testSchema(configschema.NestingSet),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
~ resource "test_instance" "example" {
|
||||
~ ami = "ami-BEFORE" -> "ami-AFTER"
|
||||
+ disks = [
|
||||
]
|
||||
id = "i-02ae66f368e8518a9"
|
||||
}
|
||||
`,
|
||||
},
|
||||
"in-place update - null insertion": {
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-BEFORE"),
|
||||
"disks": cty.NullVal(cty.Set(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
}))),
|
||||
"root_block_device": cty.SetVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.NullVal(cty.String),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-AFTER"),
|
||||
"disks": cty.SetVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"mount_point": cty.StringVal("/var/diska"),
|
||||
"size": cty.StringVal("50GB"),
|
||||
}),
|
||||
}),
|
||||
"root_block_device": cty.SetVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
Schema: testSchemaPlus(configschema.NestingSet),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
~ resource "test_instance" "example" {
|
||||
~ ami = "ami-BEFORE" -> "ami-AFTER"
|
||||
+ disks = [
|
||||
+ {
|
||||
+ mount_point = "/var/diska"
|
||||
+ size = "50GB"
|
||||
},
|
||||
]
|
||||
id = "i-02ae66f368e8518a9"
|
||||
|
||||
+ root_block_device {
|
||||
+ new_field = "new_value"
|
||||
+ volume_type = "gp2"
|
||||
}
|
||||
- root_block_device {
|
||||
- volume_type = "gp2" -> null
|
||||
}
|
||||
}
|
||||
`,
|
||||
},
|
||||
"in-place update - unknown": {
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-BEFORE"),
|
||||
"disks": cty.SetVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"mount_point": cty.StringVal("/var/diska"),
|
||||
"size": cty.StringVal("50GB"),
|
||||
}),
|
||||
}),
|
||||
"root_block_device": cty.SetVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-AFTER"),
|
||||
"disks": cty.UnknownVal(cty.Set(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
}))),
|
||||
"root_block_device": cty.SetVal([]cty.Value{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
Schema: testSchemaPlus(configschema.NestingSet),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
~ resource "test_instance" "example" {
|
||||
~ ami = "ami-BEFORE" -> "ami-AFTER"
|
||||
~ disks = [
|
||||
- {
|
||||
- mount_point = "/var/diska" -> null
|
||||
- size = "50GB" -> null
|
||||
},
|
||||
] -> (known after apply)
|
||||
id = "i-02ae66f368e8518a9"
|
||||
|
||||
# (1 unchanged block hidden)
|
||||
}
|
||||
`,
|
||||
},
|
||||
}
|
||||
|
@ -3199,7 +3391,339 @@ func TestResourceChange_nestedMap(t *testing.T) {
|
|||
}
|
||||
`,
|
||||
},
|
||||
"in-place update - unknown": {
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-BEFORE"),
|
||||
"disks": cty.MapVal(map[string]cty.Value{
|
||||
"disk_a": cty.ObjectVal(map[string]cty.Value{
|
||||
"mount_point": cty.StringVal("/var/diska"),
|
||||
"size": cty.StringVal("50GB"),
|
||||
}),
|
||||
}),
|
||||
"root_block_device": cty.MapVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-AFTER"),
|
||||
"disks": cty.UnknownVal(cty.Map(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
}))),
|
||||
"root_block_device": cty.MapVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
Schema: testSchemaPlus(configschema.NestingMap),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
~ resource "test_instance" "example" {
|
||||
~ ami = "ami-BEFORE" -> "ami-AFTER"
|
||||
~ disks = {
|
||||
- "disk_a" = {
|
||||
- mount_point = "/var/diska" -> null
|
||||
- size = "50GB" -> null
|
||||
},
|
||||
} -> (known after apply)
|
||||
id = "i-02ae66f368e8518a9"
|
||||
|
||||
# (1 unchanged block hidden)
|
||||
}
|
||||
`,
|
||||
},
|
||||
"in-place update - insertion sensitive": {
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-BEFORE"),
|
||||
"disks": cty.MapValEmpty(cty.Object(map[string]cty.Type{
|
||||
"mount_point": cty.String,
|
||||
"size": cty.String,
|
||||
})),
|
||||
"root_block_device": cty.MapVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-02ae66f368e8518a9"),
|
||||
"ami": cty.StringVal("ami-AFTER"),
|
||||
"disks": cty.MapVal(map[string]cty.Value{
|
||||
"disk_a": cty.ObjectVal(map[string]cty.Value{
|
||||
"mount_point": cty.StringVal("/var/diska"),
|
||||
"size": cty.StringVal("50GB"),
|
||||
}),
|
||||
}),
|
||||
"root_block_device": cty.MapVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"volume_type": cty.StringVal("gp2"),
|
||||
"new_field": cty.StringVal("new_value"),
|
||||
}),
|
||||
}),
|
||||
}),
|
||||
AfterValMarks: []cty.PathValueMarks{
|
||||
{
|
||||
Path: cty.Path{cty.GetAttrStep{Name: "disks"},
|
||||
cty.IndexStep{Key: cty.StringVal("disk_a")},
|
||||
cty.GetAttrStep{Name: "mount_point"},
|
||||
},
|
||||
Marks: cty.NewValueMarks(marks.Sensitive),
|
||||
},
|
||||
},
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
Schema: testSchemaPlus(configschema.NestingMap),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
~ resource "test_instance" "example" {
|
||||
~ ami = "ami-BEFORE" -> "ami-AFTER"
|
||||
~ disks = {
|
||||
+ "disk_a" = {
|
||||
+ mount_point = (sensitive)
|
||||
+ size = "50GB"
|
||||
},
|
||||
}
|
||||
id = "i-02ae66f368e8518a9"
|
||||
|
||||
# (1 unchanged block hidden)
|
||||
}
|
||||
`,
|
||||
},
|
||||
}
|
||||
runTestCases(t, testCases)
|
||||
}
|
||||
|
||||
func TestResourceChange_actionReason(t *testing.T) {
|
||||
emptySchema := &configschema.Block{}
|
||||
nullVal := cty.NullVal(cty.EmptyObject)
|
||||
emptyVal := cty.EmptyObjectVal
|
||||
|
||||
testCases := map[string]testCase{
|
||||
"delete for no particular reason": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceChangeNoReason,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example will be destroyed
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because of wrong repetition mode (NoKey)": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseWrongRepetition,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
InstanceKey: addrs.NoKey,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example will be destroyed
|
||||
# (because resource uses count or for_each)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because of wrong repetition mode (IntKey)": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseWrongRepetition,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
InstanceKey: addrs.IntKey(1),
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example[1] will be destroyed
|
||||
# (because resource does not use count)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because of wrong repetition mode (StringKey)": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseWrongRepetition,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
InstanceKey: addrs.StringKey("a"),
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example["a"] will be destroyed
|
||||
# (because resource does not use for_each)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because no resource configuration": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseNoResourceConfig,
|
||||
ModuleInst: addrs.RootModuleInstance.Child("foo", addrs.NoKey),
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # module.foo.test_instance.example will be destroyed
|
||||
# (because test_instance.example is not in configuration)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because no module": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseNoModule,
|
||||
ModuleInst: addrs.RootModuleInstance.Child("foo", addrs.IntKey(1)),
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # module.foo[1].test_instance.example will be destroyed
|
||||
# (because module.foo[1] is not in configuration)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because out of range for count": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseCountIndex,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
InstanceKey: addrs.IntKey(1),
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example[1] will be destroyed
|
||||
# (because index [1] is out of range for count)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"delete because out of range for for_each": {
|
||||
Action: plans.Delete,
|
||||
ActionReason: plans.ResourceInstanceDeleteBecauseEachKey,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
InstanceKey: addrs.StringKey("boop"),
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example["boop"] will be destroyed
|
||||
# (because key ["boop"] is not in for_each map)
|
||||
- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace for no particular reason (delete first)": {
|
||||
Action: plans.DeleteThenCreate,
|
||||
ActionReason: plans.ResourceInstanceChangeNoReason,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example must be replaced
|
||||
-/+ resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace for no particular reason (create first)": {
|
||||
Action: plans.CreateThenDelete,
|
||||
ActionReason: plans.ResourceInstanceChangeNoReason,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example must be replaced
|
||||
+/- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace by request (delete first)": {
|
||||
Action: plans.DeleteThenCreate,
|
||||
ActionReason: plans.ResourceInstanceReplaceByRequest,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example will be replaced, as requested
|
||||
-/+ resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace by request (create first)": {
|
||||
Action: plans.CreateThenDelete,
|
||||
ActionReason: plans.ResourceInstanceReplaceByRequest,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example will be replaced, as requested
|
||||
+/- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace because tainted (delete first)": {
|
||||
Action: plans.DeleteThenCreate,
|
||||
ActionReason: plans.ResourceInstanceReplaceBecauseTainted,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example is tainted, so must be replaced
|
||||
-/+ resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace because tainted (create first)": {
|
||||
Action: plans.CreateThenDelete,
|
||||
ActionReason: plans.ResourceInstanceReplaceBecauseTainted,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example is tainted, so must be replaced
|
||||
+/- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace because cannot update (delete first)": {
|
||||
Action: plans.DeleteThenCreate,
|
||||
ActionReason: plans.ResourceInstanceReplaceBecauseCannotUpdate,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
// This one has no special message, because the fuller explanation
|
||||
// typically appears inline as a "# forces replacement" comment.
|
||||
// (not shown here)
|
||||
ExpectedOutput: ` # test_instance.example must be replaced
|
||||
-/+ resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
"replace because cannot update (create first)": {
|
||||
Action: plans.CreateThenDelete,
|
||||
ActionReason: plans.ResourceInstanceReplaceBecauseCannotUpdate,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: emptyVal,
|
||||
After: nullVal,
|
||||
Schema: emptySchema,
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
// This one has no special message, because the fuller explanation
|
||||
// typically appears inline as a "# forces replacement" comment.
|
||||
// (not shown here)
|
||||
ExpectedOutput: ` # test_instance.example must be replaced
|
||||
+/- resource "test_instance" "example" {}
|
||||
`,
|
||||
},
|
||||
}
|
||||
|
||||
runTestCases(t, testCases)
|
||||
}
|
||||
|
||||
|
@ -4147,10 +4671,85 @@ func TestResourceChange_sensitiveVariable(t *testing.T) {
|
|||
runTestCases(t, testCases)
|
||||
}
|
||||
|
||||
func TestResourceChange_moved(t *testing.T) {
|
||||
prevRunAddr := addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_instance",
|
||||
Name: "previous",
|
||||
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance)
|
||||
|
||||
testCases := map[string]testCase{
|
||||
"moved and updated": {
|
||||
PrevRunAddr: prevRunAddr,
|
||||
Action: plans.Update,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("12345"),
|
||||
"foo": cty.StringVal("hello"),
|
||||
"bar": cty.StringVal("baz"),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("12345"),
|
||||
"foo": cty.StringVal("hello"),
|
||||
"bar": cty.StringVal("boop"),
|
||||
}),
|
||||
Schema: &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"id": {Type: cty.String, Computed: true},
|
||||
"foo": {Type: cty.String, Optional: true},
|
||||
"bar": {Type: cty.String, Optional: true},
|
||||
},
|
||||
},
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.example will be updated in-place
|
||||
# (moved from test_instance.previous)
|
||||
~ resource "test_instance" "example" {
|
||||
~ bar = "baz" -> "boop"
|
||||
id = "12345"
|
||||
# (1 unchanged attribute hidden)
|
||||
}
|
||||
`,
|
||||
},
|
||||
"moved without changes": {
|
||||
PrevRunAddr: prevRunAddr,
|
||||
Action: plans.NoOp,
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Before: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("12345"),
|
||||
"foo": cty.StringVal("hello"),
|
||||
"bar": cty.StringVal("baz"),
|
||||
}),
|
||||
After: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("12345"),
|
||||
"foo": cty.StringVal("hello"),
|
||||
"bar": cty.StringVal("baz"),
|
||||
}),
|
||||
Schema: &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"id": {Type: cty.String, Computed: true},
|
||||
"foo": {Type: cty.String, Optional: true},
|
||||
"bar": {Type: cty.String, Optional: true},
|
||||
},
|
||||
},
|
||||
RequiredReplace: cty.NewPathSet(),
|
||||
ExpectedOutput: ` # test_instance.previous has moved to test_instance.example
|
||||
resource "test_instance" "example" {
|
||||
id = "12345"
|
||||
# (2 unchanged attributes hidden)
|
||||
}
|
||||
`,
|
||||
},
|
||||
}
|
||||
|
||||
runTestCases(t, testCases)
|
||||
}
|
||||
|
||||
type testCase struct {
|
||||
Action plans.Action
|
||||
ActionReason plans.ResourceInstanceChangeActionReason
|
||||
ModuleInst addrs.ModuleInstance
|
||||
Mode addrs.ResourceMode
|
||||
InstanceKey addrs.InstanceKey
|
||||
DeposedKey states.DeposedKey
|
||||
Before cty.Value
|
||||
BeforeValMarks []cty.PathValueMarks
|
||||
|
@ -4159,6 +4758,7 @@ type testCase struct {
|
|||
Schema *configschema.Block
|
||||
RequiredReplace cty.PathSet
|
||||
ExpectedOutput string
|
||||
PrevRunAddr addrs.AbsResourceInstance
|
||||
}
|
||||
|
||||
func runTestCases(t *testing.T, testCases map[string]testCase) {
|
||||
|
@ -4192,12 +4792,22 @@ func runTestCases(t *testing.T, testCases map[string]testCase) {
|
|||
t.Fatal(err)
|
||||
}
|
||||
|
||||
change := &plans.ResourceInstanceChangeSrc{
|
||||
Addr: addrs.Resource{
|
||||
addr := addrs.Resource{
|
||||
Mode: tc.Mode,
|
||||
Type: "test_instance",
|
||||
Name: "example",
|
||||
}.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance),
|
||||
}.Instance(tc.InstanceKey).Absolute(tc.ModuleInst)
|
||||
|
||||
prevRunAddr := tc.PrevRunAddr
|
||||
// If no previous run address is given, reuse the current address
|
||||
// to make initialization easier
|
||||
if prevRunAddr.Resource.Resource.Type == "" {
|
||||
prevRunAddr = addr
|
||||
}
|
||||
|
||||
change := &plans.ResourceInstanceChangeSrc{
|
||||
Addr: addr,
|
||||
PrevRunAddr: prevRunAddr,
|
||||
DeposedKey: tc.DeposedKey,
|
||||
ProviderAddr: addrs.AbsProviderConfig{
|
||||
Provider: addrs.NewDefaultProvider("test"),
|
||||
|
@ -4214,7 +4824,7 @@ func runTestCases(t *testing.T, testCases map[string]testCase) {
|
|||
RequiredReplace: tc.RequiredReplace,
|
||||
}
|
||||
|
||||
output := ResourceChange(change, tc.Schema, color)
|
||||
output := ResourceChange(change, tc.Schema, color, DiffLanguageProposedChange)
|
||||
if diff := cmp.Diff(output, tc.ExpectedOutput); diff != "" {
|
||||
t.Errorf("wrong output\n%s", diff)
|
||||
}
|
||||
|
|
|
@ -0,0 +1,29 @@
|
|||
// Code generated by "stringer -type=DiffLanguage diff.go"; DO NOT EDIT.
|
||||
|
||||
package format
|
||||
|
||||
import "strconv"
|
||||
|
||||
func _() {
|
||||
// An "invalid array index" compiler error signifies that the constant values have changed.
|
||||
// Re-run the stringer command to generate them again.
|
||||
var x [1]struct{}
|
||||
_ = x[DiffLanguageProposedChange-80]
|
||||
_ = x[DiffLanguageDetectedDrift-68]
|
||||
}
|
||||
|
||||
const (
|
||||
_DiffLanguage_name_0 = "DiffLanguageDetectedDrift"
|
||||
_DiffLanguage_name_1 = "DiffLanguageProposedChange"
|
||||
)
|
||||
|
||||
func (i DiffLanguage) String() string {
|
||||
switch {
|
||||
case i == 68:
|
||||
return _DiffLanguage_name_0
|
||||
case i == 80:
|
||||
return _DiffLanguage_name_1
|
||||
default:
|
||||
return "DiffLanguage(" + strconv.FormatInt(int64(i), 10) + ")"
|
||||
}
|
||||
}
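// Quick illustration of the generated Stringer above (the values follow from
// the rune constants 'P' == 80 and 'D' == 68 declared in diff.go; this snippet
// is not part of the generated file):
//
//	fmt.Println(DiffLanguageProposedChange.String()) // "DiffLanguageProposedChange"
//	fmt.Println(DiffLanguage('X').String())          // "DiffLanguage(88)"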
|
|
@ -1,6 +1,7 @@
|
|||
package format
|
||||
|
||||
import (
|
||||
"github.com/hashicorp/terraform/internal/lang/marks"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
|
@ -30,6 +31,11 @@ func ObjectValueID(obj cty.Value) (k, v string) {
|
|||
|
||||
case atys["id"] == cty.String:
|
||||
v := obj.GetAttr("id")
|
||||
if v.HasMark(marks.Sensitive) {
|
||||
break
|
||||
}
|
||||
v, _ = v.Unmark()
|
||||
|
||||
if v.IsKnown() && !v.IsNull() {
|
||||
return "id", v.AsString()
|
||||
}
|
||||
|
@ -38,6 +44,11 @@ func ObjectValueID(obj cty.Value) (k, v string) {
|
|||
// "name" isn't always globally unique, but if there isn't also an
|
||||
// "id" then it _often_ is, in practice.
|
||||
v := obj.GetAttr("name")
|
||||
if v.HasMark(marks.Sensitive) {
|
||||
break
|
||||
}
|
||||
v, _ = v.Unmark()
|
||||
|
||||
if v.IsKnown() && !v.IsNull() {
|
||||
return "name", v.AsString()
|
||||
}
|
||||
|
@ -77,25 +88,41 @@ func ObjectValueName(obj cty.Value) (k, v string) {
|
|||
|
||||
case atys["name"] == cty.String:
|
||||
v := obj.GetAttr("name")
|
||||
if v.HasMark(marks.Sensitive) {
|
||||
break
|
||||
}
|
||||
v, _ = v.Unmark()
|
||||
|
||||
if v.IsKnown() && !v.IsNull() {
|
||||
return "name", v.AsString()
|
||||
}
|
||||
|
||||
case atys["tags"].IsMapType() && atys["tags"].ElementType() == cty.String:
|
||||
tags := obj.GetAttr("tags")
|
||||
if tags.IsNull() || !tags.IsWhollyKnown() {
|
||||
if tags.IsNull() || !tags.IsWhollyKnown() || tags.HasMark(marks.Sensitive) {
|
||||
break
|
||||
}
|
||||
tags, _ = tags.Unmark()
|
||||
|
||||
switch {
|
||||
case tags.HasIndex(cty.StringVal("name")).RawEquals(cty.True):
|
||||
v := tags.Index(cty.StringVal("name"))
|
||||
if v.HasMark(marks.Sensitive) {
|
||||
break
|
||||
}
|
||||
v, _ = v.Unmark()
|
||||
|
||||
if v.IsKnown() && !v.IsNull() {
|
||||
return "tags.name", v.AsString()
|
||||
}
|
||||
case tags.HasIndex(cty.StringVal("Name")).RawEquals(cty.True):
|
||||
// AWS-style naming convention
|
||||
v := tags.Index(cty.StringVal("Name"))
|
||||
if v.HasMark(marks.Sensitive) {
|
||||
break
|
||||
}
|
||||
v, _ = v.Unmark()
|
||||
|
||||
if v.IsKnown() && !v.IsNull() {
|
||||
return "tags.Name", v.AsString()
|
||||
}
|
||||
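// Sketch of the go-cty marking behaviour the additions above depend on (an
// assumption drawn from the calls used in this file, not something stated in
// the diff): reading a value that still carries a mark with AsString is not
// allowed, so the code first bails out via HasMark(marks.Sensitive) and
// otherwise strips the mark with Unmark before reading it.
//
//	v := cty.StringVal("web-01").Mark(marks.Sensitive)
//	v.HasMark(marks.Sensitive) // true
//	v, _ = v.Unmark()
//	v.AsString()               // "web-01"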
|
|
|
@ -4,6 +4,7 @@ import (
|
|||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/hashicorp/terraform/internal/lang/marks"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
|
@ -57,6 +58,14 @@ func TestObjectValueIDOrName(t *testing.T) {
|
|||
[...]string{"name", "awesome-foo"},
|
||||
[...]string{"name", "awesome-foo"},
|
||||
},
|
||||
{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"name": cty.StringVal("awesome-foo").Mark(marks.Sensitive),
|
||||
}),
|
||||
[...]string{"", ""},
|
||||
[...]string{"", ""},
|
||||
[...]string{"", ""},
|
||||
},
|
||||
{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"name": cty.StringVal("awesome-foo"),
|
||||
|
@ -161,6 +170,16 @@ func TestObjectValueIDOrName(t *testing.T) {
|
|||
[...]string{"", ""},
|
||||
[...]string{"", ""},
|
||||
},
|
||||
{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"tags": cty.MapVal(map[string]cty.Value{
|
||||
"Name": cty.UnknownVal(cty.String).Mark(marks.Sensitive),
|
||||
}),
|
||||
}),
|
||||
[...]string{"", ""},
|
||||
[...]string{"", ""},
|
||||
[...]string{"", ""},
|
||||
},
|
||||
{
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"tags": cty.MapVal(map[string]cty.Value{
|
||||
|
|
|
@ -1,7 +1,6 @@
|
|||
package command
|
||||
|
||||
import (
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
|
@ -9,17 +8,15 @@ import (
|
|||
)
|
||||
|
||||
func TestGet(t *testing.T) {
|
||||
td := tempDir(t)
|
||||
testCopyDir(t, testFixturePath("get"), td)
|
||||
defer os.RemoveAll(td)
|
||||
defer testChdir(t, td)()
|
||||
wd := tempWorkingDirFixture(t, "get")
|
||||
defer testChdir(t, wd.RootModuleDir())()
|
||||
|
||||
ui := new(cli.MockUi)
|
||||
ui := cli.NewMockUi()
|
||||
c := &GetCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(testProvider()),
|
||||
Ui: ui,
|
||||
dataDir: tempDir(t),
|
||||
WorkingDir: wd,
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -35,12 +32,16 @@ func TestGet(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestGet_multipleArgs(t *testing.T) {
|
||||
ui := new(cli.MockUi)
|
||||
wd, cleanup := tempWorkingDir(t)
|
||||
defer cleanup()
|
||||
defer testChdir(t, wd.RootModuleDir())()
|
||||
|
||||
ui := cli.NewMockUi()
|
||||
c := &GetCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(testProvider()),
|
||||
Ui: ui,
|
||||
dataDir: tempDir(t),
|
||||
WorkingDir: wd,
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -54,17 +55,15 @@ func TestGet_multipleArgs(t *testing.T) {
|
|||
}
|
||||
|
||||
func TestGet_update(t *testing.T) {
|
||||
td := tempDir(t)
|
||||
testCopyDir(t, testFixturePath("get"), td)
|
||||
defer os.RemoveAll(td)
|
||||
defer testChdir(t, td)()
|
||||
wd := tempWorkingDirFixture(t, "get")
|
||||
defer testChdir(t, wd.RootModuleDir())()
|
||||
|
||||
ui := new(cli.MockUi)
|
||||
ui := cli.NewMockUi()
|
||||
c := &GetCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(testProvider()),
|
||||
Ui: ui,
|
||||
dataDir: tempDir(t),
|
||||
WorkingDir: wd,
|
||||
},
|
||||
}
|
||||
|
||||
|
|
|
@ -4,12 +4,12 @@ import (
|
|||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform/internal/plans/planfile"
|
||||
"github.com/hashicorp/terraform/internal/tfdiags"
|
||||
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/dag"
|
||||
"github.com/hashicorp/terraform/internal/plans"
|
||||
"github.com/hashicorp/terraform/internal/plans/planfile"
|
||||
"github.com/hashicorp/terraform/internal/terraform"
|
||||
"github.com/hashicorp/terraform/internal/tfdiags"
|
||||
)
|
||||
|
||||
// GraphCommand is a Command implementation that takes a Terraform
|
||||
|
@ -103,35 +103,64 @@ func (c *GraphCommand) Run(args []string) int {
|
|||
}
|
||||
|
||||
// Get the context
|
||||
ctx, _, ctxDiags := local.Context(opReq)
|
||||
lr, _, ctxDiags := local.LocalRun(opReq)
|
||||
diags = diags.Append(ctxDiags)
|
||||
if ctxDiags.HasErrors() {
|
||||
c.showDiagnostics(diags)
|
||||
return 1
|
||||
}
|
||||
|
||||
// Determine the graph type
|
||||
graphType := terraform.GraphTypePlan
|
||||
if planFile != nil {
|
||||
graphType = terraform.GraphTypeApply
|
||||
if graphTypeStr == "" {
|
||||
switch {
|
||||
case lr.Plan != nil:
|
||||
graphTypeStr = "apply"
|
||||
default:
|
||||
graphTypeStr = "plan"
|
||||
}
|
||||
}
|
||||
|
||||
if graphTypeStr != "" {
|
||||
v, ok := terraform.GraphTypeMap[graphTypeStr]
|
||||
if !ok {
|
||||
c.Ui.Error(fmt.Sprintf("Invalid graph type requested: %s", graphTypeStr))
|
||||
return 1
|
||||
var g *terraform.Graph
|
||||
var graphDiags tfdiags.Diagnostics
|
||||
switch graphTypeStr {
|
||||
case "plan":
|
||||
g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.NormalMode)
|
||||
case "plan-refresh-only":
|
||||
g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.RefreshOnlyMode)
|
||||
case "plan-destroy":
|
||||
g, graphDiags = lr.Core.PlanGraphForUI(lr.Config, lr.InputState, plans.DestroyMode)
|
||||
case "apply":
|
||||
plan := lr.Plan
|
||||
|
||||
// Historically "terraform graph" would allow the nonsensical request to
|
||||
// render an apply graph without a plan, so we continue to support that
|
||||
// here, though perhaps one day this should be an error.
|
||||
if lr.Plan == nil {
|
||||
plan = &plans.Plan{
|
||||
Changes: plans.NewChanges(),
|
||||
UIMode: plans.NormalMode,
|
||||
PriorState: lr.InputState,
|
||||
PrevRunState: lr.InputState,
|
||||
}
|
||||
}
|
||||
|
||||
graphType = v
|
||||
g, graphDiags = lr.Core.ApplyGraphForUI(plan, lr.Config)
|
||||
case "eval", "validate":
|
||||
// Terraform v0.12 through v1.0 supported both of these, but the
|
||||
// graph variants for "eval" and "validate" are purely implementation
|
||||
// details and don't reveal anything (user-model-wise) that you can't
|
||||
// see in the plan graph.
|
||||
graphDiags = graphDiags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Graph type no longer available",
|
||||
fmt.Sprintf("The graph type %q is no longer available. Use -type=plan instead to get a similar result.", graphTypeStr),
|
||||
))
|
||||
default:
|
||||
graphDiags = graphDiags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Unsupported graph type",
|
||||
`The -type=... argument must be either "plan", "plan-refresh-only", "plan-destroy", or "apply".`,
|
||||
))
|
||||
}
|
||||
|
||||
// Skip validation during graph generation - we want to see the graph even if
|
||||
// it is invalid for some reason.
|
||||
g, graphDiags := ctx.Graph(graphType, &terraform.ContextGraphOpts{
|
||||
Verbose: verbose,
|
||||
Validate: false,
|
||||
})
|
||||
diags = diags.Append(graphDiags)
|
||||
if graphDiags.HasErrors() {
|
||||
c.showDiagnostics(diags)
|
||||
|
@ -165,19 +194,13 @@ func (c *GraphCommand) Help() string {
|
|||
helpText := `
|
||||
Usage: terraform [global options] graph [options]
|
||||
|
||||
Outputs the visual execution graph of Terraform resources according to
|
||||
either the current configuration or an execution plan.
|
||||
Produces a representation of the dependency graph between different
|
||||
objects in the current configuration and state.
|
||||
|
||||
The graph is outputted in DOT format. The typical program that can
|
||||
The graph is presented in the DOT language. The typical program that can
|
||||
read this format is GraphViz, but many web services are also available
|
||||
to read this format.
|
||||
|
||||
The -type flag can be used to control the type of graph shown. Terraform
|
||||
creates different graphs for different operations. See the options below
|
||||
for the list of types supported. The default type is "plan" if a
|
||||
configuration is given, and "apply" if a plan file is passed as an
|
||||
argument.
|
||||
|
||||
Options:
|
||||
|
||||
-plan=tfplan Render graph using the specified plan file instead of the
|
||||
|
@ -186,8 +209,9 @@ Options:
|
|||
-draw-cycles Highlight any cycles in the graph with colored edges.
|
||||
This helps when diagnosing cycle errors.
|
||||
|
||||
-type=plan Type of graph to output. Can be: plan, plan-destroy, apply,
|
||||
validate, input, refresh.
|
||||
-type=plan Type of graph to output. Can be: plan, plan-refresh-only,
|
||||
plan-destroy, or apply. By default Terraform chooses
|
||||
"plan", or "apply" if you also set the -plan=... option.
|
||||
|
||||
-module-depth=n (deprecated) In prior versions of Terraform, specified the
|
||||
depth of modules to show in the output.
|
||||
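// Example invocations consistent with the help text above (illustrative only;
// the -type and -plan flags are the ones documented in this diff, and dot is
// the usual GraphViz renderer):
//
//	terraform graph -type=plan-refresh-only | dot -Tsvg > graph.svg
//	terraform graph -plan=tfplan   # defaults to -type=apply when a plan is given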
|
|
|
@ -212,7 +212,7 @@ func (c *ImportCommand) Run(args []string) int {
|
|||
}
|
||||
|
||||
// Get the context
|
||||
ctx, state, ctxDiags := local.Context(opReq)
|
||||
lr, state, ctxDiags := local.LocalRun(opReq)
|
||||
diags = diags.Append(ctxDiags)
|
||||
if ctxDiags.HasErrors() {
|
||||
c.showDiagnostics(diags)
|
||||
|
@ -230,13 +230,18 @@ func (c *ImportCommand) Run(args []string) int {
|
|||
// Perform the import. Note that as you can see it is possible for this
|
||||
// API to import more than one resource at once. For now, we only allow
|
||||
// one while we stabilize this feature.
|
||||
newState, importDiags := ctx.Import(&terraform.ImportOpts{
|
||||
newState, importDiags := lr.Core.Import(lr.Config, lr.InputState, &terraform.ImportOpts{
|
||||
Targets: []*terraform.ImportTarget{
|
||||
&terraform.ImportTarget{
|
||||
{
|
||||
Addr: addr,
|
||||
ID: args[1],
|
||||
},
|
||||
},
|
||||
|
||||
// The LocalRun idea is designed around our primary operations, so
|
||||
// the input variables end up represented as plan options even though
|
||||
// this particular operation isn't really a plan.
|
||||
SetVariables: lr.PlanOpts.SetVariables,
|
||||
})
|
||||
diags = diags.Append(importDiags)
|
||||
if diags.HasErrors() {
|
||||
|
|
|
@ -331,8 +331,8 @@ func TestImport_initializationErrorShouldUnlock(t *testing.T) {
|
|||
}
|
||||
|
||||
// specifically, it should fail due to a missing provider
|
||||
msg := ui.ErrorWriter.String()
|
||||
if want := `unknown provider "registry.terraform.io/hashicorp/unknown"`; !strings.Contains(msg, want) {
|
||||
msg := strings.ReplaceAll(ui.ErrorWriter.String(), "\n", " ")
|
||||
if want := `provider registry.terraform.io/hashicorp/unknown: required by this configuration but no version is selected`; !strings.Contains(msg, want) {
|
||||
t.Errorf("incorrect message\nwant substring: %s\ngot:\n%s", want, msg)
|
||||
}
|
||||
|
||||
|
|
|
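For reference, the set of values accepted by the reworked -type flag can be checked up front with a small helper like the sketch below. This is an illustrative standalone snippet, not the command's actual implementation; the function name validateGraphType and its error text are invented here, only the four accepted values come from the diagnostic above.

package main

import "fmt"

// validateGraphType is a hypothetical helper mirroring the values the
// reworked `terraform graph -type=...` flag accepts, per the diagnostic
// message above. It is not the real implementation in command/graph.go.
func validateGraphType(s string) error {
	switch s {
	case "plan", "plan-refresh-only", "plan-destroy", "apply":
		return nil
	default:
		return fmt.Errorf("unsupported graph type %q: must be one of plan, plan-refresh-only, plan-destroy, or apply", s)
	}
}

func main() {
	fmt.Println(validateGraphType("plan-refresh-only")) // <nil>
	fmt.Println(validateGraphType("validate"))          // unsupported graph type "validate": ...
}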
@ -150,19 +150,7 @@ func (c *InitCommand) Run(args []string) int {
|
|||
// initialization functionality remains built around "earlyconfig" and
|
||||
// so we need to still load the module via that mechanism anyway until we
|
||||
// can do some more invasive refactoring here.
|
||||
rootMod, confDiags := c.loadSingleModule(path)
|
||||
rootModEarly, earlyConfDiags := c.loadSingleModuleEarly(path)
|
||||
if confDiags.HasErrors() {
|
||||
c.Ui.Error(c.Colorize().Color(strings.TrimSpace(errInitConfigError)))
|
||||
// TODO: It would be nice to check the version constraints in
|
||||
// rootModEarly.RequiredCore and print out a hint if the module is
|
||||
// declaring that it's not compatible with this version of Terraform,
|
||||
// though we're deferring that for now because we're intending to
|
||||
// refactor our use of "earlyconfig" here anyway and so whatever we
|
||||
// might do here right now would likely be invalidated by that.
|
||||
c.showDiagnostics(confDiags)
|
||||
return 1
|
||||
}
|
||||
// If _only_ the early loader encountered errors then that's unusual
|
||||
// (it should generally be a superset of the normal loader) but we'll
|
||||
// return those errors anyway since otherwise we'll probably get
|
||||
|
@ -172,7 +160,12 @@ func (c *InitCommand) Run(args []string) int {
|
|||
c.Ui.Error(c.Colorize().Color(strings.TrimSpace(errInitConfigError)))
|
||||
// Errors from the early loader are generally not as high-quality since
|
||||
// it has less context to work with.
|
||||
diags = diags.Append(confDiags)
|
||||
|
||||
// TODO: It would be nice to check the version constraints in
|
||||
// rootModEarly.RequiredCore and print out a hint if the module is
|
||||
// declaring that it's not compatible with this version of Terraform,
|
||||
// and that may be what caused earlyconfig to fail.
|
||||
diags = diags.Append(earlyConfDiags)
|
||||
c.showDiagnostics(diags)
|
||||
return 1
|
||||
}
|
||||
|
@@ -192,6 +185,20 @@ func (c *InitCommand) Run(args []string) int {
// With all of the modules (hopefully) installed, we can now try to load the
// whole configuration tree.
config, confDiags := c.loadConfig(path)
// configDiags will be handled after the version constraint check, since an
// incorrect version of terraform may be producing errors for configuration
// constructs added in later versions.

// Before we go further, we'll check to make sure none of the modules in
// the configuration declare that they don't support this Terraform
// version, so we can produce a version-related error message rather than
// potentially-confusing downstream errors.
versionDiags := terraform.CheckCoreVersionRequirements(config)
if versionDiags.HasErrors() {
c.showDiagnostics(versionDiags)
return 1
}

diags = diags.Append(confDiags)
if confDiags.HasErrors() {
c.Ui.Error(strings.TrimSpace(errInitConfigError))

@@ -199,21 +206,10 @@ func (c *InitCommand) Run(args []string) int {
return 1
}

// Before we go further, we'll check to make sure none of the modules in the
// configuration declare that they don't support this Terraform version, so
// we can produce a version-related error message rather than
// potentially-confusing downstream errors.
versionDiags := terraform.CheckCoreVersionRequirements(config)
diags = diags.Append(versionDiags)
if versionDiags.HasErrors() {
c.showDiagnostics(diags)
return 1
}

var back backend.Backend
if flagBackend {

be, backendOutput, backendDiags := c.initBackend(rootMod, flagConfigExtra)
be, backendOutput, backendDiags := c.initBackend(config.Module, flagConfigExtra)
diags = diags.Append(backendDiags)
if backendDiags.HasErrors() {
c.showDiagnostics(diags)
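The hunk above moves the core version check ahead of the configuration diagnostics, so an unmet required_version constraint is reported instead of whatever parse errors an older binary would produce. A minimal sketch of that ordering pattern, where diags, loadConfig, and checkVersion are invented stand-ins for tfdiags.Diagnostics, c.loadConfig, and terraform.CheckCoreVersionRequirements:

package main

import "fmt"

// diags is a stand-in for tfdiags.Diagnostics.
type diags []string

func (d diags) HasErrors() bool { return len(d) > 0 }

func loadConfig() (string, diags) {
	return "config", diags{"Unsupported block type (added in a newer Terraform)"}
}

func checkVersion(config string) diags {
	return diags{"Unsupported Terraform Core version"}
}

func main() {
	config, confDiags := loadConfig()

	// Report the version problem first: parse errors from an older binary
	// would only obscure the real cause.
	if versionDiags := checkVersion(config); versionDiags.HasErrors() {
		fmt.Println(versionDiags[0])
		return
	}
	if confDiags.HasErrors() {
		fmt.Println(confDiags[0])
	}
}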
@ -540,11 +540,6 @@ func TestInit_backendConfigFileChange(t *testing.T) {
|
|||
defer os.RemoveAll(td)
|
||||
defer testChdir(t, td)()
|
||||
|
||||
// Ask input
|
||||
defer testInputMap(t, map[string]string{
|
||||
"backend-migrate-to-new": "no",
|
||||
})()
|
||||
|
||||
ui := new(cli.MockUi)
|
||||
view, _ := testView(t)
|
||||
c := &InitCommand{
|
||||
|
@ -1613,6 +1608,59 @@ func TestInit_checkRequiredVersion(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
// Verify that init will error out with an invalid version constraint, even if
|
||||
// there are other invalid configuration constructs.
|
||||
func TestInit_checkRequiredVersionFirst(t *testing.T) {
|
||||
t.Run("root_module", func(t *testing.T) {
|
||||
td := t.TempDir()
|
||||
testCopyDir(t, testFixturePath("init-check-required-version-first"), td)
|
||||
defer testChdir(t, td)()
|
||||
|
||||
ui := cli.NewMockUi()
|
||||
view, _ := testView(t)
|
||||
c := &InitCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(testProvider()),
|
||||
Ui: ui,
|
||||
View: view,
|
||||
},
|
||||
}
|
||||
|
||||
args := []string{}
|
||||
if code := c.Run(args); code != 1 {
|
||||
t.Fatalf("got exit status %d; want 1\nstderr:\n%s\n\nstdout:\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String())
|
||||
}
|
||||
errStr := ui.ErrorWriter.String()
|
||||
if !strings.Contains(errStr, `Unsupported Terraform Core version`) {
|
||||
t.Fatalf("output should point to unmet version constraint, but is:\n\n%s", errStr)
|
||||
}
|
||||
})
|
||||
t.Run("sub_module", func(t *testing.T) {
|
||||
td := t.TempDir()
|
||||
testCopyDir(t, testFixturePath("init-check-required-version-first-module"), td)
|
||||
defer testChdir(t, td)()
|
||||
|
||||
ui := cli.NewMockUi()
|
||||
view, _ := testView(t)
|
||||
c := &InitCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(testProvider()),
|
||||
Ui: ui,
|
||||
View: view,
|
||||
},
|
||||
}
|
||||
|
||||
args := []string{}
|
||||
if code := c.Run(args); code != 1 {
|
||||
t.Fatalf("got exit status %d; want 1\nstderr:\n%s\n\nstdout:\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String())
|
||||
}
|
||||
errStr := ui.ErrorWriter.String()
|
||||
if !strings.Contains(errStr, `Unsupported Terraform Core version`) {
|
||||
t.Fatalf("output should point to unmet version constraint, but is:\n\n%s", errStr)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestInit_providerLockFile(t *testing.T) {
|
||||
// Create a temporary working directory that is empty
|
||||
td := tempDir(t)
@@ -10,6 +10,7 @@ import (
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/lang"
"github.com/hashicorp/terraform/internal/lang/blocktoattr"
"github.com/zclconf/go-cty/cty"
ctyjson "github.com/zclconf/go-cty/cty/json"
)

@@ -96,6 +97,9 @@ func marshalExpressions(body hcl.Body, schema *configschema.Block) expressions {
// (lowSchema is an hcl.BodySchema:
// https://godoc.org/github.com/hashicorp/hcl/v2/hcl#BodySchema )

// fix any ConfigModeAttr blocks present from legacy providers
body = blocktoattr.FixUpBlockAttrs(body, schema)

// Use the low-level schema with the body to decode one level We'll just
// ignore any additional content that's not covered by the schema, which
// will effectively ignore "dynamic" blocks, and may also ignore other
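FixUpBlockAttrs handles the legacy ConfigModeAttr case: an argument that the schema declares as an attribute with a list-of-objects type, but which users write with nested block syntax. A sketch of the kind of schema involved, shown as it would appear inside this repository (the attribute name block_to_attr is taken from the test change below; the rest is illustrative, not a real provider schema):

package main

import (
	"fmt"

	"github.com/hashicorp/terraform/internal/configs/configschema"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// An attribute whose type is a list of objects. Legacy providers let
	// users write this with nested block syntax, which
	// blocktoattr.FixUpBlockAttrs rewrites into attribute form before the
	// body is decoded against the low-level schema.
	schema := &configschema.Block{
		Attributes: map[string]*configschema.Attribute{
			"block_to_attr": {
				Type: cty.List(cty.Object(map[string]cty.Type{
					"foo": cty.String,
				})),
				Optional: true,
			},
		},
	}
	fmt.Println(schema.Attributes["block_to_attr"].Type.FriendlyName())
}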
@ -80,6 +80,29 @@ func TestMarshalExpressions(t *testing.T) {
|
|||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
hcltest.MockBody(&hcl.BodyContent{
|
||||
Blocks: hcl.Blocks{
|
||||
{
|
||||
Type: "block_to_attr",
|
||||
Body: hcltest.MockBody(&hcl.BodyContent{
|
||||
|
||||
Attributes: hcl.Attributes{
|
||||
"foo": {
|
||||
Name: "foo",
|
||||
Expr: hcltest.MockExprTraversalSrc(`module.foo.bar`),
|
||||
},
|
||||
},
|
||||
}),
|
||||
},
|
||||
},
|
||||
}),
|
||||
expressions{
|
||||
"block_to_attr": expression{
|
||||
References: []string{"module.foo.bar", "module.foo"},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, test := range tests {
|
||||
|
@ -89,6 +112,11 @@ func TestMarshalExpressions(t *testing.T) {
|
|||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"block_to_attr": {
|
||||
Type: cty.List(cty.Object(map[string]cty.Type{
|
||||
"foo": cty.String,
|
||||
})),
|
||||
},
|
||||
},
|
||||
}
@@ -22,7 +22,7 @@ import (
// FormatVersion represents the version of the json format and will be
// incremented for any change to this format that requires changes to a
// consuming parser.
const FormatVersion = "0.2"
const FormatVersion = "1.0"

// Plan is the top-level representation of the json format of a plan. It includes
// the complete config and current state.
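Because the format version jumps from 0.2 to 1.0, external consumers of the JSON plan output are expected to check it before parsing further. A minimal, hypothetical consumer-side check; the planHeader struct and the error text are illustrative, only the format_version key comes from the format itself:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// planHeader captures only the field needed for the compatibility check.
type planHeader struct {
	FormatVersion string `json:"format_version"`
}

func checkFormatVersion(raw []byte) error {
	var hdr planHeader
	if err := json.Unmarshal(raw, &hdr); err != nil {
		return err
	}
	// Treat anything other than major version 1 as unsupported.
	if !strings.HasPrefix(hdr.FormatVersion, "1.") {
		return fmt.Errorf("unsupported plan format version %q", hdr.FormatVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkFormatVersion([]byte(`{"format_version":"1.0"}`))) // <nil>
	fmt.Println(checkFormatVersion([]byte(`{"format_version":"0.2"}`))) // unsupported plan format version "0.2"
}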
@ -130,15 +130,33 @@ func Marshal(
|
|||
}
|
||||
|
||||
// output.ResourceDrift
|
||||
err = output.marshalResourceDrift(p.PrevRunState, p.PriorState, schemas)
|
||||
if len(p.DriftedResources) > 0 {
|
||||
// In refresh-only mode, we render all resources marked as drifted,
|
||||
// including those which have moved without other changes. In other plan
|
||||
// modes, move-only changes will be included in the planned changes, so
|
||||
// we skip them here.
|
||||
var driftedResources []*plans.ResourceInstanceChangeSrc
|
||||
if p.UIMode == plans.RefreshOnlyMode {
|
||||
driftedResources = p.DriftedResources
|
||||
} else {
|
||||
for _, dr := range p.DriftedResources {
|
||||
if dr.Action != plans.NoOp {
|
||||
driftedResources = append(driftedResources, dr)
|
||||
}
|
||||
}
|
||||
}
|
||||
output.ResourceDrift, err = output.marshalResourceChanges(driftedResources, schemas)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error in marshalResourceDrift: %s", err)
|
||||
return nil, fmt.Errorf("error in marshaling resource drift: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
// output.ResourceChanges
|
||||
err = output.marshalResourceChanges(p.Changes, schemas)
|
||||
if p.Changes != nil {
|
||||
output.ResourceChanges, err = output.marshalResourceChanges(p.Changes.Resources, schemas)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error in marshalResourceChanges: %s", err)
|
||||
return nil, fmt.Errorf("error in marshaling resource changes: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
// output.OutputChanges
|
||||
|
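With the change above, resource_drift in the JSON plan carries only drifted instances whose change is not a pure move, except in refresh-only plans where move-only entries are kept. A hypothetical consumer that lists drifted addresses might look like the sketch below; the structs model only the fields it needs, and the address value is invented:

package main

import (
	"encoding/json"
	"fmt"
)

type driftEntry struct {
	Address string `json:"address"`
	Change  struct {
		Actions []string `json:"actions"`
	} `json:"change"`
}

type planDoc struct {
	ResourceDrift []driftEntry `json:"resource_drift"`
}

func main() {
	raw := []byte(`{"resource_drift":[{"address":"aws_instance.example","change":{"actions":["update"]}}]}`)
	var p planDoc
	if err := json.Unmarshal(raw, &p); err != nil {
		panic(err)
	}
	for _, d := range p.ResourceDrift {
		fmt.Printf("%s changed outside of Terraform (%v)\n", d.Address, d.Change.Actions)
	}
}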
@ -188,152 +206,16 @@ func (p *plan) marshalPlanVariables(vars map[string]plans.DynamicValue, schemas
|
|||
return nil
|
||||
}
|
||||
|
||||
func (p *plan) marshalResourceDrift(oldState, newState *states.State, schemas *terraform.Schemas) error {
|
||||
// Our goal here is to build a data structure of the same shape as we use
|
||||
// to describe planned resource changes, but in this case we'll be
|
||||
// taking the old and new values from different state snapshots rather
|
||||
// than from a real "Changes" object.
|
||||
//
|
||||
// In doing this we make an assumption that drift detection can only
|
||||
// ever show objects as updated or removed, and will never show anything
|
||||
// as created because we only refresh objects we were already tracking
|
||||
// after the previous run. This means we can use oldState as our baseline
|
||||
// for what resource instances we might include, and check for each item
|
||||
// whether it's present in newState. If we ever have some mechanism to
|
||||
// detect "additive drift" later then we'll need to take a different
|
||||
// approach here, but we have no plans for that at the time of writing.
|
||||
//
|
||||
// We also assume that both states have had all managed resource objects
|
||||
// upgraded to match the current schemas given in schemas, so we shouldn't
|
||||
// need to contend with oldState having old-shaped objects even if the
|
||||
// user changed provider versions since the last run.
|
||||
func (p *plan) marshalResourceChanges(resources []*plans.ResourceInstanceChangeSrc, schemas *terraform.Schemas) ([]resourceChange, error) {
|
||||
var ret []resourceChange
|
||||
|
||||
if newState.ManagedResourcesEqual(oldState) {
|
||||
// Nothing to do, because we only detect and report drift for managed
|
||||
// resource instances.
|
||||
return nil
|
||||
}
|
||||
for _, ms := range oldState.Modules {
|
||||
for _, rs := range ms.Resources {
|
||||
if rs.Addr.Resource.Mode != addrs.ManagedResourceMode {
|
||||
// Drift reporting is only for managed resources
|
||||
continue
|
||||
}
|
||||
|
||||
provider := rs.ProviderConfig.Provider
|
||||
for key, oldIS := range rs.Instances {
|
||||
if oldIS.Current == nil {
|
||||
// Not interested in instances that only have deposed objects
|
||||
continue
|
||||
}
|
||||
addr := rs.Addr.Instance(key)
|
||||
newIS := newState.ResourceInstance(addr)
|
||||
|
||||
schema, _ := schemas.ResourceTypeConfig(
|
||||
provider,
|
||||
addr.Resource.Resource.Mode,
|
||||
addr.Resource.Resource.Type,
|
||||
)
|
||||
if schema == nil {
|
||||
return fmt.Errorf("no schema found for %s (in provider %s)", addr, provider)
|
||||
}
|
||||
ty := schema.ImpliedType()
|
||||
|
||||
oldObj, err := oldIS.Current.Decode(ty)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decode previous run data for %s: %s", addr, err)
|
||||
}
|
||||
|
||||
var newObj *states.ResourceInstanceObject
|
||||
if newIS != nil && newIS.Current != nil {
|
||||
newObj, err = newIS.Current.Decode(ty)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decode refreshed data for %s: %s", addr, err)
|
||||
}
|
||||
}
|
||||
|
||||
var oldVal, newVal cty.Value
|
||||
oldVal = oldObj.Value
|
||||
if newObj != nil {
|
||||
newVal = newObj.Value
|
||||
} else {
|
||||
newVal = cty.NullVal(ty)
|
||||
}
|
||||
|
||||
if oldVal.RawEquals(newVal) {
|
||||
// No drift if the two values are semantically equivalent
|
||||
continue
|
||||
}
|
||||
|
||||
oldSensitive := jsonstate.SensitiveAsBool(oldVal)
|
||||
newSensitive := jsonstate.SensitiveAsBool(newVal)
|
||||
oldVal, _ = oldVal.UnmarkDeep()
|
||||
newVal, _ = newVal.UnmarkDeep()
|
||||
|
||||
var before, after []byte
|
||||
var beforeSensitive, afterSensitive []byte
|
||||
before, err = ctyjson.Marshal(oldVal, oldVal.Type())
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encode previous run data for %s as JSON: %s", addr, err)
|
||||
}
|
||||
after, err = ctyjson.Marshal(newVal, oldVal.Type())
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encode refreshed data for %s as JSON: %s", addr, err)
|
||||
}
|
||||
beforeSensitive, err = ctyjson.Marshal(oldSensitive, oldSensitive.Type())
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encode previous run data sensitivity for %s as JSON: %s", addr, err)
|
||||
}
|
||||
afterSensitive, err = ctyjson.Marshal(newSensitive, newSensitive.Type())
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encode refreshed data sensitivity for %s as JSON: %s", addr, err)
|
||||
}
|
||||
|
||||
// We can only detect updates and deletes as drift.
|
||||
action := plans.Update
|
||||
if newVal.IsNull() {
|
||||
action = plans.Delete
|
||||
}
|
||||
|
||||
change := resourceChange{
|
||||
Address: addr.String(),
|
||||
ModuleAddress: addr.Module.String(),
|
||||
Mode: "managed", // drift reporting is only for managed resources
|
||||
Name: addr.Resource.Resource.Name,
|
||||
Type: addr.Resource.Resource.Type,
|
||||
ProviderName: provider.String(),
|
||||
|
||||
Change: change{
|
||||
Actions: actionString(action.String()),
|
||||
Before: json.RawMessage(before),
|
||||
BeforeSensitive: json.RawMessage(beforeSensitive),
|
||||
After: json.RawMessage(after),
|
||||
AfterSensitive: json.RawMessage(afterSensitive),
|
||||
// AfterUnknown is never populated here because
|
||||
// values in a state are always fully known.
|
||||
},
|
||||
}
|
||||
p.ResourceDrift = append(p.ResourceDrift, change)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sort.Slice(p.ResourceChanges, func(i, j int) bool {
|
||||
return p.ResourceChanges[i].Address < p.ResourceChanges[j].Address
|
||||
})
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform.Schemas) error {
|
||||
if changes == nil {
|
||||
// Nothing to do!
|
||||
return nil
|
||||
}
|
||||
for _, rc := range changes.Resources {
|
||||
for _, rc := range resources {
|
||||
var r resourceChange
|
||||
addr := rc.Addr
|
||||
r.Address = addr.String()
|
||||
if !addr.Equal(rc.PrevRunAddr) {
|
||||
r.PreviousAddress = rc.PrevRunAddr.String()
|
||||
}
|
||||
|
||||
dataSource := addr.Resource.Resource.Mode == addrs.DataResourceMode
|
||||
// We create "delete" actions for data resources so we can clean up
|
||||
|
@ -349,12 +231,12 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
addr.Resource.Resource.Type,
|
||||
)
|
||||
if schema == nil {
|
||||
return fmt.Errorf("no schema found for %s (in provider %s)", r.Address, rc.ProviderAddr.Provider)
|
||||
return nil, fmt.Errorf("no schema found for %s (in provider %s)", r.Address, rc.ProviderAddr.Provider)
|
||||
}
|
||||
|
||||
changeV, err := rc.Decode(schema.ImpliedType())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
// We drop the marks from the change, as decoding is only an
|
||||
// intermediate step to re-encode the values as json
|
||||
|
@ -368,7 +250,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
if changeV.Before != cty.NilVal {
|
||||
before, err = ctyjson.Marshal(changeV.Before, changeV.Before.Type())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
marks := rc.BeforeValMarks
|
||||
if schema.ContainsSensitive() {
|
||||
|
@ -377,14 +259,14 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
bs := jsonstate.SensitiveAsBool(changeV.Before.MarkWithPaths(marks))
|
||||
beforeSensitive, err = ctyjson.Marshal(bs, bs.Type())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if changeV.After != cty.NilVal {
|
||||
if changeV.After.IsWhollyKnown() {
|
||||
after, err = ctyjson.Marshal(changeV.After, changeV.After.Type())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
afterUnknown = cty.EmptyObjectVal
|
||||
} else {
|
||||
|
@ -394,7 +276,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
} else {
|
||||
after, err = ctyjson.Marshal(filteredAfter, filteredAfter.Type())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
afterUnknown = unknownAsBool(changeV.After)
|
||||
|
@ -406,17 +288,17 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
as := jsonstate.SensitiveAsBool(changeV.After.MarkWithPaths(marks))
|
||||
afterSensitive, err = ctyjson.Marshal(as, as.Type())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
a, err := ctyjson.Marshal(afterUnknown, afterUnknown.Type())
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
replacePaths, err := encodePaths(rc.RequiredReplace)
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
|
||||
r.Change = change{
|
||||
|
@ -444,7 +326,7 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
case addrs.DataResourceMode:
|
||||
r.Mode = "data"
|
||||
default:
|
||||
return fmt.Errorf("resource %s has an unsupported mode %s", r.Address, addr.Resource.Resource.Mode.String())
|
||||
return nil, fmt.Errorf("resource %s has an unsupported mode %s", r.Address, addr.Resource.Resource.Mode.String())
|
||||
}
|
||||
r.ModuleAddress = addr.Module.String()
|
||||
r.Name = addr.Resource.Resource.Name
|
||||
|
@ -460,19 +342,29 @@ func (p *plan) marshalResourceChanges(changes *plans.Changes, schemas *terraform
|
|||
r.ActionReason = "replace_because_tainted"
|
||||
case plans.ResourceInstanceReplaceByRequest:
|
||||
r.ActionReason = "replace_by_request"
|
||||
case plans.ResourceInstanceDeleteBecauseNoResourceConfig:
|
||||
r.ActionReason = "delete_because_no_resource_config"
|
||||
case plans.ResourceInstanceDeleteBecauseWrongRepetition:
|
||||
r.ActionReason = "delete_because_wrong_repetition"
|
||||
case plans.ResourceInstanceDeleteBecauseCountIndex:
|
||||
r.ActionReason = "delete_because_count_index"
|
||||
case plans.ResourceInstanceDeleteBecauseEachKey:
|
||||
r.ActionReason = "delete_because_each_key"
|
||||
case plans.ResourceInstanceDeleteBecauseNoModule:
|
||||
r.ActionReason = "delete_because_no_module"
|
||||
default:
|
||||
return fmt.Errorf("resource %s has an unsupported action reason %s", r.Address, rc.ActionReason)
|
||||
return nil, fmt.Errorf("resource %s has an unsupported action reason %s", r.Address, rc.ActionReason)
|
||||
}
|
||||
|
||||
p.ResourceChanges = append(p.ResourceChanges, r)
|
||||
ret = append(ret, r)
|
||||
|
||||
}
|
||||
|
||||
sort.Slice(p.ResourceChanges, func(i, j int) bool {
|
||||
return p.ResourceChanges[i].Address < p.ResourceChanges[j].Address
|
||||
sort.Slice(ret, func(i, j int) bool {
|
||||
return ret[i].Address < ret[j].Address
|
||||
})
|
||||
|
||||
return nil
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
func (p *plan) marshalOutputChanges(changes *plans.Changes) error {
|
||||
|
|
|
@@ -48,6 +48,18 @@ type resourceChange struct {
// Address is the absolute resource address
Address string `json:"address,omitempty"`

// PreviousAddress is the absolute address that this resource instance had
// at the conclusion of a previous run.
//
// This will typically be omitted, but will be present if the previous
// resource instance was subject to a "moved" block that we handled in the
// process of creating this plan.
//
// Note that this behavior diverges from the internal plan data structure,
// where the previous address is set equal to the current address in the
// common case, rather than being omitted.
PreviousAddress string `json:"previous_address,omitempty"`

// ModuleAddress is the module portion of the above address. Omitted if the
// instance is in the root module.
ModuleAddress string `json:"module_address,omitempty"`
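For a resource that a moved block renamed during this plan, the corresponding resource_changes entry carries both address fields. The fragment below is purely illustrative (the addresses and resource type are invented), shown here as a Go raw string for reference:

package main

import "fmt"

func main() {
	// Illustrative only: a resource renamed by a `moved` block from
	// aws_instance.old_name to aws_instance.new_name in this plan.
	const example = `{
  "address": "aws_instance.new_name",
  "previous_address": "aws_instance.old_name",
  "type": "aws_instance",
  "name": "new_name"
}`
	fmt.Println(example)
}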
@ -274,28 +274,3 @@ func marshalPlanModules(
|
|||
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
// marshalSensitiveValues returns a map of sensitive attributes, with the value
|
||||
// set to true. It returns nil if the value is nil or if there are no sensitive
|
||||
// vals.
|
||||
func marshalSensitiveValues(value cty.Value) map[string]bool {
|
||||
if value.RawEquals(cty.NilVal) || value.IsNull() {
|
||||
return nil
|
||||
}
|
||||
|
||||
ret := make(map[string]bool)
|
||||
|
||||
it := value.ElementIterator()
|
||||
for it.Next() {
|
||||
k, v := it.Element()
|
||||
s := jsonstate.SensitiveAsBool(v)
|
||||
if !s.RawEquals(cty.False) {
|
||||
ret[k.AsString()] = true
|
||||
}
|
||||
}
|
||||
|
||||
if len(ret) == 0 {
|
||||
return nil
|
||||
}
|
||||
return ret
|
||||
}
@@ -9,7 +9,7 @@ import (
// FormatVersion represents the version of the json format and will be
// incremented for any change to this format that requires changes to a
// consuming parser.
const FormatVersion = "0.2"
const FormatVersion = "1.0"

// providers is the top-level object returned when exporting provider schemas
type providers struct {
@@ -18,7 +18,7 @@ import (
// FormatVersion represents the version of the json format and will be
// incremented for any change to this format that requires changes to a
// consuming parser.
const FormatVersion = "0.2"
const FormatVersion = "1.0"

// state is the top-level representation of the json format of a terraform
// state.
@@ -311,13 +311,9 @@ func (c *LoginCommand) outputDefaultTFELoginSuccess(dispHostname string) {
}

func (c *LoginCommand) outputDefaultTFCLoginSuccess() {
c.Ui.Output(
fmt.Sprintf(
c.Colorize().Color(strings.TrimSpace(`
c.Ui.Output(c.Colorize().Color(strings.TrimSpace(`
[green][bold]Success![reset] [bold]Logged in to Terraform Cloud[reset]
`)),
) + "\n",
)
` + "\n")))
}

func (c *LoginCommand) logMOTDError(err error) {
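The [green][bold]...[reset] markers in the success message are colorstring directives expanded by c.Colorize(). A standalone sketch using the same github.com/mitchellh/colorstring package; the message text mirrors the one above, and using the package-level Color function instead of Meta's configured Colorize is an assumption made only to keep the snippet self-contained:

package main

import (
	"fmt"
	"strings"

	"github.com/mitchellh/colorstring"
)

func main() {
	msg := `
[green][bold]Success![reset] [bold]Logged in to Terraform Cloud[reset]
`
	// colorstring.Color replaces [green], [bold], [reset] with ANSI codes;
	// TrimSpace mirrors how the command trims the surrounding newlines.
	fmt.Println(colorstring.Color(strings.TrimSpace(msg)) + "\n")
}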
@ -15,9 +15,10 @@ import (
|
|||
"time"
|
||||
|
||||
plugin "github.com/hashicorp/go-plugin"
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/hcl/v2/hclsyntax"
|
||||
"github.com/hashicorp/terraform-svchost/disco"
|
||||
"github.com/mitchellh/cli"
|
||||
"github.com/mitchellh/colorstring"
|
||||
|
||||
"github.com/hashicorp/terraform/internal/addrs"
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
"github.com/hashicorp/terraform/internal/backend/local"
|
||||
|
@ -25,17 +26,15 @@ import (
|
|||
"github.com/hashicorp/terraform/internal/command/format"
|
||||
"github.com/hashicorp/terraform/internal/command/views"
|
||||
"github.com/hashicorp/terraform/internal/command/webbrowser"
|
||||
"github.com/hashicorp/terraform/internal/command/workdir"
|
||||
"github.com/hashicorp/terraform/internal/configs/configload"
|
||||
"github.com/hashicorp/terraform/internal/getproviders"
|
||||
legacy "github.com/hashicorp/terraform/internal/legacy/terraform"
|
||||
"github.com/hashicorp/terraform/internal/providers"
|
||||
"github.com/hashicorp/terraform/internal/provisioners"
|
||||
"github.com/hashicorp/terraform/internal/terminal"
|
||||
"github.com/hashicorp/terraform/internal/terraform"
|
||||
"github.com/hashicorp/terraform/internal/tfdiags"
|
||||
"github.com/mitchellh/cli"
|
||||
"github.com/mitchellh/colorstring"
|
||||
|
||||
legacy "github.com/hashicorp/terraform/internal/legacy/terraform"
|
||||
)
|
||||
|
||||
// Meta are the meta-options that are available on all or most commands.
|
||||
|
@ -44,16 +43,19 @@ type Meta struct {
|
|||
// command with a Meta field. These are expected to be set externally
|
||||
// (not from within the command itself).
|
||||
|
||||
// OriginalWorkingDir, if set, is the actual working directory where
|
||||
// Terraform was run from. This might not be the _actual_ current working
|
||||
// directory, because users can add the -chdir=... option to the beginning
|
||||
// of their command line to ask Terraform to switch.
|
||||
// WorkingDir is an object representing the "working directory" where we're
|
||||
// running commands. In the normal case this literally refers to the
|
||||
// working directory of the Terraform process, though this can take on
|
||||
// a more symbolic meaning when the user has overridden default behavior
|
||||
// to specify a different working directory or to override the special
|
||||
// data directory where we'll persist settings that must survive between
|
||||
// consecutive commands.
|
||||
//
|
||||
// Most things should just use the current working directory in order to
|
||||
// respect the user's override, but we retain this for exceptional
|
||||
// situations where we need to refer back to the original working directory
|
||||
// for some reason.
|
||||
OriginalWorkingDir string
|
||||
// We're currently gradually migrating the various bits of state that
|
||||
// must persist between consecutive commands in a session to be encapsulated
|
||||
// in here, but we're not there yet and so there are also some methods on
|
||||
// Meta which directly read and modify paths inside the data directory.
|
||||
WorkingDir *workdir.Dir
|
||||
|
||||
// Streams tracks the raw Stdout, Stderr, and Stdin handles along with
|
||||
// some basic metadata about them, such as whether each is connected to
|
||||
|
@ -104,11 +106,6 @@ type Meta struct {
|
|||
// provider version can be obtained.
|
||||
ProviderSource getproviders.Source
|
||||
|
||||
// OverrideDataDir, if non-empty, overrides the return value of the
|
||||
// DataDir method for situations where the local .terraform/ directory
|
||||
// is not suitable, e.g. because of a read-only filesystem.
|
||||
OverrideDataDir string
|
||||
|
||||
// BrowserLauncher is used by commands that need to open a URL in a
|
||||
// web browser.
|
||||
BrowserLauncher webbrowser.Launcher
|
||||
|
@ -137,10 +134,6 @@ type Meta struct {
|
|||
// Protected: commands can set these
|
||||
//----------------------------------------------------------
|
||||
|
||||
// Modify the data directory location. This should be accessed through the
|
||||
// DataDir method.
|
||||
dataDir string
|
||||
|
||||
// pluginPath is a user defined set of directories to look for plugins.
|
||||
// This is set during init with the `-plugin-dir` flag, saved to a file in
|
||||
// the data directory.
|
||||
|
@ -267,13 +260,25 @@ func (m *Meta) Colorize() *colorstring.Colorize {
|
|||
}
|
||||
}
|
||||
|
||||
// fixupMissingWorkingDir is a compensation for various existing tests which
|
||||
// directly construct incomplete "Meta" objects. Specifically, it deals with
|
||||
// a test that omits a WorkingDir value by constructing one just-in-time.
|
||||
//
|
||||
// We shouldn't ever rely on this in any real codepath, because it doesn't
|
||||
// take into account the various ways users can override our default
|
||||
// directory selection behaviors.
|
||||
func (m *Meta) fixupMissingWorkingDir() {
|
||||
if m.WorkingDir == nil {
|
||||
log.Printf("[WARN] This 'Meta' object is missing its WorkingDir, so we're creating a default one suitable only for tests")
|
||||
m.WorkingDir = workdir.NewDir(".")
|
||||
}
|
||||
}
|
||||
|
||||
// DataDir returns the directory where local data will be stored.
|
||||
// Defaults to DefaultDataDir in the current working directory.
|
||||
func (m *Meta) DataDir() string {
|
||||
if m.OverrideDataDir != "" {
|
||||
return m.OverrideDataDir
|
||||
}
|
||||
return DefaultDataDir
|
||||
m.fixupMissingWorkingDir()
|
||||
return m.WorkingDir.DataDir()
|
||||
}
|
||||
|
||||
const (
|
||||
|
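Meta now carries a workdir.Dir instead of separate OriginalWorkingDir and OverrideDataDir fields. The sketch below shows the minimal usage pattern based only on the calls visible in this diff (NewDir, DataDir, OriginalWorkingDir); it would need to live inside this repository because workdir is an internal package, and any further methods for overriding the data directory are not assumed here.

package main

import (
	"fmt"

	"github.com/hashicorp/terraform/internal/command/workdir"
)

func main() {
	// Construct a Dir for the process working directory and read derived
	// paths from it, rather than keeping separate string fields on Meta.
	wd := workdir.NewDir(".")
	fmt.Println("data dir:", wd.DataDir())
	fmt.Println("original working dir:", wd.OriginalWorkingDir())
}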
@ -444,7 +449,6 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) {
|
|||
|
||||
var opts terraform.ContextOpts
|
||||
|
||||
opts.Targets = m.targets
|
||||
opts.UIInput = m.UIInput()
|
||||
opts.Parallelism = m.parallelism
|
||||
|
||||
|
@ -457,52 +461,15 @@ func (m *Meta) contextOpts() (*terraform.ContextOpts, error) {
|
|||
} else {
|
||||
providerFactories, err := m.providerFactories()
|
||||
if err != nil {
|
||||
// providerFactories can fail if the plugin selections file is
|
||||
// invalid in some way, but we don't have any way to report that
|
||||
// from here so we'll just behave as if no providers are available
|
||||
// in that case. However, we will produce a warning in case this
|
||||
// shows up unexpectedly and prompts a bug report.
|
||||
// This situation shouldn't arise commonly in practice because
|
||||
// the selections file is generated programmatically.
|
||||
log.Printf("[WARN] Failed to determine selected providers: %s", err)
|
||||
|
||||
// variable providerFactories may now be incomplete, which could
|
||||
// lead to errors reported downstream from here. providerFactories
|
||||
// tries to populate as many providers as possible even in an
|
||||
// error case, so that operations not using problematic providers
|
||||
// can still succeed.
|
||||
return nil, err
|
||||
}
|
||||
opts.Providers = providerFactories
|
||||
opts.Provisioners = m.provisionerFactories()
|
||||
|
||||
// Read the dependency locks so that they can be verified against the
|
||||
// provider requirements in the configuration
|
||||
lockedDependencies, diags := m.lockedDependencies()
|
||||
|
||||
// If the locks file is invalid, we should fail early rather than
|
||||
// ignore it. A missing locks file will return no error.
|
||||
if diags.HasErrors() {
|
||||
return nil, diags.Err()
|
||||
}
|
||||
opts.LockedDependencies = lockedDependencies
|
||||
|
||||
// If any unmanaged providers or dev overrides are enabled, they must
|
||||
// be listed in the context so that they can be ignored when verifying
|
||||
// the locks against the configuration
|
||||
opts.ProvidersInDevelopment = make(map[addrs.Provider]struct{})
|
||||
for provider := range m.UnmanagedProviders {
|
||||
opts.ProvidersInDevelopment[provider] = struct{}{}
|
||||
}
|
||||
for provider := range m.ProviderDevOverrides {
|
||||
opts.ProvidersInDevelopment[provider] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
opts.ProviderSHA256s = m.providerPluginsLock().Read()
|
||||
|
||||
opts.Meta = &terraform.ContextMeta{
|
||||
Env: workspace,
|
||||
OriginalWorkingDir: m.OriginalWorkingDir,
|
||||
OriginalWorkingDir: m.WorkingDir.OriginalWorkingDir(),
|
||||
}
|
||||
|
||||
return &opts, nil
|
||||
|
@ -555,43 +522,6 @@ func (m *Meta) extendedFlagSet(n string) *flag.FlagSet {
|
|||
return f
|
||||
}
|
||||
|
||||
// parseTargetFlags must be called for any commands supporting -target
|
||||
// arguments. This method attempts to parse each -target flag into an
|
||||
// addrs.Target, storing in the Meta.targets slice.
|
||||
//
|
||||
// If any flags cannot be parsed, we rewrap the first error diagnostic with a
|
||||
// custom title to clarify the source of the error. The normal approach of
|
||||
// directly returning the diags from HCL or the addrs package results in
|
||||
// confusing incorrect "source" results when presented.
|
||||
func (m *Meta) parseTargetFlags() tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
m.targets = nil
|
||||
for _, tf := range m.targetFlags {
|
||||
traversal, syntaxDiags := hclsyntax.ParseTraversalAbs([]byte(tf), "", hcl.Pos{Line: 1, Column: 1})
|
||||
if syntaxDiags.HasErrors() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
fmt.Sprintf("Invalid target %q", tf),
|
||||
syntaxDiags[0].Detail,
|
||||
))
|
||||
continue
|
||||
}
|
||||
|
||||
target, targetDiags := addrs.ParseTarget(traversal)
|
||||
if targetDiags.HasErrors() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
fmt.Sprintf("Invalid target %q", tf),
|
||||
targetDiags[0].Description().Detail,
|
||||
))
|
||||
continue
|
||||
}
|
||||
|
||||
m.targets = append(m.targets, target.Subject)
|
||||
}
|
||||
return diags
|
||||
}
|
||||
|
||||
// process will process any -no-color entries out of the arguments. This
|
||||
// will potentially modify the args in-place. It will return the resulting
|
||||
// slice, and update the Meta and Ui.
|
||||
|
|
|
@ -4,15 +4,16 @@ package command
|
|||
// exported and private.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"log"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/errwrap"
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/hcl/v2/hcldec"
|
||||
"github.com/hashicorp/terraform/internal/backend"
|
||||
|
@ -106,7 +107,32 @@ func (m *Meta) Backend(opts *BackendOpts) (backend.Enhanced, tfdiags.Diagnostics
|
|||
// Set up the CLI opts we pass into backends that support it.
|
||||
cliOpts, err := m.backendCLIOpts()
|
||||
if err != nil {
|
||||
if errs := providerPluginErrors(nil); errors.As(err, &errs) {
|
||||
// This is a special type returned by m.providerFactories, which
|
||||
// indicates one or more inconsistencies between the dependency
|
||||
// lock file and the provider plugins actually available in the
|
||||
// local cache directory.
|
||||
var buf bytes.Buffer
|
||||
for addr, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s: %s", addr, err)
|
||||
}
|
||||
suggestion := "To download the plugins required for this configuration, run:\n terraform init"
|
||||
if m.RunningInAutomation {
|
||||
// Don't mention "terraform init" specifically if we're running in an automation wrapper
|
||||
suggestion = "You must install the required plugins before running Terraform operations."
|
||||
}
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Required plugins are not installed",
|
||||
fmt.Sprintf(
|
||||
"The installed provider plugins are not consistent with the packages selected in the dependency lock file:%s\n\nTerraform uses external plugins to integrate with a variety of different infrastructure services. %s",
|
||||
buf.String(), suggestion,
|
||||
),
|
||||
))
|
||||
} else {
|
||||
// All other errors just get generic handling.
|
||||
diags = diags.Append(err)
|
||||
}
|
||||
return nil, diags
|
||||
}
|
||||
cliOpts.Validation = true
|
||||
|
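The errors.As check above works because providerPluginErrors is a named map type that implements error, so a pointer to it is a valid errors.As target even after the error has been wrapped. A self-contained sketch of the same pattern with invented names (pluginErrors, string keys instead of addrs.Provider):

package main

import (
	"errors"
	"fmt"
)

// pluginErrors is a stand-in for the providerPluginErrors type referenced
// above: a map from provider address to the problem found with its plugin.
type pluginErrors map[string]error

func (e pluginErrors) Error() string {
	return fmt.Sprintf("%d provider plugin problems", len(e))
}

func loadPlugins() error {
	errs := pluginErrors{
		"registry.terraform.io/hashicorp/aws": errors.New("checksum mismatch"),
	}
	// Wrapping with %w keeps the typed error reachable via errors.As.
	return fmt.Errorf("failed to prepare backend: %w", errs)
}

func main() {
	err := loadPlugins()
	if errs := pluginErrors(nil); errors.As(err, &errs) {
		for addr, e := range errs {
			fmt.Printf(" - %s: %s\n", addr, e)
		}
	}
}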
@ -197,13 +223,19 @@ func (m *Meta) selectWorkspace(b backend.Backend) error {
|
|||
var list strings.Builder
|
||||
for i, w := range workspaces {
|
||||
if w == workspace {
|
||||
log.Printf("[TRACE] Meta.selectWorkspace: the currently selected workspace is present in the configured backend (%s)", workspace)
|
||||
return nil
|
||||
}
|
||||
fmt.Fprintf(&list, "%d. %s\n", i+1, w)
|
||||
}
|
||||
|
||||
// If the selected workspace doesn't exist, ask the user to select
|
||||
// a workspace from the list of existing workspaces.
|
||||
// If the backend only has a single workspace, select that as the current workspace
|
||||
if len(workspaces) == 1 {
|
||||
log.Printf("[TRACE] Meta.selectWorkspace: automatically selecting the single workspace provided by the backend (%s)", workspaces[0])
|
||||
return m.SetWorkspace(workspaces[0])
|
||||
}
|
||||
|
||||
// Otherwise, ask the user to select a workspace from the list of existing workspaces.
|
||||
v, err := m.UIInput().Input(context.Background(), &terraform.InputOpts{
|
||||
Id: "select-workspace",
|
||||
Query: fmt.Sprintf(
|
||||
|
@ -221,7 +253,9 @@ func (m *Meta) selectWorkspace(b backend.Backend) error {
|
|||
return fmt.Errorf("Failed to select workspace: input not a valid number")
|
||||
}
|
||||
|
||||
return m.SetWorkspace(workspaces[idx-1])
|
||||
workspace = workspaces[idx-1]
|
||||
log.Printf("[TRACE] Meta.selectWorkspace: setting the current workpace according to user selection (%s)", workspace)
|
||||
return m.SetWorkspace(workspace)
|
||||
}
|
||||
|
||||
// BackendForPlan is similar to Backend, but uses backend settings that were
|
||||
|
@ -244,7 +278,7 @@ func (m *Meta) BackendForPlan(settings plans.Backend) (backend.Enhanced, tfdiags
|
|||
schema := b.ConfigSchema()
|
||||
configVal, err := settings.Config.Decode(schema.ImpliedType())
|
||||
if err != nil {
|
||||
diags = diags.Append(errwrap.Wrapf("saved backend configuration is invalid: {{err}}", err))
|
||||
diags = diags.Append(fmt.Errorf("saved backend configuration is invalid: %w", err))
|
||||
return nil, diags
|
||||
}
|
||||
|
||||
|
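The errwrap.Wrapf call above is replaced with fmt.Errorf and the %w verb, which keeps the underlying error reachable through errors.Is and errors.As. A small self-contained illustration of the same wrapping style; the sentinel error and its message are invented:

package main

import (
	"errors"
	"fmt"
)

var errBadConfig = errors.New("unsupported attribute")

func decodeBackendConfig() error {
	// Same shape as: fmt.Errorf("saved backend configuration is invalid: %w", err)
	return fmt.Errorf("saved backend configuration is invalid: %w", errBadConfig)
}

func main() {
	err := decodeBackendConfig()
	fmt.Println(err)                          // saved backend configuration is invalid: unsupported attribute
	fmt.Println(errors.Is(err, errBadConfig)) // true
}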
@ -348,14 +382,24 @@ func (m *Meta) Operation(b backend.Backend) *backend.Operation {
|
|||
stateLocker = clistate.NewLocker(m.stateLockTimeout, view)
|
||||
}
|
||||
|
||||
depLocks, diags := m.lockedDependencies()
|
||||
if diags.HasErrors() {
|
||||
// We can't actually report errors from here, but m.lockedDependencies
|
||||
// should always have been called earlier to prepare the "ContextOpts"
|
||||
// for the backend anyway, so we should never actually get here in
|
||||
// a real situation. If we do get here then the backend will inevitably
|
||||
// fail downstream somewhere if it tries to use the empty depLocks.
|
||||
log.Printf("[WARN] Failed to load dependency locks while preparing backend operation (ignored): %s", diags.Err().Error())
|
||||
}
|
||||
|
||||
return &backend.Operation{
|
||||
PlanOutBackend: planOutBackend,
|
||||
Parallelism: m.parallelism,
|
||||
Targets: m.targets,
|
||||
UIIn: m.UIInput(),
|
||||
UIOut: m.Ui,
|
||||
Workspace: workspace,
|
||||
StateLocker: stateLocker,
|
||||
DependencyLocks: depLocks,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -709,10 +753,10 @@ func (m *Meta) backend_c_r_S(c *configs.Backend, cHash int, sMgr *clistate.Local
|
|||
|
||||
// Perform the migration
|
||||
err := m.backendMigrateState(&backendMigrateOpts{
|
||||
OneType: s.Backend.Type,
|
||||
TwoType: "local",
|
||||
One: b,
|
||||
Two: localB,
|
||||
SourceType: s.Backend.Type,
|
||||
DestinationType: "local",
|
||||
Source: b,
|
||||
Destination: localB,
|
||||
})
|
||||
if err != nil {
|
||||
diags = diags.Append(err)
|
||||
|
@ -785,10 +829,10 @@ func (m *Meta) backend_C_r_s(c *configs.Backend, cHash int, sMgr *clistate.Local
|
|||
if len(localStates) > 0 {
|
||||
// Perform the migration
|
||||
err = m.backendMigrateState(&backendMigrateOpts{
|
||||
OneType: "local",
|
||||
TwoType: c.Type,
|
||||
One: localB,
|
||||
Two: b,
|
||||
SourceType: "local",
|
||||
DestinationType: c.Type,
|
||||
Source: localB,
|
||||
Destination: b,
|
||||
})
|
||||
if err != nil {
|
||||
diags = diags.Append(err)
|
||||
|
@ -900,10 +944,10 @@ func (m *Meta) backend_C_r_S_changed(c *configs.Backend, cHash int, sMgr *clista
|
|||
|
||||
// Perform the migration
|
||||
err := m.backendMigrateState(&backendMigrateOpts{
|
||||
OneType: s.Backend.Type,
|
||||
TwoType: c.Type,
|
||||
One: oldB,
|
||||
Two: b,
|
||||
SourceType: s.Backend.Type,
|
||||
DestinationType: c.Type,
|
||||
Source: oldB,
|
||||
Destination: b,
|
||||
})
|
||||
if err != nil {
|
||||
diags = diags.Append(err)
@@ -21,13 +21,13 @@ import (
)

type backendMigrateOpts struct {
OneType, TwoType string
One, Two backend.Backend
SourceType, DestinationType string
Source, Destination backend.Backend

// Fields below are set internally when migrate is called

oneEnv string // source env
twoEnv string // dest env
sourceWorkspace string
destinationWorkspace string
force bool // if true, won't ask for confirmation
}
||||
|
@ -43,45 +43,45 @@ type backendMigrateOpts struct {
|
|||
//
|
||||
// This will attempt to lock both states for the migration.
|
||||
func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error {
|
||||
log.Printf("[TRACE] backendMigrateState: need to migrate from %q to %q backend config", opts.OneType, opts.TwoType)
|
||||
log.Printf("[TRACE] backendMigrateState: need to migrate from %q to %q backend config", opts.SourceType, opts.DestinationType)
|
||||
// We need to check what the named state status is. If we're converting
|
||||
// from multi-state to single-state for example, we need to handle that.
|
||||
var oneSingle, twoSingle bool
|
||||
oneStates, err := opts.One.Workspaces()
|
||||
var sourceSingleState, destinationSingleState bool
|
||||
sourceWorkspaces, err := opts.Source.Workspaces()
|
||||
if err == backend.ErrWorkspacesNotSupported {
|
||||
oneSingle = true
|
||||
sourceSingleState = true
|
||||
err = nil
|
||||
}
|
||||
if err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateLoadStates), opts.OneType, err)
|
||||
errMigrateLoadStates), opts.SourceType, err)
|
||||
}
|
||||
|
||||
twoWorkspaces, err := opts.Two.Workspaces()
|
||||
destinationWorkspaces, err := opts.Destination.Workspaces()
|
||||
if err == backend.ErrWorkspacesNotSupported {
|
||||
twoSingle = true
|
||||
destinationSingleState = true
|
||||
err = nil
|
||||
}
|
||||
if err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateLoadStates), opts.TwoType, err)
|
||||
errMigrateLoadStates), opts.DestinationType, err)
|
||||
}
|
||||
|
||||
// Set up defaults
|
||||
opts.oneEnv = backend.DefaultStateName
|
||||
opts.twoEnv = backend.DefaultStateName
|
||||
opts.sourceWorkspace = backend.DefaultStateName
|
||||
opts.destinationWorkspace = backend.DefaultStateName
|
||||
opts.force = m.forceInitCopy
|
||||
|
||||
// Disregard remote Terraform version for the state source backend. If it's a
|
||||
// Terraform Cloud remote backend, we don't care about the remote version,
|
||||
// as we are migrating away and will not break a remote workspace.
|
||||
m.ignoreRemoteBackendVersionConflict(opts.One)
|
||||
m.ignoreRemoteBackendVersionConflict(opts.Source)
|
||||
|
||||
for _, twoWorkspace := range twoWorkspaces {
|
||||
for _, workspace := range destinationWorkspaces {
|
||||
// Check the remote Terraform version for the state destination backend. If
|
||||
// it's a Terraform Cloud remote backend, we want to ensure that we don't
|
||||
// break the workspace by uploading an incompatible state file.
|
||||
diags := m.remoteBackendVersionCheck(opts.Two, twoWorkspace)
|
||||
diags := m.remoteBackendVersionCheck(opts.Destination, workspace)
|
||||
if diags.HasErrors() {
|
||||
return diags.Err()
|
||||
}
|
||||
|
@ -92,20 +92,20 @@ func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error {
|
|||
switch {
|
||||
// Single-state to single-state. This is the easiest case: we just
|
||||
// copy the default state directly.
|
||||
case oneSingle && twoSingle:
|
||||
case sourceSingleState && destinationSingleState:
|
||||
return m.backendMigrateState_s_s(opts)
|
||||
|
||||
// Single-state to multi-state. This is easy since we just copy
|
||||
// the default state and ignore the rest in the destination.
|
||||
case oneSingle && !twoSingle:
|
||||
case sourceSingleState && !destinationSingleState:
|
||||
return m.backendMigrateState_s_s(opts)
|
||||
|
||||
// Multi-state to single-state. If the source has more than the default
|
||||
// state this is complicated since we have to ask the user what to do.
|
||||
case !oneSingle && twoSingle:
|
||||
case !sourceSingleState && destinationSingleState:
|
||||
// If the source only has one state and it is the default,
|
||||
// treat it as if it doesn't support multi-state.
|
||||
if len(oneStates) == 1 && oneStates[0] == backend.DefaultStateName {
|
||||
if len(sourceWorkspaces) == 1 && sourceWorkspaces[0] == backend.DefaultStateName {
|
||||
return m.backendMigrateState_s_s(opts)
|
||||
}
|
||||
|
||||
|
@ -113,10 +113,10 @@ func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error {
|
|||
|
||||
// Multi-state to multi-state. We merge the states together (migrating
|
||||
// each from the source to the destination one by one).
|
||||
case !oneSingle && !twoSingle:
|
||||
case !sourceSingleState && !destinationSingleState:
|
||||
// If the source only has one state and it is the default,
|
||||
// treat it as if it doesn't support multi-state.
|
||||
if len(oneStates) == 1 && oneStates[0] == backend.DefaultStateName {
|
||||
if len(sourceWorkspaces) == 1 && sourceWorkspaces[0] == backend.DefaultStateName {
|
||||
return m.backendMigrateState_s_s(opts)
|
||||
}
|
||||
|
||||
|
@ -146,39 +146,43 @@ func (m *Meta) backendMigrateState(opts *backendMigrateOpts) error {
|
|||
func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error {
|
||||
log.Print("[TRACE] backendMigrateState: migrating all named workspaces")
|
||||
|
||||
migrate := opts.force
|
||||
if !migrate {
|
||||
var err error
|
||||
// Ask the user if they want to migrate their existing remote state
|
||||
migrate, err := m.confirm(&terraform.InputOpts{
|
||||
migrate, err = m.confirm(&terraform.InputOpts{
|
||||
Id: "backend-migrate-multistate-to-multistate",
|
||||
Query: fmt.Sprintf(
|
||||
"Do you want to migrate all workspaces to %q?",
|
||||
opts.TwoType),
|
||||
opts.DestinationType),
|
||||
Description: fmt.Sprintf(
|
||||
strings.TrimSpace(inputBackendMigrateMultiToMulti),
|
||||
opts.OneType, opts.TwoType),
|
||||
opts.SourceType, opts.DestinationType),
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf(
|
||||
"Error asking for state migration action: %s", err)
|
||||
}
|
||||
}
|
||||
if !migrate {
|
||||
return fmt.Errorf("Migration aborted by user.")
|
||||
}
|
||||
|
||||
// Read all the states
|
||||
oneStates, err := opts.One.Workspaces()
|
||||
sourceWorkspaces, err := opts.Source.Workspaces()
|
||||
if err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateLoadStates), opts.OneType, err)
|
||||
errMigrateLoadStates), opts.SourceType, err)
|
||||
}
|
||||
|
||||
// Sort the states so they're always copied alphabetically
|
||||
sort.Strings(oneStates)
|
||||
sort.Strings(sourceWorkspaces)
|
||||
|
||||
// Go through each and migrate
|
||||
for _, name := range oneStates {
|
||||
for _, name := range sourceWorkspaces {
|
||||
// Copy the same names
|
||||
opts.oneEnv = name
|
||||
opts.twoEnv = name
|
||||
opts.sourceWorkspace = name
|
||||
opts.destinationWorkspace = name
|
||||
|
||||
// Force it, we confirmed above
|
||||
opts.force = true
|
||||
|
@ -186,7 +190,7 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error {
|
|||
// Perform the migration
|
||||
if err := m.backendMigrateState_s_s(opts); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateMulti), name, opts.OneType, opts.TwoType, err)
|
||||
errMigrateMulti), name, opts.SourceType, opts.DestinationType, err)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -195,7 +199,7 @@ func (m *Meta) backendMigrateState_S_S(opts *backendMigrateOpts) error {
|
|||
|
||||
// Multi-state to single state.
|
||||
func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error {
|
||||
log.Printf("[TRACE] backendMigrateState: target backend type %q does not support named workspaces", opts.TwoType)
|
||||
log.Printf("[TRACE] backendMigrateState: destination backend type %q does not support named workspaces", opts.DestinationType)
|
||||
|
||||
currentEnv, err := m.Workspace()
|
||||
if err != nil {
|
||||
|
@ -211,10 +215,10 @@ func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error {
|
|||
Query: fmt.Sprintf(
|
||||
"Destination state %q doesn't support workspaces.\n"+
|
||||
"Do you want to copy only your current workspace?",
|
||||
opts.TwoType),
|
||||
opts.DestinationType),
|
||||
Description: fmt.Sprintf(
|
||||
strings.TrimSpace(inputBackendMigrateMultiToSingle),
|
||||
opts.OneType, opts.TwoType, currentEnv),
|
||||
opts.SourceType, opts.DestinationType, currentEnv),
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf(
|
||||
|
@ -227,7 +231,7 @@ func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error {
|
|||
}
|
||||
|
||||
// Copy the default state
|
||||
opts.oneEnv = currentEnv
|
||||
opts.sourceWorkspace = currentEnv
|
||||
|
||||
// now switch back to the default env so we can access the new backend
|
||||
m.SetWorkspace(backend.DefaultStateName)
|
||||
|
@ -237,46 +241,46 @@ func (m *Meta) backendMigrateState_S_s(opts *backendMigrateOpts) error {
|
|||
|
||||
// Single state to single state, assumed default state name.
|
||||
func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error {
|
||||
log.Printf("[TRACE] backendMigrateState: migrating %q workspace to %q workspace", opts.oneEnv, opts.twoEnv)
|
||||
log.Printf("[TRACE] backendMigrateState: migrating %q workspace to %q workspace", opts.sourceWorkspace, opts.destinationWorkspace)
|
||||
|
||||
stateOne, err := opts.One.StateMgr(opts.oneEnv)
|
||||
sourceState, err := opts.Source.StateMgr(opts.sourceWorkspace)
|
||||
if err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateSingleLoadDefault), opts.OneType, err)
|
||||
errMigrateSingleLoadDefault), opts.SourceType, err)
|
||||
}
|
||||
if err := stateOne.RefreshState(); err != nil {
|
||||
if err := sourceState.RefreshState(); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateSingleLoadDefault), opts.OneType, err)
|
||||
errMigrateSingleLoadDefault), opts.SourceType, err)
|
||||
}
|
||||
|
||||
// Do not migrate workspaces without state.
|
||||
if stateOne.State().Empty() {
|
||||
if sourceState.State().Empty() {
|
||||
log.Print("[TRACE] backendMigrateState: source workspace has empty state, so nothing to migrate")
|
||||
return nil
|
||||
}
|
||||
|
||||
stateTwo, err := opts.Two.StateMgr(opts.twoEnv)
|
||||
destinationState, err := opts.Destination.StateMgr(opts.destinationWorkspace)
|
||||
if err == backend.ErrDefaultWorkspaceNotSupported {
|
||||
// If the backend doesn't support using the default state, we ask the user
|
||||
// for a new name and migrate the default state to the given named state.
|
||||
stateTwo, err = func() (statemgr.Full, error) {
|
||||
log.Print("[TRACE] backendMigrateState: target doesn't support a default workspace, so we must prompt for a new name")
|
||||
destinationState, err = func() (statemgr.Full, error) {
|
||||
log.Print("[TRACE] backendMigrateState: destination doesn't support a default workspace, so we must prompt for a new name")
|
||||
name, err := m.UIInput().Input(context.Background(), &terraform.InputOpts{
|
||||
Id: "new-state-name",
|
||||
Query: fmt.Sprintf(
|
||||
"[reset][bold][yellow]The %q backend configuration only allows "+
|
||||
"named workspaces![reset]",
|
||||
opts.TwoType),
|
||||
opts.DestinationType),
|
||||
Description: strings.TrimSpace(inputBackendNewWorkspaceName),
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Error asking for new state name: %s", err)
|
||||
}
|
||||
|
||||
// Update the name of the target state.
|
||||
opts.twoEnv = name
|
||||
// Update the name of the destination state.
|
||||
opts.destinationWorkspace = name
|
||||
|
||||
stateTwo, err := opts.Two.StateMgr(opts.twoEnv)
|
||||
destinationState, err := opts.Destination.StateMgr(opts.destinationWorkspace)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -287,34 +291,34 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error {
|
|||
// If the currently selected workspace is the default workspace, then set
|
||||
// the named workspace as the new selected workspace.
|
||||
if workspace == backend.DefaultStateName {
|
||||
if err := m.SetWorkspace(opts.twoEnv); err != nil {
|
||||
if err := m.SetWorkspace(opts.destinationWorkspace); err != nil {
|
||||
return nil, fmt.Errorf("Failed to set new workspace: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
return stateTwo, nil
|
||||
return destinationState, nil
|
||||
}()
|
||||
}
|
||||
if err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateSingleLoadDefault), opts.TwoType, err)
|
||||
errMigrateSingleLoadDefault), opts.DestinationType, err)
|
||||
}
|
||||
if err := stateTwo.RefreshState(); err != nil {
|
||||
if err := destinationState.RefreshState(); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateSingleLoadDefault), opts.TwoType, err)
|
||||
errMigrateSingleLoadDefault), opts.DestinationType, err)
|
||||
}
|
||||
|
||||
// Check if we need migration at all.
|
||||
// This is before taking a lock, because they may also correspond to the same lock.
|
||||
one := stateOne.State()
|
||||
two := stateTwo.State()
|
||||
source := sourceState.State()
|
||||
destination := destinationState.State()
|
||||
|
||||
// no reason to migrate if the state is already there
|
||||
if one.Equal(two) {
|
||||
if source.Equal(destination) {
|
||||
// Equal isn't identical; it doesn't check lineage.
|
||||
sm1, _ := stateOne.(statemgr.PersistentMeta)
|
||||
sm2, _ := stateTwo.(statemgr.PersistentMeta)
|
||||
if one != nil && two != nil {
|
||||
sm1, _ := sourceState.(statemgr.PersistentMeta)
|
||||
sm2, _ := destinationState.(statemgr.PersistentMeta)
|
||||
if source != nil && destination != nil {
|
||||
if sm1 == nil || sm2 == nil {
|
||||
log.Print("[TRACE] backendMigrateState: both source and destination workspaces have no state, so no migration is needed")
|
||||
return nil
|
||||
|
@ -332,56 +336,56 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error {
|
|||
view := views.NewStateLocker(arguments.ViewHuman, m.View)
|
||||
locker := clistate.NewLocker(m.stateLockTimeout, view)
|
||||
|
||||
lockerOne := locker.WithContext(lockCtx)
|
||||
if diags := lockerOne.Lock(stateOne, "migration source state"); diags.HasErrors() {
|
||||
lockerSource := locker.WithContext(lockCtx)
|
||||
if diags := lockerSource.Lock(sourceState, "migration source state"); diags.HasErrors() {
|
||||
return diags.Err()
|
||||
}
|
||||
defer lockerOne.Unlock()
|
||||
defer lockerSource.Unlock()
|
||||
|
||||
lockerTwo := locker.WithContext(lockCtx)
|
||||
if diags := lockerTwo.Lock(stateTwo, "migration destination state"); diags.HasErrors() {
|
||||
lockerDestination := locker.WithContext(lockCtx)
|
||||
if diags := lockerDestination.Lock(destinationState, "migration destination state"); diags.HasErrors() {
|
||||
return diags.Err()
|
||||
}
|
||||
defer lockerTwo.Unlock()
|
||||
defer lockerDestination.Unlock()
|
||||
|
||||
// We now own a lock, so double check that we have the version
|
||||
// corresponding to the lock.
|
||||
log.Print("[TRACE] backendMigrateState: refreshing source workspace state")
|
||||
if err := stateOne.RefreshState(); err != nil {
|
||||
if err := sourceState.RefreshState(); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateSingleLoadDefault), opts.OneType, err)
|
||||
errMigrateSingleLoadDefault), opts.SourceType, err)
|
||||
}
|
||||
log.Print("[TRACE] backendMigrateState: refreshing target workspace state")
|
||||
if err := stateTwo.RefreshState(); err != nil {
|
||||
log.Print("[TRACE] backendMigrateState: refreshing destination workspace state")
|
||||
if err := destinationState.RefreshState(); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(
|
||||
errMigrateSingleLoadDefault), opts.OneType, err)
|
||||
errMigrateSingleLoadDefault), opts.SourceType, err)
|
||||
}
|
||||
|
||||
one = stateOne.State()
|
||||
two = stateTwo.State()
|
||||
source = sourceState.State()
|
||||
destination = destinationState.State()
|
||||
}
|
||||
|
||||
var confirmFunc func(statemgr.Full, statemgr.Full, *backendMigrateOpts) (bool, error)
|
||||
switch {
|
||||
// No migration necessary
|
||||
case one.Empty() && two.Empty():
|
||||
case source.Empty() && destination.Empty():
|
||||
log.Print("[TRACE] backendMigrateState: both source and destination workspaces have empty state, so no migration is required")
|
||||
return nil
|
||||
|
||||
// No migration necessary if we're inheriting state.
|
||||
case one.Empty() && !two.Empty():
|
||||
case source.Empty() && !destination.Empty():
|
||||
log.Print("[TRACE] backendMigrateState: source workspace has empty state, so no migration is required")
|
||||
return nil
|
||||
|
||||
// We have existing state moving into no state. Ask the user if
|
||||
// they'd like to do this.
|
||||
case !one.Empty() && two.Empty():
|
||||
log.Print("[TRACE] backendMigrateState: target workspace has empty state, so might copy source workspace state")
|
||||
case !source.Empty() && destination.Empty():
|
||||
log.Print("[TRACE] backendMigrateState: destination workspace has empty state, so might copy source workspace state")
|
||||
confirmFunc = m.backendMigrateEmptyConfirm
|
||||
|
||||
// Both states are non-empty, meaning we need to determine which
|
||||
// state should be used and update accordingly.
|
||||
case !one.Empty() && !two.Empty():
|
||||
case !source.Empty() && !destination.Empty():
|
||||
log.Print("[TRACE] backendMigrateState: both source and destination workspaces have states, so might overwrite destination with source")
|
||||
confirmFunc = m.backendMigrateNonEmptyConfirm
|
||||
}
|
||||
|
@ -398,7 +402,7 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error {
|
|||
}
|
||||
|
||||
// Confirm with the user whether we want to copy state over
|
||||
confirm, err := confirmFunc(stateOne, stateTwo, opts)
|
||||
confirm, err := confirmFunc(sourceState, destinationState, opts)
|
||||
if err != nil {
|
||||
log.Print("[TRACE] backendMigrateState: error reading input, so aborting migration")
|
||||
return err
|
||||
|
@ -413,36 +417,36 @@ func (m *Meta) backendMigrateState_s_s(opts *backendMigrateOpts) error {
|
|||
// includes preserving any lineage/serial information where possible, if
|
||||
// both managers support such metadata.
|
||||
log.Print("[TRACE] backendMigrateState: migration confirmed, so migrating")
|
||||
if err := statemgr.Migrate(stateTwo, stateOne); err != nil {
|
||||
if err := statemgr.Migrate(destinationState, sourceState); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(errBackendStateCopy),
|
||||
opts.OneType, opts.TwoType, err)
|
||||
opts.SourceType, opts.DestinationType, err)
|
||||
}
|
||||
if err := stateTwo.PersistState(); err != nil {
|
||||
if err := destinationState.PersistState(); err != nil {
|
||||
return fmt.Errorf(strings.TrimSpace(errBackendStateCopy),
|
||||
opts.OneType, opts.TwoType, err)
|
||||
opts.SourceType, opts.DestinationType, err)
|
||||
}
|
||||
|
||||
// And we're done.
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Meta) backendMigrateEmptyConfirm(one, two statemgr.Full, opts *backendMigrateOpts) (bool, error) {
|
||||
func (m *Meta) backendMigrateEmptyConfirm(source, destination statemgr.Full, opts *backendMigrateOpts) (bool, error) {
|
||||
inputOpts := &terraform.InputOpts{
|
||||
Id: "backend-migrate-copy-to-empty",
|
||||
Query: "Do you want to copy existing state to the new backend?",
|
||||
Description: fmt.Sprintf(
|
||||
strings.TrimSpace(inputBackendMigrateEmpty),
|
||||
opts.OneType, opts.TwoType),
|
||||
opts.SourceType, opts.DestinationType),
|
||||
}
|
||||
|
||||
return m.confirm(inputOpts)
|
||||
}
|
||||
|
||||
func (m *Meta) backendMigrateNonEmptyConfirm(
|
||||
stateOne, stateTwo statemgr.Full, opts *backendMigrateOpts) (bool, error) {
|
||||
sourceState, destinationState statemgr.Full, opts *backendMigrateOpts) (bool, error) {
|
||||
// We need to grab both states so we can write them to a file
|
||||
one := stateOne.State()
|
||||
two := stateTwo.State()
|
||||
source := sourceState.State()
|
||||
destination := destinationState.State()
|
||||
|
||||
// Save both to a temporary
|
||||
td, err := ioutil.TempDir("", "terraform")
|
||||
|
@ -458,12 +462,12 @@ func (m *Meta) backendMigrateNonEmptyConfirm(
|
|||
}
|
||||
|
||||
// Write the states
|
||||
onePath := filepath.Join(td, fmt.Sprintf("1-%s.tfstate", opts.OneType))
|
||||
twoPath := filepath.Join(td, fmt.Sprintf("2-%s.tfstate", opts.TwoType))
|
||||
if err := saveHelper(opts.OneType, onePath, one); err != nil {
|
||||
sourcePath := filepath.Join(td, fmt.Sprintf("1-%s.tfstate", opts.SourceType))
|
||||
destinationPath := filepath.Join(td, fmt.Sprintf("2-%s.tfstate", opts.DestinationType))
|
||||
if err := saveHelper(opts.SourceType, sourcePath, source); err != nil {
|
||||
return false, fmt.Errorf("Error saving temporary state: %s", err)
|
||||
}
|
||||
if err := saveHelper(opts.TwoType, twoPath, two); err != nil {
|
||||
if err := saveHelper(opts.DestinationType, destinationPath, destination); err != nil {
|
||||
return false, fmt.Errorf("Error saving temporary state: %s", err)
|
||||
}
|
||||
|
||||
|
@ -473,7 +477,7 @@ func (m *Meta) backendMigrateNonEmptyConfirm(
|
|||
Query: "Do you want to copy existing state to the new backend?",
|
||||
Description: fmt.Sprintf(
|
||||
strings.TrimSpace(inputBackendMigrateNonEmpty),
|
||||
opts.OneType, opts.TwoType, onePath, twoPath),
|
||||
opts.SourceType, opts.DestinationType, sourcePath, destinationPath),
|
||||
}
|
||||
|
||||
// Confirm with the user that the copy should occur
|
||||
|
@ -549,7 +553,7 @@ const inputBackendMigrateMultiToSingle = `
|
|||
The existing %[1]q backend supports workspaces and you currently are
|
||||
using more than one. The newly configured %[2]q backend doesn't support
|
||||
workspaces. If you continue, Terraform will copy your current workspace %[3]q
|
||||
to the default workspace in the target backend. Your existing workspaces in the
|
||||
to the default workspace in the new backend. Your existing workspaces in the
|
||||
source backend won't be modified. If you want to switch workspaces, back them
|
||||
up, or cancel altogether, answer "no" and Terraform will abort.
|
||||
`
|
||||
|
|
|
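The non-empty confirmation path above writes both candidate states to a temporary directory so the operator can inspect them before approving the copy. A minimal, self-contained sketch of that pattern follows; the toySnapshot type and the file names are invented for illustration and are not Terraform's statemgr API.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// toySnapshot stands in for a state snapshot; the real code persists full
// Terraform state via the statemgr package.
type toySnapshot struct {
	Serial  int               `json:"serial"`
	Outputs map[string]string `json:"outputs"`
}

// writeForReview saves the source and destination snapshots side by side so
// a human can diff them before approving an overwrite.
func writeForReview(source, destination toySnapshot) (string, string, error) {
	td, err := os.MkdirTemp("", "migrate-review")
	if err != nil {
		return "", "", err
	}
	sourcePath := filepath.Join(td, "1-source.tfstate")
	destinationPath := filepath.Join(td, "2-destination.tfstate")
	for path, snap := range map[string]toySnapshot{sourcePath: source, destinationPath: destination} {
		js, err := json.MarshalIndent(snap, "", "  ")
		if err != nil {
			return "", "", err
		}
		if err := os.WriteFile(path, js, 0644); err != nil {
			return "", "", err
		}
	}
	return sourcePath, destinationPath, nil
}

func main() {
	src := toySnapshot{Serial: 2, Outputs: map[string]string{"foo": "bar"}}
	dst := toySnapshot{Serial: 5, Outputs: map[string]string{"foo": "baz"}}
	a, b, err := writeForReview(src, dst)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("review these before confirming the copy:", a, b)
}
```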
@@ -789,6 +789,84 @@ func TestMetaBackend_reconfigureChange(t *testing.T) {
	}
}

+// Initializing a backend which supports workspaces and does *not* have
+// the currently selected workspace should prompt the user with a list of
+// workspaces to choose from to select a valid one, if more than one workspace
+// is available.
+func TestMetaBackend_initSelectedWorkspaceDoesNotExist(t *testing.T) {
+	// Create a temporary working directory that is empty
+	td := tempDir(t)
+	testCopyDir(t, testFixturePath("init-backend-selected-workspace-doesnt-exist-multi"), td)
+	defer os.RemoveAll(td)
+	defer testChdir(t, td)()
+
+	// Setup the meta
+	m := testMetaBackend(t, nil)
+
+	defer testInputMap(t, map[string]string{
+		"select-workspace": "2",
+	})()
+
+	// Get the backend
+	_, diags := m.Backend(&BackendOpts{Init: true})
+	if diags.HasErrors() {
+		t.Fatal(diags.Err())
+	}
+
+	expected := "foo"
+	actual, err := m.Workspace()
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	if actual != expected {
+		t.Fatalf("expected selected workspace to be %q, but was %q", expected, actual)
+	}
+}
+
+// Initializing a backend which supports workspaces and does *not* have the
+// currently selected workspace - and which only has a single workspace - should
+// automatically select that single workspace.
+func TestMetaBackend_initSelectedWorkspaceDoesNotExistAutoSelect(t *testing.T) {
+	// Create a temporary working directory that is empty
+	td := tempDir(t)
+	testCopyDir(t, testFixturePath("init-backend-selected-workspace-doesnt-exist-single"), td)
+	defer os.RemoveAll(td)
+	defer testChdir(t, td)()
+
+	// Setup the meta
+	m := testMetaBackend(t, nil)
+
+	// this should not ask for input
+	m.input = false
+
+	// Assert test precondition: The current selected workspace is "bar"
+	previousName, err := m.Workspace()
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	if previousName != "bar" {
+		t.Fatalf("expected test fixture to start with 'bar' as the current selected workspace")
+	}
+
+	// Get the backend
+	_, diags := m.Backend(&BackendOpts{Init: true})
+	if diags.HasErrors() {
+		t.Fatal(diags.Err())
+	}
+
+	expected := "default"
+	actual, err := m.Workspace()
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	if actual != expected {
+		t.Fatalf("expected selected workspace to be %q, but was %q", expected, actual)
+	}
+}
+
// Changing a configured backend, copying state
func TestMetaBackend_configuredChangeCopy(t *testing.T) {
	// Create a temporary working directory that is empty

@@ -1267,7 +1345,6 @@ func TestMetaBackend_configuredChangeCopy_multiToNoDefaultWithoutDefault(t *test
	// Ask input
	defer testInputMap(t, map[string]string{
		"backend-migrate-multistate-to-multistate": "yes",
		"select-workspace": "1",
	})()

	// Setup the meta

@@ -1855,17 +1932,18 @@ func TestMetaBackend_configToExtra(t *testing.T) {

// no config; return inmem backend stored in state
func TestBackendFromState(t *testing.T) {
-	td := tempDir(t)
-	testCopyDir(t, testFixturePath("backend-from-state"), td)
-	defer os.RemoveAll(td)
-	defer testChdir(t, td)()
+	wd := tempWorkingDirFixture(t, "backend-from-state")
+	defer testChdir(t, wd.RootModuleDir())()

	// Setup the meta
	m := testMetaBackend(t, nil)
+	m.WorkingDir = wd
	// terraform caches a small "state" file that stores the backend config.
	// This test must override m.dataDir so it loads the "terraform.tfstate" file in the
-	// test directory as the backend config cache
-	m.OverrideDataDir = td
+	// test directory as the backend config cache. This fixture is really a
+	// fixture for the data dir rather than the module dir, so we'll override
+	// them to match just for this test.
+	wd.OverrideDataDir(".")

	stateBackend, diags := m.backendFromState()
	if diags.HasErrors() {
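The two new tests pin down the intended behavior when the previously selected workspace no longer exists: silently select the only workspace if there is exactly one, otherwise prompt the user. A small sketch of that decision rule, using a hypothetical chooseWorkspace helper rather than the actual backend code:

```go
package main

import "fmt"

// chooseWorkspace is a hypothetical helper: it returns the workspace to
// select and whether the caller still needs to prompt the user.
func chooseWorkspace(available []string, selected string) (string, bool) {
	for _, ws := range available {
		if ws == selected {
			return selected, false // current selection still exists
		}
	}
	if len(available) == 1 {
		return available[0], false // only one candidate, select it silently
	}
	return "", true // several candidates, ask the user (as the multi fixture does)
}

func main() {
	fmt.Println(chooseWorkspace([]string{"default"}, "bar"))        // default false
	fmt.Println(chooseWorkspace([]string{"default", "foo"}, "bar")) // "" true
}
```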
@@ -27,27 +27,8 @@ import (
// paths used to load configuration, because we want to prefer recording
// relative paths in source code references within the configuration.
func (m *Meta) normalizePath(path string) string {
-	var err error
-
-	// First we will make it absolute so that we have a consistent place
-	// to start.
-	path, err = filepath.Abs(path)
-	if err != nil {
-		// We'll just accept what we were given, then.
-		return path
-	}
-
-	cwd, err := os.Getwd()
-	if err != nil || !filepath.IsAbs(cwd) {
-		return path
-	}
-
-	ret, err := filepath.Rel(cwd, path)
-	if err != nil {
-		return path
-	}
-
-	return ret
+	m.fixupMissingWorkingDir()
+	return m.WorkingDir.NormalizePath(path)
}

// loadConfig reads a configuration from the given directory, which should
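For reference, the logic that normalizePath used to carry inline (and that now lives behind the WorkingDir abstraction) amounts to: make the path absolute, then express it relative to the current working directory, falling back to the input when that fails. A standalone sketch of that behavior, assuming nothing about the WorkingDir implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// normalize makes path absolute and then relative to the current working
// directory, returning the best value it has whenever a step fails.
func normalize(path string) string {
	abs, err := filepath.Abs(path)
	if err != nil {
		return path
	}
	cwd, err := os.Getwd()
	if err != nil || !filepath.IsAbs(cwd) {
		return abs
	}
	rel, err := filepath.Rel(cwd, abs)
	if err != nil {
		return abs
	}
	return rel
}

func main() {
	fmt.Println(normalize("./modules/../main.tf")) // typically "main.tf"
}
```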
@@ -1,6 +1,7 @@
package command

import (
+	"log"
	"os"

	"github.com/hashicorp/terraform/internal/depsfile"

@@ -48,10 +49,11 @@ func (m *Meta) lockedDependencies() (*depsfile.Locks, tfdiags.Diagnostics) {
	// promising to support two concurrent dependency installation processes.
	_, err := os.Stat(dependencyLockFilename)
	if os.IsNotExist(err) {
-		return depsfile.NewLocks(), nil
+		return m.annotateDependencyLocksWithOverrides(depsfile.NewLocks()), nil
	}

-	return depsfile.LoadLocksFromFile(dependencyLockFilename)
+	ret, diags := depsfile.LoadLocksFromFile(dependencyLockFilename)
+	return m.annotateDependencyLocksWithOverrides(ret), diags
}

// replaceLockedDependencies creates or overwrites the lock file in the

@@ -60,3 +62,32 @@ func (m *Meta) lockedDependencies() (*depsfile.Locks, tfdiags.Diagnostics) {
func (m *Meta) replaceLockedDependencies(new *depsfile.Locks) tfdiags.Diagnostics {
	return depsfile.SaveLocksToFile(new, dependencyLockFilename)
}
+
+// annotateDependencyLocksWithOverrides modifies the given Locks object in-place
+// to track as overridden any provider address that's subject to testing
+// overrides, development overrides, or "unmanaged provider" status.
+//
+// This is just an implementation detail of the lockedDependencies method,
+// not intended for use anywhere else.
+func (m *Meta) annotateDependencyLocksWithOverrides(ret *depsfile.Locks) *depsfile.Locks {
+	if ret == nil {
+		return ret
+	}
+
+	for addr := range m.ProviderDevOverrides {
+		log.Printf("[DEBUG] Provider %s is overridden by dev_overrides", addr)
+		ret.SetProviderOverridden(addr)
+	}
+	for addr := range m.UnmanagedProviders {
+		log.Printf("[DEBUG] Provider %s is overridden as an \"unmanaged provider\"", addr)
+		ret.SetProviderOverridden(addr)
+	}
+	if m.testingOverrides != nil {
+		for addr := range m.testingOverrides.Providers {
+			log.Printf("[DEBUG] Provider %s is overridden in Meta.testingOverrides", addr)
+			ret.SetProviderOverridden(addr)
+		}
+	}
+
+	return ret
+}
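annotateDependencyLocksWithOverrides only relies on the lock set being able to record and report an "overridden" flag per provider. A toy version of that bookkeeping, with plain string keys standing in for addrs.Provider and method names mirroring the calls above:

```go
package main

import "fmt"

// lockSet is a stand-in for a dependency lock collection.
type lockSet struct {
	overridden map[string]bool
}

func newLockSet() *lockSet {
	return &lockSet{overridden: make(map[string]bool)}
}

// SetProviderOverridden records that a provider's lock entry should be
// ignored because a dev override or unmanaged instance takes precedence.
func (l *lockSet) SetProviderOverridden(addr string) {
	l.overridden[addr] = true
}

// ProviderIsOverridden reports whether the provider was marked above.
func (l *lockSet) ProviderIsOverridden(addr string) bool {
	return l.overridden[addr]
}

func main() {
	locks := newLockSet()
	locks.SetProviderOverridden("registry.terraform.io/hashicorp/test")
	fmt.Println(locks.ProviderIsOverridden("registry.terraform.io/hashicorp/test")) // true
}
```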
@@ -1,15 +1,14 @@
package command

import (
	"bytes"
	"errors"
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"github.com/hashicorp/go-multierror"
	plugin "github.com/hashicorp/go-plugin"

	"github.com/hashicorp/terraform/internal/addrs"

@@ -109,7 +108,8 @@ func (m *Meta) providerCustomLocalDirectorySource(dirs []string) getproviders.So
// Only one object returned from this method should be live at any time,
// because objects inside contain caches that must be maintained properly.
func (m *Meta) providerLocalCacheDir() *providercache.Dir {
-	dir := filepath.Join(m.DataDir(), "providers")
+	m.fixupMissingWorkingDir()
+	dir := m.WorkingDir.ProviderLocalCacheDir()
	return providercache.NewDir(dir)
}

@@ -236,7 +236,7 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
	// where appropriate and so that callers can potentially make use of the
	// partial result we return if e.g. they want to enumerate which providers
	// are available, or call into one of the providers that didn't fail.
-	var err error
	errs := make(map[addrs.Provider]error)

	// For the providers from the lock file, we expect them to be already
	// available in the provider cache because "terraform init" should already

@@ -274,7 +274,7 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
	}
	for provider, lock := range providerLocks {
		reportError := func(thisErr error) {
-			err = multierror.Append(err, thisErr)
+			errs[provider] = thisErr
			// We'll populate a provider factory that just echoes our error
			// again if called, which allows us to still report a helpful
			// error even if it gets detected downstream somewhere from the

@@ -282,6 +282,12 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
			factories[provider] = providerFactoryError(thisErr)
		}

+		if locks.ProviderIsOverridden(provider) {
+			// Overridden providers we'll handle with the other separate
+			// loops below, for dev overrides etc.
+			continue
+		}
+
		version := lock.Version()
		cached := cacheDir.ProviderVersion(provider, version)
		if cached == nil {

@@ -313,13 +319,16 @@ func (m *Meta) providerFactories() (map[addrs.Provider]providers.Factory, error)
		factories[provider] = providerFactory(cached)
	}
	for provider, localDir := range devOverrideProviders {
		// It's likely that providers in this map will conflict with providers
		// in providerLocks
		factories[provider] = devOverrideProviderFactory(provider, localDir)
	}
	for provider, reattach := range unmanagedProviders {
		factories[provider] = unmanagedProviderFactory(provider, reattach)
	}

+	var err error
+	if len(errs) > 0 {
+		err = providerPluginErrors(errs)
+	}
	return factories, err
}

@@ -471,3 +480,25 @@ func providerFactoryError(err error) providers.Factory {
		return nil, err
	}
}
+
+// providerPluginErrors is an error implementation we can return from
+// Meta.providerFactories to capture potentially multiple errors about the
+// locally-cached plugins (or lack thereof) for particular external providers.
+//
+// Some functions closer to the UI layer can sniff for this error type in order
+// to return a more helpful error message.
+type providerPluginErrors map[addrs.Provider]error
+
+func (errs providerPluginErrors) Error() string {
+	if len(errs) == 1 {
+		for addr, err := range errs {
+			return fmt.Sprintf("%s: %s", addr, err)
+		}
+	}
+	var buf bytes.Buffer
+	fmt.Fprintf(&buf, "missing or corrupted provider plugins:")
+	for addr, err := range errs {
+		fmt.Fprintf(&buf, "\n - %s: %s", addr, err)
+	}
+	return buf.String()
+}
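providerPluginErrors collects one error per provider and renders them as a single message, and callers can detect such an aggregate with errors.As. Here is a small self-contained analogue (pluginErrors is an invented type with string keys, not the internal one):

```go
package main

import (
	"errors"
	"fmt"
)

// pluginErrors aggregates per-provider failures into one error value.
type pluginErrors map[string]error

func (e pluginErrors) Error() string {
	msg := "missing or corrupted provider plugins:"
	for addr, err := range e {
		msg += fmt.Sprintf("\n - %s: %s", addr, err)
	}
	return msg
}

func main() {
	var err error = pluginErrors{
		"registry.terraform.io/hashicorp/test": errors.New("no version is selected"),
	}

	// A UI layer can sniff for the aggregate type to print a friendlier hint.
	var agg pluginErrors
	if errors.As(err, &agg) {
		fmt.Printf("%d provider(s) need attention:\n%s\n", len(agg), err)
	}
}
```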
@@ -1051,7 +1051,7 @@ func TestPlan_init_required(t *testing.T) {
		t.Fatalf("expected error, got success")
	}
	got := output.Stderr()
-	if !strings.Contains(got, `Error: Could not load plugin`) {
+	if !(strings.Contains(got, "terraform init") && strings.Contains(got, "provider registry.terraform.io/hashicorp/test: required by this configuration but no version is selected")) {
		t.Fatal("wrong error message in output:", got)
	}
}
@@ -1,11 +1,8 @@
package command

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"

@@ -36,46 +33,21 @@ func (m *Meta) storePluginPath(pluginPath []string) error {
		return nil
	}

-	path := filepath.Join(m.DataDir(), PluginPathFile)
+	m.fixupMissingWorkingDir()

	// remove the plugin dir record if the path was set to an empty string
	if len(pluginPath) == 1 && (pluginPath[0] == "") {
-		err := os.Remove(path)
-		if !os.IsNotExist(err) {
-			return err
-		}
-		return nil
+		return m.WorkingDir.SetForcedPluginDirs(nil)
	}

-	js, err := json.MarshalIndent(pluginPath, "", " ")
-	if err != nil {
-		return err
-	}
-
-	// if this fails, so will WriteFile
-	os.MkdirAll(m.DataDir(), 0755)
-
-	return ioutil.WriteFile(path, js, 0644)
+	return m.WorkingDir.SetForcedPluginDirs(pluginPath)
}

// Load the user-defined plugin search path into Meta.pluginPath if the file
// exists.
func (m *Meta) loadPluginPath() ([]string, error) {
-	js, err := ioutil.ReadFile(filepath.Join(m.DataDir(), PluginPathFile))
-	if os.IsNotExist(err) {
-		return nil, nil
-	}
-
-	if err != nil {
-		return nil, err
-	}
-
-	var pluginPath []string
-	if err := json.Unmarshal(js, &pluginPath); err != nil {
-		return nil, err
-	}
-
-	return pluginPath, nil
+	m.fixupMissingWorkingDir()
+	return m.WorkingDir.ForcedPluginDirs()
}

// the default location for automatically installed plugins
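The removed bodies of storePluginPath and loadPluginPath show the mechanism that the WorkingDir type now owns: the forced plugin directories are just a JSON-encoded list of strings kept in the working directory's data dir. A hedged, standalone sketch of that persistence, using a hypothetical plugin_path.json file name:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

const pluginPathFile = "plugin_path.json" // hypothetical name for this sketch

// savePluginDirs records the forced plugin directories, or removes the
// record entirely when the list is empty.
func savePluginDirs(dataDir string, dirs []string) error {
	if len(dirs) == 0 {
		err := os.Remove(filepath.Join(dataDir, pluginPathFile))
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		return nil
	}
	js, err := json.MarshalIndent(dirs, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(dataDir, 0755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dataDir, pluginPathFile), js, 0644)
}

// loadPluginDirs returns nil when no record exists.
func loadPluginDirs(dataDir string) ([]string, error) {
	js, err := os.ReadFile(filepath.Join(dataDir, pluginPathFile))
	if os.IsNotExist(err) {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	var dirs []string
	return dirs, json.Unmarshal(js, &dirs)
}

func main() {
	_ = savePluginDirs(".terraform-example", []string{"/opt/providers"})
	dirs, _ := loadPluginDirs(".terraform-example")
	fmt.Println(dirs)
}
```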
@@ -5,15 +5,8 @@ import (
	"fmt"
	"io/ioutil"
	"log"
	"path/filepath"
)

-func (m *Meta) providerPluginsLock() *pluginSHA256LockFile {
-	return &pluginSHA256LockFile{
-		Filename: filepath.Join(m.pluginDir(), "lock.json"),
-	}
-}
-
type pluginSHA256LockFile struct {
	Filename string
}
@@ -89,14 +89,20 @@ func (c *ProvidersSchemaCommand) Run(args []string) int {
	}

	// Get the context
-	ctx, _, ctxDiags := local.Context(opReq)
+	lr, _, ctxDiags := local.LocalRun(opReq)
	diags = diags.Append(ctxDiags)
	if ctxDiags.HasErrors() {
		c.showDiagnostics(diags)
		return 1
	}

-	schemas := ctx.Schemas()
+	schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
+	diags = diags.Append(moreDiags)
+	if moreDiags.HasErrors() {
+		c.showDiagnostics(diags)
+		return 1
+	}

	jsonSchemas, err := jsonprovider.Marshal(schemas)
	if err != nil {
		c.Ui.Error(fmt.Sprintf("Failed to marshal provider schemas to json: %s", err))
@@ -26,8 +26,7 @@ func (c *RefreshCommand) Run(rawArgs []string) int {

	// Instantiate the view, even if there are flag errors, so that we render
	// diagnostics according to the desired view
-	var view views.Refresh
-	view = views.NewRefresh(args.ViewType, c.View)
+	view := views.NewRefresh(args.ViewType, c.View)

	if diags.HasErrors() {
		view.Diagnostics(diags)
@@ -101,7 +101,7 @@ func (c *ShowCommand) Run(args []string) int {
	}

	// Get the context
-	ctx, _, ctxDiags := local.Context(opReq)
+	lr, _, ctxDiags := local.LocalRun(opReq)
	diags = diags.Append(ctxDiags)
	if ctxDiags.HasErrors() {
		c.showDiagnostics(diags)

@@ -109,7 +109,12 @@ func (c *ShowCommand) Run(args []string) int {
	}

	// Get the schemas from the context
-	schemas := ctx.Schemas()
+	schemas, moreDiags := lr.Core.Schemas(lr.Config, lr.InputState)
+	diags = diags.Append(moreDiags)
+	if moreDiags.HasErrors() {
+		c.showDiagnostics(diags)
+		return 1
+	}

	var planErr, stateErr error
	var plan *plans.Plan

@@ -148,7 +153,7 @@ func (c *ShowCommand) Run(args []string) int {

	if plan != nil {
		if jsonOutput {
-			config := ctx.Config()
+			config := lr.Config
			jsonPlan, err := jsonplan.Marshal(config, plan, stateFile, schemas)

			if err != nil {
@@ -103,8 +103,6 @@ func TestShow_aliasedProvider(t *testing.T) {
		},
	}

-	fmt.Println(os.Getwd())
-
	// the statefile created by testStateFile is named state.tfstate
	args := []string{"state.tfstate"}
	if code := c.Run(args); code != 0 {
@@ -121,7 +121,7 @@ func (c *StateMeta) lookupResourceInstanceAddr(state *states.State, allowMissing
		}
	}

-	if found == false && !allowMissing {
+	if !found && !allowMissing {
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"Unknown module",
@@ -9,17 +9,11 @@ import (

	"github.com/google/go-cmp/cmp"
	"github.com/mitchellh/cli"
-	"github.com/mitchellh/colorstring"

	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/states"
)

-var disabledColorize = &colorstring.Colorize{
-	Colors: colorstring.DefaultColors,
-	Disable: true,
-}
-
func TestStateMv(t *testing.T) {
	state := states.BuildState(func(s *states.SyncState) {
		s.SetResourceInstanceCurrent(
@@ -82,14 +82,18 @@ func (c *StateShowCommand) Run(args []string) int {
	}

	// Get the context (required to get the schemas)
-	ctx, _, ctxDiags := local.Context(opReq)
+	lr, _, ctxDiags := local.LocalRun(opReq)
	if ctxDiags.HasErrors() {
		c.showDiagnostics(ctxDiags)
		return 1
	}

	// Get the schemas from the context
-	schemas := ctx.Schemas()
+	schemas, diags := lr.Core.Schemas(lr.Config, lr.InputState)
+	if diags.HasErrors() {
+		c.showDiagnostics(diags)
+		return 1
+	}

	// Get the state
	env, err := c.Workspace()
@@ -495,7 +495,16 @@ func (c *TestCommand) testSuiteProviders(suiteDirs testCommandSuiteDirs, testPro
	return ret, diags
}

-func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory, state *states.State, plan *plans.Plan, destroy bool) (*terraform.Context, tfdiags.Diagnostics) {
+type testSuiteRunContext struct {
+	Core *terraform.Context
+
+	PlanMode   plans.Mode
+	Config     *configs.Config
+	InputState *states.State
+	Changes    *plans.Changes
+}
+
+func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory, state *states.State, plan *plans.Plan, destroy bool) (*testSuiteRunContext, tfdiags.Diagnostics) {
	var changes *plans.Changes
	if plan != nil {
		changes = plan.Changes

@@ -506,8 +515,7 @@ func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerF
		planMode = plans.DestroyMode
	}

-	return terraform.NewContext(&terraform.ContextOpts{
-		Config: suiteDirs.Config,
+	tfCtx, diags := terraform.NewContext(&terraform.ContextOpts{
		Providers: providerFactories,

		// We just use the provisioners from the main Meta here, because

@@ -519,73 +527,83 @@ func (c *TestCommand) testSuiteContext(suiteDirs testCommandSuiteDirs, providerF
		Meta: &terraform.ContextMeta{
			Env: "test_" + suiteDirs.SuiteName,
		},

-		State:    state,
-		Changes:  changes,
-		PlanMode: planMode,
	})
+	if diags.HasErrors() {
+		return nil, diags
+	}
+	return &testSuiteRunContext{
+		Core: tfCtx,
+
+		PlanMode:   planMode,
+		Config:     suiteDirs.Config,
+		InputState: state,
+		Changes:    changes,
+	}, diags
}

func (c *TestCommand) testSuitePlan(ctx context.Context, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*plans.Plan, tfdiags.Diagnostics) {
	log.Printf("[TRACE] terraform test: create plan for suite %q", suiteDirs.SuiteName)
-	tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, nil, false)
+	runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, nil, false)
	if diags.HasErrors() {
		return nil, diags
	}

-	// We'll also validate as part of planning, since the "terraform plan"
-	// command would typically do that and so inconsistencies we detect only
-	// during planning typically produce error messages saying that they are
-	// a bug in Terraform.
-	// (It's safe to use the same context for both validate and plan, because
-	// validate doesn't generate any new sticky content inside the context
-	// as plan and apply both do.)
-	moreDiags := tfCtx.Validate()
+	// We'll also validate as part of planning, to ensure that the test
+	// configuration would pass "terraform validate". This is actually
+	// largely redundant with the runCtx.Core.Plan call below, but was
+	// included here originally because Plan did _originally_ assume that
+	// an earlier Validate had already passed, but now does its own
+	// validation work as (mostly) a superset of validate.
+	moreDiags := runCtx.Core.Validate(runCtx.Config)
	diags = diags.Append(moreDiags)
	if diags.HasErrors() {
		return nil, diags
	}

-	plan, moreDiags := tfCtx.Plan()
+	plan, moreDiags := runCtx.Core.Plan(
+		runCtx.Config, runCtx.InputState, &terraform.PlanOpts{Mode: runCtx.PlanMode},
+	)
	diags = diags.Append(moreDiags)
	return plan, diags
}

func (c *TestCommand) testSuiteApply(ctx context.Context, plan *plans.Plan, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*states.State, tfdiags.Diagnostics) {
	log.Printf("[TRACE] terraform test: apply plan for suite %q", suiteDirs.SuiteName)
-	tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, plan, false)
+	runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, nil, plan, false)
	if diags.HasErrors() {
		// To make things easier on the caller, we'll return a valid empty
		// state even in this case.
		return states.NewState(), diags
	}

-	state, moreDiags := tfCtx.Apply()
+	state, moreDiags := runCtx.Core.Apply(plan, runCtx.Config)
	diags = diags.Append(moreDiags)
	return state, diags
}

func (c *TestCommand) testSuiteDestroy(ctx context.Context, state *states.State, suiteDirs testCommandSuiteDirs, providerFactories map[addrs.Provider]providers.Factory) (*states.State, tfdiags.Diagnostics) {
	log.Printf("[TRACE] terraform test: plan to destroy any existing objects for suite %q", suiteDirs.SuiteName)
-	tfCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, state, nil, true)
+	runCtx, diags := c.testSuiteContext(suiteDirs, providerFactories, state, nil, true)
	if diags.HasErrors() {
		return state, diags
	}

-	plan, moreDiags := tfCtx.Plan()
+	plan, moreDiags := runCtx.Core.Plan(
+		runCtx.Config, runCtx.InputState, &terraform.PlanOpts{Mode: runCtx.PlanMode},
+	)
	diags = diags.Append(moreDiags)
	if diags.HasErrors() {
		return state, diags
	}

	log.Printf("[TRACE] terraform test: apply the plan to destroy any existing objects for suite %q", suiteDirs.SuiteName)
-	tfCtx, moreDiags = c.testSuiteContext(suiteDirs, providerFactories, state, plan, true)
+	runCtx, moreDiags = c.testSuiteContext(suiteDirs, providerFactories, state, plan, true)
	diags = diags.Append(moreDiags)
	if diags.HasErrors() {
		return state, diags
	}

-	state, moreDiags = tfCtx.Apply()
+	state, moreDiags = runCtx.Core.Apply(plan, runCtx.Config)
	diags = diags.Append(moreDiags)
	return state, diags
}
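The testSuiteRunContext refactor replaces a context object that carried sticky plan/apply state with a small bundle of explicit inputs shared by Plan and Apply. A toy illustration of the same shape, with invented core, config, state and plan types standing in for the real terraform packages:

```go
package main

import "fmt"

// Invented stand-ins for terraform.Context, configs.Config, and friends.
type core struct{}
type config struct{ Name string }
type state struct{ Resources int }
type plan struct{ Changes int }

func (core) Plan(cfg *config, st *state) *plan { return &plan{Changes: 1} }
func (core) Apply(p *plan, cfg *config) *state { return &state{Resources: p.Changes} }

// runContext bundles the engine with the inputs every call needs, instead of
// hiding those inputs inside the engine itself.
type runContext struct {
	Core       core
	Config     *config
	InputState *state
}

func main() {
	rc := runContext{Core: core{}, Config: &config{Name: "suite"}, InputState: &state{}}
	p := rc.Core.Plan(rc.Config, rc.InputState)
	st := rc.Core.Apply(p, rc.Config)
	fmt.Println("applied changes:", st.Resources)
}
```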
@@ -0,0 +1 @@
+bar

@@ -0,0 +1,23 @@
+{
+    "version": 3,
+    "serial": 2,
+    "lineage": "2f3864a6-1d3e-1999-0f84-36cdb61179d3",
+    "backend": {
+        "type": "local",
+        "config": {
+            "path": null,
+            "workspace_dir": null
+        },
+        "hash": 666019178
+    },
+    "modules": [
+        {
+            "path": [
+                "root"
+            ],
+            "outputs": {},
+            "resources": {},
+            "depends_on": []
+        }
+    ]
+}
internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/main.tf (new file)
@@ -0,0 +1,7 @@
+terraform {
+  backend "local" {}
+}
+
+output "foo" {
+  value = "bar"
+}
internal/command/testdata/init-backend-selected-workspace-doesnt-exist-multi/terraform.tfstate (new file)
@@ -0,0 +1,13 @@
+{
+  "version": 4,
+  "terraform_version": "1.1.0",
+  "serial": 1,
+  "lineage": "cc4bb587-aa35-87ad-b3b7-7abdb574f2a1",
+  "outputs": {
+    "foo": {
+      "value": "bar",
+      "type": "string"
+    }
+  },
+  "resources": []
+}
@@ -0,0 +1,13 @@
+{
+  "version": 4,
+  "terraform_version": "1.1.0",
+  "serial": 1,
+  "lineage": "8ad3c77d-51aa-d90a-4f12-176f538b6e8b",
+  "outputs": {
+    "foo": {
+      "value": "bar",
+      "type": "string"
+    }
+  },
+  "resources": []
+}
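The fixtures above are v4 state files whose only interesting content is the foo output. One way a test could sanity-check such a fixture is to decode just the fields it cares about; a sketch with struct fields chosen to match the JSON shown, not Terraform's full state schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stateFixture models only the fields of the fixture that a test inspects.
type stateFixture struct {
	Version          int    `json:"version"`
	TerraformVersion string `json:"terraform_version"`
	Outputs          map[string]struct {
		Value string `json:"value"`
		Type  string `json:"type"`
	} `json:"outputs"`
}

func main() {
	raw := []byte(`{
	  "version": 4,
	  "terraform_version": "1.1.0",
	  "outputs": {"foo": {"value": "bar", "type": "string"}}
	}`)

	var st stateFixture
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Version, st.Outputs["foo"].Value) // 4 bar
}
```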