Merge remote-tracking branch 'origin/master' into validate-ignore-empty-provider
commit a39273cfa3
@ -7,7 +7,7 @@ When a bug report is filed, our goal is to either:
|
|||
|
||||
## Process
|
||||
|
||||
### 1. [Newly created issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Anew+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+-label%3A%22waiting-response%22+-label%3Aexplained) require initial filtering.
|
||||
### 1. [Newly created issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Anew+label%3Abug+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+-label%3A%22waiting-response%22+-label%3Aexplained+) require initial filtering.
|
||||
|
||||
These are raw reports that need categorization and support to clarify them. They need the following done:
|
||||
|
||||
|
@ -20,7 +20,7 @@ If an issue requires discussion with the user to get it out of this initial stat
|
|||
|
||||
Once this initial filtering has been done, remove the new label. If an issue subjectively looks very high-impact and likely to impact many users, assign it to the [appropriate milestone](https://github.com/hashicorp/terraform/milestones) to mark it as being urgent.
|
||||
|
||||
### 2. Clarify [unreproduced issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3E2020-05-01+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Acreated-asc)
|
||||
### 2. Clarify [unreproduced issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3E2020-05-01+-label%3Abackend%2Fk8s+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Acreated-asc+)
|
||||
|
||||
A core team member initially determines whether the issue is immediately reproducible. If they cannot readily reproduce it, they label it "waiting for reproduction" and correspond with the reporter to describe what is needed. When the issue is reproduced by a core team member, they label it "confirmed".
|
||||
|
||||
|
@ -29,15 +29,15 @@ A core team member initially determines whether the issue is immediately reprodu
|
|||
Note that the link above excludes issues reported before May 2020; this is to avoid including issues that were reported prior to this new process being implemented. [Unreproduced issues reported before May 2020](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3C2020-05-01+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Areactions-%2B1-desc) will be triaged as capacity permits.
|
||||
|
||||
|
||||
### 3. Explain or fix [confirmed issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+)
|
||||
### 3. Explain or fix [confirmed issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fk8s+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+)
|
||||
The next step for confirmed issues is to either:
|
||||
|
||||
* explain why the behavior is expected, label the issue as "working as designed", and close it, or
|
||||
* locate the cause of the defect in the codebase. When the defect is located, and that description is posted on the issue, the issue is labeled "explained". In many cases, this step will get skipped if the fix is obvious, and engineers will jump forward and make a PR.
|
||||
|
||||
[Confirmed crashes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Acrash+label%3Abug+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+) should generally be considered high impact
|
||||
[Confirmed crashes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Acrash+label%3Abug+-label%3Abackend%2Fk8s+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+) should generally be considered high impact
|
||||
|
||||
### 4. The last step for [explained issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aexplained+no%3Amilestone+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+) is to make a PR to fix them.
|
||||
### 4. The last step for [explained issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aexplained+no%3Amilestone+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+) is to make a PR to fix them.
|
||||
|
||||
Explained issues that are expected to be fixed in a future release should be assigned to a milestone.
|
||||
|
||||
|
@ -54,23 +54,23 @@ working as designed | confirmed as reported and closed because the behavior
|
|||
pending project | issue is confirmed but will require a significant project to fix
|
||||
|
||||
## Lack of response and unreproducible issues
|
||||
When bugs that have been [labeled waiting response](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+label%3Awaiting-response+-label%3Aexplained+sort%3Aupdated-asc) or [labeled "waiting for reproduction"](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+label%3A%22waiting+for+reproduction%22+-label%3Aexplained+sort%3Aupdated-asc+) for more than 30 days, we'll use our best judgement to determine whether it's more helpful to close it or prompt the reporter again. If they again go without a response for 30 days, they can be closed with a polite message explaining why and inviting the person to submit the needed information or reproduction case in the future.
|
||||
When bugs that have been [labeled waiting response](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fk8s+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+label%3Awaiting-response+-label%3Aexplained+sort%3Aupdated-asc+) or [labeled "waiting for reproduction"](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+label%3A%22waiting+for+reproduction%22+-label%3Aexplained+sort%3Aupdated-asc+) for more than 30 days, we'll use our best judgement to determine whether it's more helpful to close it or prompt the reporter again. If they again go without a response for 30 days, they can be closed with a polite message explaining why and inviting the person to submit the needed information or reproduction case in the future.
|
||||
|
||||
The intent of this process is to fix the maximum number of bugs in Terraform as quickly as possible, and having un-actionable bug reports makes it harder for Terraform Core team members and community contributors to find bugs they can actually work on.
|
||||
|
||||
## Helpful GitHub Filters
|
||||
|
||||
### Triage Process
|
||||
1. [Newly created issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Anew+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+-label%3A%22waiting-response%22+-label%3Aexplained) require initial filtering.
|
||||
2. Clarify [unreproduced issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3E2020-05-01+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Acreated-asc)
|
||||
3. Explain or fix [confirmed issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+). Prioritize [confirmed crashes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Acrash+label%3Abug+-label%3Aexplained+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+).
|
||||
4. Fix [explained issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aexplained+no%3Amilestone+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+)
|
||||
1. [Newly created issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Anew+label%3Abug+-label%3Abackend%2Foss+-label%3Abackend%2Fk8s+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3A%22waiting+for+reproduction%22+-label%3A%22waiting-response%22+-label%3Aexplained+) require initial filtering.
|
||||
2. Clarify [unreproduced issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+created%3A%3E2020-05-01+-label%3Abackend%2Fk8s+-label%3Aprovisioner%2Fsalt-masterless+-label%3Adocumentation+-label%3Aprovider%2Fazuredevops+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent+-label%3Abackend%2Fmanta+-label%3Abackend%2Fatlas+-label%3Abackend%2Fetcdv3+-label%3Abackend%2Fetcdv2+-label%3Aconfirmed+-label%3A%22pending+project%22+-label%3Anew+-label%3A%22waiting+for+reproduction%22+-label%3Awaiting-response+-label%3Aexplained+sort%3Acreated-asc+)
|
||||
3. Explain or fix [confirmed issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+-label%3Aexplained+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+). Prioritize [confirmed crashes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Acrash+label%3Abug+-label%3Aexplained+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+).
|
||||
4. Fix [explained issues](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aexplained+no%3Amilestone+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+label%3Aconfirmed+-label%3A%22pending+project%22+)
|
||||
|
||||
### Other Backlog
|
||||
|
||||
[Confirmed needs for documentation fixes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Adocumentation++label%3Aconfirmed+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+)
|
||||
[Confirmed needs for documentation fixes](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Adocumentation++label%3Aconfirmed+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+)
|
||||
|
||||
[Confirmed bugs that will require significant projects to fix](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aconfirmed+label%3A%22pending+project%22++-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2)
|
||||
[Confirmed bugs that will require significant projects to fix](https://github.com/hashicorp/terraform/issues?q=is%3Aopen+label%3Abug+label%3Aconfirmed+label%3A%22pending+project%22+-label%3Abackend%2Fk8s+-label%3Abackend%2Foss+-label%3Abackend%2Fazure+-label%3Abackend%2Fs3+-label%3Abackend%2Fgcs+-label%3Abackend%2Fconsul+-label%3Abackend%2Fartifactory+-label%3Aterraform-cloud+-label%3Abackend%2Fremote+-label%3Abackend%2Fswift+-label%3Abackend%2Fpg+-label%3Abackend%2Ftencent++-label%3Abackend%2Fmanta++-label%3Abackend%2Fatlas++-label%3Abackend%2Fetcdv3++-label%3Abackend%2Fetcdv2+)
|
||||
|
||||
### Milestone Use
|
||||
|
||||
@ -21,6 +21,8 @@ BUG FIXES:
|
|||
* cli: Exit with an error if unable to gather input from the UI. For example, this may happen when running in a non-interactive environment but without `-input=false`. Previously Terraform would interpret these errors as empty strings, which could be confusing. [GH-26509]
|
||||
* cli: TF_LOG levels other than `trace` will now work correctly [GH-26632]
|
||||
* cli: Core and Provider logs can now be enabled separately for debugging, using `TF_LOG_CORE` and `TF_LOG_PROVIDER` [GH-26685]
|
||||
* command/console: expressions using `path` (`path.root`, `path.module`) now return the same result as they would in a configuration [GH-27263]
|
||||
* command/state list: fix bug where nested modules' resources were missing from `state list` output [GH-27268]
|
||||
|
||||
## Previous Releases
|
||||
|
||||
@ -241,6 +241,30 @@ test_instance.foo:
|
|||
assertBackendStateUnlocked(t, b)
|
||||
}
|
||||
|
||||
func TestLocal_applyRefreshFalse(t *testing.T) {
|
||||
b, cleanup := TestLocal(t)
|
||||
defer cleanup()
|
||||
|
||||
p := TestLocalProvider(t, b, "test", planFixtureSchema())
|
||||
testStateFile(t, b.StatePath, testPlanState())
|
||||
|
||||
op, configCleanup := testOperationApply(t, "./testdata/plan")
|
||||
defer configCleanup()
|
||||
|
||||
run, err := b.Operation(context.Background(), op)
|
||||
if err != nil {
|
||||
t.Fatalf("bad: %s", err)
|
||||
}
|
||||
<-run.Done()
|
||||
if run.Result != backend.OperationSuccess {
|
||||
t.Fatalf("apply operation failed")
|
||||
}
|
||||
|
||||
if p.ReadResourceCalled {
|
||||
t.Fatal("ReadResource should not be called")
|
||||
}
|
||||
}
|
||||
|
||||
type backendWithFailingState struct {
|
||||
Local
|
||||
}
|
||||
@ -78,7 +78,7 @@ func (b *Local) context(op *backend.Operation) (*terraform.Context, *configload.
|
|||
opts.Targets = op.Targets
|
||||
opts.UIInput = op.UIIn
|
||||
|
||||
opts.SkipRefresh = op.Type == backend.OperationTypePlan && !op.PlanRefresh
|
||||
opts.SkipRefresh = op.Type != backend.OperationTypeRefresh && !op.PlanRefresh
|
||||
if opts.SkipRefresh {
|
||||
log.Printf("[DEBUG] backend/local: skipping refresh of managed resources")
|
||||
}
|
||||
|
|
|
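The one-line change above widens refresh-skipping from plan-only to every operation except an explicit refresh whenever refreshing was disabled for the run, which is what the `TestLocal_applyRefreshFalse` test earlier in this commit exercises. A minimal standalone sketch of the resulting rule (the type and constant names are stand-ins, not the real `backend` package):

```go
package main

import "fmt"

// OperationType is a stand-in for backend.OperationType.
type OperationType int

const (
	OperationTypePlan OperationType = iota
	OperationTypeApply
	OperationTypeRefresh
)

// skipRefresh mirrors the new condition:
// op.Type != backend.OperationTypeRefresh && !op.PlanRefresh.
func skipRefresh(opType OperationType, planRefresh bool) bool {
	return opType != OperationTypeRefresh && !planRefresh
}

func main() {
	fmt.Println(skipRefresh(OperationTypePlan, false))    // true: plan with refresh disabled skips refresh (old behavior)
	fmt.Println(skipRefresh(OperationTypeApply, false))   // true: apply with refresh disabled now also skips refresh
	fmt.Println(skipRefresh(OperationTypeApply, true))    // false: refresh still runs by default
	fmt.Println(skipRefresh(OperationTypeRefresh, false)) // false: an explicit refresh is never skipped
}
```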
@ -641,7 +641,10 @@ func (b *Remote) StateMgr(name string) (statemgr.Full, error) {
|
|||
// accidentally upgrade state with a new code path, and the version check
|
||||
// logic is coarser and simpler.
|
||||
if !b.ignoreVersionConflict {
|
||||
if workspace.TerraformVersion != tfversion.String() {
|
||||
wsv := workspace.TerraformVersion
|
||||
// Explicitly ignore the pseudo-version "latest" here, as it will cause
|
||||
// plan and apply to always fail.
|
||||
if wsv != tfversion.String() && wsv != "latest" {
|
||||
return nil, fmt.Errorf("Remote workspace Terraform version %q does not match local Terraform version %q", workspace.TerraformVersion, tfversion.String())
|
||||
}
|
||||
}
|
||||
|
@ -890,6 +893,13 @@ func (b *Remote) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.D
|
|||
return diags
|
||||
}
|
||||
|
||||
// If the workspace has the pseudo-version "latest", all bets are off. We
|
||||
// cannot reasonably determine what the intended Terraform version is, so
|
||||
// we'll skip version verification.
|
||||
if workspace.TerraformVersion == "latest" {
|
||||
return nil
|
||||
}
|
||||
|
||||
remoteVersion, err := version.NewSemver(workspace.TerraformVersion)
|
||||
if err != nil {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
|
|
|
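The guard added above treats the remote workspace pseudo-version "latest" as always acceptable, since there is no concrete version to compare against. Below is a compact, self-contained sketch of that compatibility rule; it is an assumed simplification of the real check (which is spread across `StateMgr` and `VerifyWorkspaceTerraformVersion`), but it matches the expectations visible in the test table further down, where versions in the same major/minor series pass and "latest" never errors:

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// compatible is a rough model of the remote backend's version gate:
// "latest" is always allowed, otherwise local and remote must share the
// same major/minor series.
func compatible(local, remote string) (bool, error) {
	if remote == "latest" {
		// Nothing meaningful to compare against, so skip the check entirely.
		return true, nil
	}
	lv, err := version.NewVersion(local)
	if err != nil {
		return false, err
	}
	rv, err := version.NewVersion(remote)
	if err != nil {
		return false, err
	}
	ls, rs := lv.Segments(), rv.Segments()
	return ls[0] == rs[0] && ls[1] == rs[1], nil
}

func main() {
	cases := [][2]string{
		{"0.14.0", "latest"}, // ok: pseudo-version is ignored
		{"1.2.0", "1.2.99"},  // ok: same major/minor series
		{"1.2.0", "1.3.0"},   // mismatch: different minor series
	}
	for _, tc := range cases {
		ok, err := compatible(tc[0], tc[1])
		fmt.Printf("local %s, remote %s -> %v %v\n", tc[0], tc[1], ok, err)
	}
}
```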
@ -515,6 +515,45 @@ func TestRemote_StateMgr_versionCheck(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestRemote_StateMgr_versionCheckLatest(t *testing.T) {
|
||||
b, bCleanup := testBackendDefault(t)
|
||||
defer bCleanup()
|
||||
|
||||
v0140 := version.Must(version.NewSemver("0.14.0"))
|
||||
|
||||
// Save original local version state and restore afterwards
|
||||
p := tfversion.Prerelease
|
||||
v := tfversion.Version
|
||||
s := tfversion.SemVer
|
||||
defer func() {
|
||||
tfversion.Prerelease = p
|
||||
tfversion.Version = v
|
||||
tfversion.SemVer = s
|
||||
}()
|
||||
|
||||
// For this test, the local Terraform version is set to 0.14.0
|
||||
tfversion.Prerelease = ""
|
||||
tfversion.Version = v0140.String()
|
||||
tfversion.SemVer = v0140
|
||||
|
||||
// Update the remote workspace to the pseudo-version "latest"
|
||||
if _, err := b.client.Workspaces.Update(
|
||||
context.Background(),
|
||||
b.organization,
|
||||
b.workspace,
|
||||
tfe.WorkspaceUpdateOptions{
|
||||
TerraformVersion: tfe.String("latest"),
|
||||
},
|
||||
); err != nil {
|
||||
t.Fatalf("error: %v", err)
|
||||
}
|
||||
|
||||
// This should succeed despite not being a string match
|
||||
if _, err := b.StateMgr(backend.DefaultStateName); err != nil {
|
||||
t.Fatalf("expected no error, got %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestRemote_VerifyWorkspaceTerraformVersion(t *testing.T) {
|
||||
testCases := []struct {
|
||||
local string
|
||||
|
@ -528,6 +567,7 @@ func TestRemote_VerifyWorkspaceTerraformVersion(t *testing.T) {
|
|||
{"0.14.0", "1.1.0", true},
|
||||
{"1.2.0", "1.2.99", false},
|
||||
{"1.2.0", "1.3.0", true},
|
||||
{"0.15.0", "latest", false},
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
t.Run(fmt.Sprintf("local %s, remote %s", tc.local, tc.remote), func(t *testing.T) {
|
||||
|
@ -535,7 +575,6 @@ func TestRemote_VerifyWorkspaceTerraformVersion(t *testing.T) {
|
|||
defer bCleanup()
|
||||
|
||||
local := version.Must(version.NewSemver(tc.local))
|
||||
remote := version.Must(version.NewSemver(tc.remote))
|
||||
|
||||
// Save original local version state and restore afterwards
|
||||
p := tfversion.Prerelease
|
||||
|
@ -559,7 +598,7 @@ func TestRemote_VerifyWorkspaceTerraformVersion(t *testing.T) {
|
|||
b.organization,
|
||||
b.workspace,
|
||||
tfe.WorkspaceUpdateOptions{
|
||||
TerraformVersion: tfe.String(remote.String()),
|
||||
TerraformVersion: tfe.String(tc.remote),
|
||||
},
|
||||
); err != nil {
|
||||
t.Fatalf("error: %v", err)
|
||||
@ -5,6 +5,7 @@ import (
|
|||
"log"
|
||||
|
||||
"github.com/hashicorp/terraform/backend"
|
||||
"github.com/hashicorp/terraform/backend/remote"
|
||||
"github.com/hashicorp/terraform/configs/configschema"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
|
@ -233,6 +234,12 @@ func getBackend(cfg cty.Value) (backend.Backend, cty.Value, tfdiags.Diagnostics)
|
|||
return nil, cty.NilVal, diags
|
||||
}
|
||||
|
||||
// If this is the enhanced remote backend, we want to disable the version
|
||||
// check, because this is a read-only operation
|
||||
if rb, ok := b.(*remote.Remote); ok {
|
||||
rb.IgnoreVersionConflict()
|
||||
}
|
||||
|
||||
return b, newVal, diags
|
||||
}
|
||||
|
||||
|
|
|
@ -35,6 +35,7 @@ func (c *ConsoleCommand) Run(args []string) int {
|
|||
c.Ui.Error(err.Error())
|
||||
return 1
|
||||
}
|
||||
configPath = c.Meta.normalizePath(configPath)
|
||||
|
||||
// Check for user-supplied plugin path
|
||||
if c.pluginPath, err = c.loadPluginPath(); err != nil {
|
||||
|
|
command/init.go
|
@ -5,7 +5,6 @@ import (
|
|||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
|
@ -31,14 +30,11 @@ import (
|
|||
// module and clones it to the working directory.
|
||||
type InitCommand struct {
|
||||
Meta
|
||||
|
||||
// getPlugins is for the -get-plugins flag
|
||||
getPlugins bool
|
||||
}
|
||||
|
||||
func (c *InitCommand) Run(args []string) int {
|
||||
var flagFromModule string
|
||||
var flagBackend, flagGet, flagUpgrade bool
|
||||
var flagBackend, flagGet, flagUpgrade, getPlugins bool
|
||||
var flagPluginPath FlagStringSlice
|
||||
var flagVerifyPlugins bool
|
||||
flagConfigExtra := newRawFlags("-backend-config")
|
||||
|
@ -49,7 +45,7 @@ func (c *InitCommand) Run(args []string) int {
|
|||
cmdFlags.Var(flagConfigExtra, "backend-config", "")
|
||||
cmdFlags.StringVar(&flagFromModule, "from-module", "", "copy the source of the given module into the directory before init")
|
||||
cmdFlags.BoolVar(&flagGet, "get", true, "")
|
||||
cmdFlags.BoolVar(&c.getPlugins, "get-plugins", true, "")
|
||||
cmdFlags.BoolVar(&getPlugins, "get-plugins", true, "no-op flag, use provider_installation blocks to customize provider installation")
|
||||
cmdFlags.BoolVar(&c.forceInitCopy, "force-copy", false, "suppress prompts about copying state data")
|
||||
cmdFlags.BoolVar(&c.Meta.stateLock, "lock", true, "lock state")
|
||||
cmdFlags.DurationVar(&c.Meta.stateLockTimeout, "lock-timeout", 0, "lock timeout")
|
||||
|
@ -66,7 +62,16 @@ func (c *InitCommand) Run(args []string) int {
|
|||
|
||||
if len(flagPluginPath) > 0 {
|
||||
c.pluginPath = flagPluginPath
|
||||
c.getPlugins = false
|
||||
}
|
||||
|
||||
// If users are setting the no-op -get-plugins flag, give them a warning,
|
||||
// this should allow us to remove the flag in the future.
|
||||
if !getPlugins {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Warning,
|
||||
"No-op -get-plugins flag used",
|
||||
`As of Terraform 0.13+, the -get-plugins=false flag is a no-op. If you would like to customize provider installation, use a provider_installation block or other available Terraform settings.`,
|
||||
))
|
||||
}
|
||||
|
||||
// Validate the arg count
|
||||
|
@ -468,12 +473,15 @@ func (c *InitCommand) getProviders(config *configs.Config, state *states.State,
|
|||
log.Printf("[DEBUG] will search for provider plugins in %s", pluginDirs)
|
||||
}
|
||||
|
||||
// Installation can be aborted by interruption signals
|
||||
ctx, done := c.InterruptibleContext()
|
||||
defer done()
|
||||
|
||||
// Because we're currently just streaming a series of events sequentially
|
||||
// into the terminal, we're showing only a subset of the events to keep
|
||||
// things relatively concise. Later it'd be nice to have a progress UI
|
||||
// where statuses update in-place, but we can't do that as long as we
|
||||
// are shimming our vt100 output to the legacy console API on Windows.
|
||||
missingProviders := make(map[addrs.Provider]struct{})
|
||||
evts := &providercache.InstallerEvents{
|
||||
PendingProviders: func(reqs map[addrs.Provider]getproviders.VersionConstraints) {
|
||||
c.Ui.Output(c.Colorize().Color(
|
||||
|
@ -511,10 +519,6 @@ func (c *InitCommand) getProviders(config *configs.Config, state *states.State,
|
|||
c.Ui.Info(fmt.Sprintf("- Installing %s v%s...", provider.ForDisplay(), version))
|
||||
},
|
||||
QueryPackagesFailure: func(provider addrs.Provider, err error) {
|
||||
// We track providers that had missing metadata because we might
|
||||
// generate additional hints for some of them at the end.
|
||||
missingProviders[provider] = struct{}{}
|
||||
|
||||
switch errorTy := err.(type) {
|
||||
case getproviders.ErrProviderNotFound:
|
||||
sources := errorTy.Sources
|
||||
|
@ -530,11 +534,22 @@ func (c *InitCommand) getProviders(config *configs.Config, state *states.State,
|
|||
),
|
||||
))
|
||||
case getproviders.ErrRegistryProviderNotKnown:
|
||||
// We might be able to suggest an alternative provider to use
|
||||
// instead of this one.
|
||||
var suggestion string
|
||||
alternative := getproviders.MissingProviderSuggestion(ctx, provider, inst.ProviderSource())
|
||||
if alternative != provider {
|
||||
suggestion = fmt.Sprintf(
|
||||
"\n\nDid you intend to use %s? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on %s, run the following command:\n terraform providers",
|
||||
alternative.ForDisplay(), provider.ForDisplay(),
|
||||
)
|
||||
}
|
||||
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Failed to query available provider packages",
|
||||
fmt.Sprintf("Could not retrieve the list of available versions for provider %s: %s",
|
||||
provider.ForDisplay(), err,
|
||||
fmt.Sprintf("Could not retrieve the list of available versions for provider %s: %s%s",
|
||||
provider.ForDisplay(), err, suggestion,
|
||||
),
|
||||
))
|
||||
case getproviders.ErrHostNoProviders:
|
||||
|
@ -730,6 +745,7 @@ func (c *InitCommand) getProviders(config *configs.Config, state *states.State,
|
|||
))
|
||||
},
|
||||
}
|
||||
ctx = evts.OnContext(ctx)
|
||||
|
||||
// Dev overrides cause the result of "terraform init" to be irrelevant for
|
||||
// any overridden providers, so we'll warn about it to avoid later
|
||||
|
@ -741,70 +757,12 @@ func (c *InitCommand) getProviders(config *configs.Config, state *states.State,
|
|||
if upgrade {
|
||||
mode = providercache.InstallUpgrades
|
||||
}
|
||||
// Installation can be aborted by interruption signals
|
||||
ctx, done := c.InterruptibleContext()
|
||||
defer done()
|
||||
ctx = evts.OnContext(ctx)
|
||||
newLocks, err := inst.EnsureProviderVersions(ctx, previousLocks, reqs, mode)
|
||||
if ctx.Err() == context.Canceled {
|
||||
c.showDiagnostics(diags)
|
||||
c.Ui.Error("Provider installation was canceled by an interrupt signal.")
|
||||
return true, true, diags
|
||||
}
|
||||
if len(missingProviders) > 0 {
|
||||
// If we encountered requirements for one or more providers where we
|
||||
// weren't able to find any metadata, that _might_ be because a
|
||||
// user had previously (before 0.14) been incorrectly using the
|
||||
// .terraform/plugins directory as if it were a local filesystem
|
||||
// mirror, rather than as the main cache directory.
|
||||
//
|
||||
// We no longer allow that because it'd be ambiguous whether plugins in
|
||||
// there are explicitly intended to be a local mirror or if they are
|
||||
// just leftover cache entries from provider installation in
|
||||
// Terraform 0.13.
|
||||
//
|
||||
// To help those users migrate we have a specialized warning message
|
||||
// for it, which we'll produce only if one of the missing providers can
|
||||
// be seen in the "legacy" cache directory, which is what we're now
|
||||
// considering .terraform/plugins to be. (The _current_ cache directory
|
||||
// is .terraform/providers.)
|
||||
//
|
||||
// This is only a heuristic, so it might potentially produce false
|
||||
// positives if a user happens to encounter another sort of error
|
||||
// while they are upgrading from Terraform 0.13 to 0.14. Aside from
|
||||
// upgrading users should not end up in here because they won't
|
||||
// have a legacy cache directory at all.
|
||||
legacyDir := c.providerLegacyCacheDir()
|
||||
if legacyDir != nil { // if the legacy directory is present at all
|
||||
for missingProvider := range missingProviders {
|
||||
if missingProvider.IsDefault() {
|
||||
// If we get here for a default provider then it's more
|
||||
// likely that something _else_ went wrong, like a network
|
||||
// problem, so we'll skip the warning in this case to
|
||||
// avoid potentially misleading the user into creating an
|
||||
// unnecessary local mirror for an official provider.
|
||||
continue
|
||||
}
|
||||
entry := legacyDir.ProviderLatestVersion(missingProvider)
|
||||
if entry == nil {
|
||||
continue
|
||||
}
|
||||
// If we get here then the missing provider was cached, which
|
||||
// implies that it might be an in-house provider the user
|
||||
// placed manually to try to make Terraform use it as if it
|
||||
// were a local mirror directory.
|
||||
wantDir := filepath.FromSlash(fmt.Sprintf("terraform.d/plugins/%s/%s/%s", missingProvider, entry.Version, getproviders.CurrentPlatform))
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Warning,
|
||||
"Missing provider is in legacy cache directory",
|
||||
fmt.Sprintf(
|
||||
"Terraform supports a number of local directories that can serve as automatic local filesystem mirrors, but .terraform/plugins is not one of them because Terraform v0.13 and earlier used this directory to cache copies of provider plugins retrieved from elsewhere.\n\nIf you intended to use this directory as a filesystem mirror for %s, place it instead in the following directory:\n %s",
|
||||
missingProvider, wantDir,
|
||||
),
|
||||
))
|
||||
}
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
// The errors captured in "err" should be redundant with what we
|
||||
// received via the InstallerEvents callbacks above, so we'll
|
||||
|
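The legacy-cache heuristic described above only warns when a missing provider happens to exist in the old `.terraform/plugins` cache, and the warning points at the equivalent location under `terraform.d/plugins`. A small illustration of that path translation; the provider address and version echo the `example.com/test/b` 1.1.0 entry used by the init test fixtures shown further down, and the platform string is an assumed example:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Illustrative values only; the real code derives these from the
	// missing provider, its cached version, and getproviders.CurrentPlatform.
	provider, version, platform := "example.com/test/b", "1.1.0", "linux_amd64"

	// Old cache location, no longer treated as a filesystem mirror.
	legacy := filepath.FromSlash(fmt.Sprintf(".terraform/plugins/%s/%s/%s", provider, version, platform))
	// Directory the warning suggests as an implied local filesystem mirror.
	mirror := filepath.FromSlash(fmt.Sprintf("terraform.d/plugins/%s/%s/%s", provider, version, platform))

	fmt.Println("legacy cache entry:", legacy)
	fmt.Println("place mirrors here:", mirror)
}
```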
@ -971,7 +929,6 @@ func (c *InitCommand) AutocompleteFlags() complete.Flags {
|
|||
"-force-copy": complete.PredictNothing,
|
||||
"-from-module": completePredictModuleSource,
|
||||
"-get": completePredictBoolean,
|
||||
"-get-plugins": completePredictBoolean,
|
||||
"-input": completePredictBoolean,
|
||||
"-lock": completePredictBoolean,
|
||||
"-lock-timeout": complete.PredictAnything,
|
||||
|
@ -1024,6 +981,9 @@ Options:
|
|||
-get=true Download any modules for this configuration.
|
||||
|
||||
-get-plugins=true Download any missing plugins for this configuration.
|
||||
This flag is a no-op in Terraform 0.13+: use
|
||||
-plugin-dir settings or provider_installation blocks
|
||||
instead.
|
||||
|
||||
-input=true Ask for input if necessary. If false, will error if
|
||||
input was required.
|
||||
|
@ -1042,8 +1002,8 @@ Options:
|
|||
-reconfigure Reconfigure the backend, ignoring any saved
|
||||
configuration.
|
||||
|
||||
-upgrade=false If installing modules (-get) or plugins (-get-plugins),
|
||||
ignore previously-downloaded objects and install the
|
||||
-upgrade=false If installing modules (-get) or plugins, ignore
|
||||
previously-downloaded objects and install the
|
||||
latest version allowed within configured constraints.
|
||||
|
||||
-verify-plugins=true Verify the authenticity and integrity of automatically
|
||||
@ -226,7 +226,6 @@ func TestInit_getUpgradeModules(t *testing.T) {
|
|||
|
||||
args := []string{
|
||||
"-get=true",
|
||||
"-get-plugins=false",
|
||||
"-upgrade",
|
||||
testFixturePath("init-get"),
|
||||
}
|
||||
|
@ -1054,85 +1053,6 @@ func TestInit_getProviderSource(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestInit_getProviderInLegacyPluginCacheDir(t *testing.T) {
|
||||
// Create a temporary working directory that is empty
|
||||
td := tempDir(t)
|
||||
testCopyDir(t, testFixturePath("init-legacy-provider-cache"), td)
|
||||
defer os.RemoveAll(td)
|
||||
defer testChdir(t, td)()
|
||||
|
||||
// The test fixture has placeholder os_arch directories which we must
|
||||
// now rename to match the current platform, or else the entries inside
|
||||
// will be ignored.
|
||||
platformStr := getproviders.CurrentPlatform.String()
|
||||
if err := os.Rename(
|
||||
".terraform/plugins/example.com/test/b/1.1.0/os_arch",
|
||||
".terraform/plugins/example.com/test/b/1.1.0/"+platformStr,
|
||||
); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if err := os.Rename(
|
||||
".terraform/plugins/registry.terraform.io/hashicorp/c/2.0.0/os_arch",
|
||||
".terraform/plugins/registry.terraform.io/hashicorp/c/2.0.0/"+platformStr,
|
||||
); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// An empty MultiSource serves as a way to make sure no providers are
|
||||
// actually available for installation, which suits us here because
|
||||
// we're testing an error case.
|
||||
providerSource := getproviders.MultiSource{}
|
||||
|
||||
ui := cli.NewMockUi()
|
||||
m := Meta{
|
||||
Ui: ui,
|
||||
ProviderSource: providerSource,
|
||||
}
|
||||
|
||||
c := &InitCommand{
|
||||
Meta: m,
|
||||
}
|
||||
|
||||
args := []string{
|
||||
"-backend=false",
|
||||
}
|
||||
if code := c.Run(args); code == 0 {
|
||||
t.Fatalf("succeeded; want error\n%s", ui.OutputWriter.String())
|
||||
}
|
||||
|
||||
// We remove all of the newlines so that we don't need to contend with
|
||||
// the automatic word wrapping that our diagnostic printer does.
|
||||
stderr := strings.Replace(ui.ErrorWriter.String(), "\n", " ", -1)
|
||||
|
||||
if got, want := stderr, `example.com/test/a: no available releases match the given constraints`; !strings.Contains(got, want) {
|
||||
t.Errorf("missing error about example.com/test/a\nwant substring: %s\n%s", want, got)
|
||||
}
|
||||
if got, want := stderr, `example.com/test/b: no available releases match the given constraints`; !strings.Contains(got, want) {
|
||||
t.Errorf("missing error about example.com/test/b\nwant substring: %s\n%s", want, got)
|
||||
}
|
||||
if got, want := stderr, `hashicorp/c: no available releases match the given constraints`; !strings.Contains(got, want) {
|
||||
t.Errorf("missing error about registry.terraform.io/hashicorp/c\nwant substring: %s\n%s", want, got)
|
||||
}
|
||||
|
||||
if got, want := stderr, `terraform.d/plugins/example.com/test/a`; strings.Contains(got, want) {
|
||||
// We _don't_ expect to see a warning about the "a" provider, because
|
||||
// there's no copy of that in the legacy plugin cache dir.
|
||||
t.Errorf("unexpected suggested path for local example.com/test/a\ndon't want substring: %s\n%s", want, got)
|
||||
}
|
||||
if got, want := stderr, `terraform.d/plugins/example.com/test/b/1.1.0/`+platformStr; !strings.Contains(got, want) {
|
||||
// ...but we should see a warning about the "b" provider, because
|
||||
// there's an entry for that in the legacy cache dir.
|
||||
t.Errorf("missing suggested path for local example.com/test/b 1.0.0 on %s\nwant substring: %s\n%s", platformStr, want, got)
|
||||
}
|
||||
if got, want := stderr, `terraform.d/plugins/registry.terraform.io/hashicorp/c`; strings.Contains(got, want) {
|
||||
// We _don't_ expect to see a warning about the "a" provider, even
|
||||
// though it's in the cache dir, because it's an official provider
|
||||
// and so we assume it ended up there as a result of normal provider
|
||||
// installation in Terraform 0.13.
|
||||
t.Errorf("unexpected suggested path for local hashicorp/c\ndon't want substring: %s\n%s", want, got)
|
||||
}
|
||||
}
|
||||
|
||||
func TestInit_getProviderLegacyFromState(t *testing.T) {
|
||||
// Create a temporary working directory that is empty
|
||||
td := tempDir(t)
|
||||
|
|
|
@ -128,31 +128,6 @@ func (m *Meta) providerGlobalCacheDir() *providercache.Dir {
|
|||
return providercache.NewDir(dir)
|
||||
}
|
||||
|
||||
// providerLegacyCacheDir returns an object representing the former location
|
||||
// of the local cache directory from Terraform 0.13 and earlier.
|
||||
//
|
||||
// This is no longer viable for use as a real cache directory because some
|
||||
// incorrect documentation called for Terraform Cloud users to use it as if it
|
||||
// were an implied local filesystem mirror directory. Therefore we now use it
|
||||
// only to generate some hopefully-helpful migration guidance during
|
||||
// "terraform init" for anyone who _was_ trying to use it as a local filesystem
|
||||
// mirror directory.
|
||||
//
|
||||
// providerLegacyCacheDir returns nil if the legacy cache directory isn't
|
||||
// present or isn't a directory, so that callers can more easily skip over
|
||||
// any backward compatibility behavior that applies only when the directory
|
||||
// is present.
|
||||
//
|
||||
// Callers must use the resulting object in a read-only mode only. Don't
|
||||
// install any new providers into this directory.
|
||||
func (m *Meta) providerLegacyCacheDir() *providercache.Dir {
|
||||
dir := filepath.Join(m.DataDir(), "plugins")
|
||||
if info, err := os.Stat(dir); err != nil || !info.IsDir() {
|
||||
return nil
|
||||
}
|
||||
return providercache.NewDir(dir)
|
||||
}
|
||||
|
||||
// providerInstallSource returns an object that knows how to consult one or
|
||||
// more external sources to determine the availability of and package
|
||||
// locations for versions of Terraform providers that are available for
|
||||
@ -5,6 +5,8 @@ import (
|
|||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
"github.com/zclconf/go-cty/cty/convert"
|
||||
ctyjson "github.com/zclconf/go-cty/cty/json"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
|
@ -17,14 +19,19 @@ import (
|
|||
// from a Terraform state and prints it.
|
||||
type OutputCommand struct {
|
||||
Meta
|
||||
|
||||
// Unit tests may set rawPrint to capture the output from the -raw
|
||||
// option, which would normally go to stdout directly.
|
||||
rawPrint func(string)
|
||||
}
|
||||
|
||||
func (c *OutputCommand) Run(args []string) int {
|
||||
args = c.Meta.process(args)
|
||||
var module, statePath string
|
||||
var jsonOutput bool
|
||||
var jsonOutput, rawOutput bool
|
||||
cmdFlags := c.Meta.defaultFlagSet("output")
|
||||
cmdFlags.BoolVar(&jsonOutput, "json", false, "json")
|
||||
cmdFlags.BoolVar(&rawOutput, "raw", false, "raw")
|
||||
cmdFlags.StringVar(&statePath, "state", "", "path")
|
||||
cmdFlags.StringVar(&module, "module", "", "module")
|
||||
cmdFlags.Usage = func() { c.Ui.Error(c.Help()) }
|
||||
|
@ -42,6 +49,18 @@ func (c *OutputCommand) Run(args []string) int {
|
|||
return 1
|
||||
}
|
||||
|
||||
if jsonOutput && rawOutput {
|
||||
c.Ui.Error("The -raw and -json options are mutually-exclusive.\n")
|
||||
cmdFlags.Usage()
|
||||
return 1
|
||||
}
|
||||
|
||||
if rawOutput && len(args) == 0 {
|
||||
c.Ui.Error("You must give the name of a single output value when using the -raw option.\n")
|
||||
cmdFlags.Usage()
|
||||
return 1
|
||||
}
|
||||
|
||||
name := ""
|
||||
if len(args) > 0 {
|
||||
name = args[0]
|
||||
|
@ -187,14 +206,65 @@ func (c *OutputCommand) Run(args []string) int {
|
|||
}
|
||||
v := os.Value
|
||||
|
||||
if jsonOutput {
|
||||
switch {
|
||||
case jsonOutput:
|
||||
jsonOutput, err := ctyjson.Marshal(v, v.Type())
|
||||
if err != nil {
|
||||
return 1
|
||||
}
|
||||
|
||||
c.Ui.Output(string(jsonOutput))
|
||||
} else {
|
||||
case rawOutput:
|
||||
strV, err := convert.Convert(v, cty.String)
|
||||
if err != nil {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Unsupported value for raw output",
|
||||
fmt.Sprintf(
|
||||
"The -raw option only supports strings, numbers, and boolean values, but output value %q is %s.\n\nUse the -json option for machine-readable representations of output values that have complex types.",
|
||||
name, v.Type().FriendlyName(),
|
||||
),
|
||||
))
|
||||
c.showDiagnostics(diags)
|
||||
return 1
|
||||
}
|
||||
if strV.IsNull() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Unsupported value for raw output",
|
||||
fmt.Sprintf(
|
||||
"The value for output value %q is null, so -raw mode cannot print it.",
|
||||
name,
|
||||
),
|
||||
))
|
||||
c.showDiagnostics(diags)
|
||||
return 1
|
||||
}
|
||||
if !strV.IsKnown() {
|
||||
// Since we're working with values from the state it would be very
|
||||
// odd to end up in here, but we'll handle it anyway to avoid a
|
||||
// panic in case our rules somehow change in future.
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Unsupported value for raw output",
|
||||
fmt.Sprintf(
|
||||
"The value for output value %q won't be known until after a successful terraform apply, so -raw mode cannot print it.",
|
||||
name,
|
||||
),
|
||||
))
|
||||
c.showDiagnostics(diags)
|
||||
return 1
|
||||
}
|
||||
// If we get out here then we should have a valid string to print.
|
||||
// We're writing it directly to the output here so that a shell caller
|
||||
// will get exactly the value and no extra whitespace.
|
||||
str := strV.AsString()
|
||||
if c.rawPrint != nil {
|
||||
c.rawPrint(str)
|
||||
} else {
|
||||
fmt.Print(str)
|
||||
}
|
||||
default:
|
||||
result := repl.FormatValue(v, 0)
|
||||
c.Ui.Output(result)
|
||||
}
|
||||
|
@ -219,8 +289,12 @@ Options:
|
|||
-no-color If specified, output won't contain any color.
|
||||
|
||||
-json If specified, machine readable output will be
|
||||
printed in JSON format
|
||||
printed in JSON format.
|
||||
|
||||
-raw For value types that can be automatically
|
||||
converted to a string, will print the raw
|
||||
string directly, rather than a human-oriented
|
||||
representation of the value.
|
||||
`
|
||||
return strings.TrimSpace(helpText)
|
||||
}
|
||||
|
|
|
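The `-raw` branch above only prints values that cty can convert to a string, and rejects null, unknown, and complex values with an error. A self-contained sketch of that filtering, reusing the same cty conversion call; the sample values mirror the ones exercised by `TestOutput_raw` below:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	samples := map[string]cty.Value{
		"str":  cty.StringVal("bar"),
		"num":  cty.NumberIntVal(2),
		"bool": cty.True,
		"obj":  cty.EmptyObjectVal,
		"null": cty.NullVal(cty.String),
	}
	for name, v := range samples {
		// Same conversion the command performs before printing.
		s, err := convert.Convert(v, cty.String)
		if err != nil || s.IsNull() {
			fmt.Printf("%s: not printable with -raw\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, s.AsString())
	}
}
```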
@ -130,6 +130,92 @@ func TestOutput_json(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestOutput_raw(t *testing.T) {
|
||||
originalState := states.BuildState(func(s *states.SyncState) {
|
||||
s.SetOutputValue(
|
||||
addrs.OutputValue{Name: "str"}.Absolute(addrs.RootModuleInstance),
|
||||
cty.StringVal("bar"),
|
||||
false,
|
||||
)
|
||||
s.SetOutputValue(
|
||||
addrs.OutputValue{Name: "multistr"}.Absolute(addrs.RootModuleInstance),
|
||||
cty.StringVal("bar\nbaz"),
|
||||
false,
|
||||
)
|
||||
s.SetOutputValue(
|
||||
addrs.OutputValue{Name: "num"}.Absolute(addrs.RootModuleInstance),
|
||||
cty.NumberIntVal(2),
|
||||
false,
|
||||
)
|
||||
s.SetOutputValue(
|
||||
addrs.OutputValue{Name: "bool"}.Absolute(addrs.RootModuleInstance),
|
||||
cty.True,
|
||||
false,
|
||||
)
|
||||
s.SetOutputValue(
|
||||
addrs.OutputValue{Name: "obj"}.Absolute(addrs.RootModuleInstance),
|
||||
cty.EmptyObjectVal,
|
||||
false,
|
||||
)
|
||||
s.SetOutputValue(
|
||||
addrs.OutputValue{Name: "null"}.Absolute(addrs.RootModuleInstance),
|
||||
cty.NullVal(cty.String),
|
||||
false,
|
||||
)
|
||||
})
|
||||
|
||||
statePath := testStateFile(t, originalState)
|
||||
|
||||
tests := map[string]struct {
|
||||
WantOutput string
|
||||
WantErr bool
|
||||
}{
|
||||
"str": {WantOutput: "bar"},
|
||||
"multistr": {WantOutput: "bar\nbaz"},
|
||||
"num": {WantOutput: "2"},
|
||||
"bool": {WantOutput: "true"},
|
||||
"obj": {WantErr: true},
|
||||
"null": {WantErr: true},
|
||||
}
|
||||
|
||||
for name, test := range tests {
|
||||
t.Run(name, func(t *testing.T) {
|
||||
var printed string
|
||||
ui := cli.NewMockUi()
|
||||
c := &OutputCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(testProvider()),
|
||||
Ui: ui,
|
||||
},
|
||||
rawPrint: func(s string) {
|
||||
printed = s
|
||||
},
|
||||
}
|
||||
args := []string{
|
||||
"-state", statePath,
|
||||
"-raw",
|
||||
name,
|
||||
}
|
||||
code := c.Run(args)
|
||||
|
||||
if code != 0 {
|
||||
if !test.WantErr {
|
||||
t.Errorf("unexpected failure\n%s", ui.ErrorWriter.String())
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
if test.WantErr {
|
||||
t.Fatalf("succeeded, but want error")
|
||||
}
|
||||
|
||||
if got, want := printed, test.WantOutput; got != want {
|
||||
t.Errorf("wrong result\ngot: %q\nwant: %q", got, want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestOutput_emptyOutputs(t *testing.T) {
|
||||
originalState := states.NewState()
|
||||
statePath := testStateFile(t, originalState)
|
||||
@ -202,6 +202,75 @@ func TestStateList_noState(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestStateList_modules(t *testing.T) {
|
||||
// Create a temporary working directory that is empty
|
||||
td := tempDir(t)
|
||||
testCopyDir(t, testFixturePath("state-list-nested-modules"), td)
|
||||
defer os.RemoveAll(td)
|
||||
defer testChdir(t, td)()
|
||||
|
||||
p := testProvider()
|
||||
ui := cli.NewMockUi()
|
||||
c := &StateListCommand{
|
||||
Meta: Meta{
|
||||
testingOverrides: metaOverridesForProvider(p),
|
||||
Ui: ui,
|
||||
},
|
||||
}
|
||||
|
||||
t.Run("list resources in module and submodules", func(t *testing.T) {
|
||||
args := []string{"module.nest"}
|
||||
if code := c.Run(args); code != 0 {
|
||||
t.Fatalf("bad: %d", code)
|
||||
}
|
||||
|
||||
// resources in the module and any submodules should be included in the outputs
|
||||
expected := "module.nest.test_instance.nest\nmodule.nest.module.subnest.test_instance.subnest\n"
|
||||
actual := ui.OutputWriter.String()
|
||||
if actual != expected {
|
||||
t.Fatalf("Expected:\n%q\n\nTo equal: %q", actual, expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("submodule has resources only", func(t *testing.T) {
|
||||
// now get the state for a module that has no resources, only another nested module
|
||||
ui.OutputWriter.Reset()
|
||||
args := []string{"module.nonexist"}
|
||||
if code := c.Run(args); code != 0 {
|
||||
t.Fatalf("bad: %d", code)
|
||||
}
|
||||
expected := "module.nonexist.module.child.test_instance.child\n"
|
||||
actual := ui.OutputWriter.String()
|
||||
if actual != expected {
|
||||
t.Fatalf("Expected:\n%q\n\nTo equal: %q", actual, expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("expanded module", func(t *testing.T) {
|
||||
// finally get the state for a module with an index
|
||||
ui.OutputWriter.Reset()
|
||||
args := []string{"module.count"}
|
||||
if code := c.Run(args); code != 0 {
|
||||
t.Fatalf("bad: %d", code)
|
||||
}
|
||||
expected := "module.count[0].test_instance.count\nmodule.count[1].test_instance.count\n"
|
||||
actual := ui.OutputWriter.String()
|
||||
if actual != expected {
|
||||
t.Fatalf("Expected:\n%q\n\nTo equal: %q", actual, expected)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("completely nonexistent module", func(t *testing.T) {
|
||||
// look up a module that does not exist in the state at all
|
||||
ui.OutputWriter.Reset()
|
||||
args := []string{"module.notevenalittlebit"}
|
||||
if code := c.Run(args); code != 1 {
|
||||
t.Fatalf("bad: %d", code)
|
||||
}
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
const testStateListOutput = `
|
||||
test_instance.foo
|
||||
`
|
||||
|
|
|
@ -103,24 +103,32 @@ func (c *StateMeta) lookupResourceInstanceAddr(state *states.State, allowMissing
|
|||
case addrs.ModuleInstance:
|
||||
// Matches all instances within the indicated module and all of its
|
||||
// descendent modules.
|
||||
|
||||
// found is used to identify cases where the selected module has no
|
||||
// resources, but one or more of its submodules does.
|
||||
found := false
|
||||
ms := state.Module(addr)
|
||||
if ms == nil {
|
||||
if !allowMissing {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Unknown module",
|
||||
fmt.Sprintf(`The current state contains no module at %s. If you've just added this module to the configuration, you must run "terraform apply" first to create the module's entry in the state.`, addr),
|
||||
))
|
||||
}
|
||||
break
|
||||
if ms != nil {
|
||||
found = true
|
||||
ret = append(ret, c.collectModuleResourceInstances(ms)...)
|
||||
}
|
||||
ret = append(ret, c.collectModuleResourceInstances(ms)...)
|
||||
for _, cms := range state.Modules {
|
||||
candidateAddr := ms.Addr
|
||||
if len(candidateAddr) > len(addr) && candidateAddr[:len(addr)].Equal(addr) {
|
||||
ret = append(ret, c.collectModuleResourceInstances(cms)...)
|
||||
if !addr.Equal(cms.Addr) {
|
||||
if addr.IsAncestor(cms.Addr) || addr.TargetContains(cms.Addr) {
|
||||
found = true
|
||||
ret = append(ret, c.collectModuleResourceInstances(cms)...)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if !found && !allowMissing {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Unknown module",
|
||||
fmt.Sprintf(`The current state contains no module at %s. If you've just added this module to the configuration, you must run "terraform apply" first to create the module's entry in the state.`, addr),
|
||||
))
|
||||
}
|
||||
|
||||
case addrs.AbsResource:
|
||||
// Matches all instances of the specific selected resource.
|
||||
rs := state.Resource(addr)
|
||||
|
|
|
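The descendant matching above relies on the addrs package helpers. As a rough standalone illustration of the underlying idea only (this is a hypothetical sketch, not the addrs API), a module instance address can be viewed as a path of names, and the ancestor test reduces to a prefix comparison:

package main

import "fmt"

// moduleAddr is a simplified stand-in for a module instance address,
// represented as the ordered list of module names from the root.
type moduleAddr []string

// isAncestor reports whether a is a strict ancestor of b, i.e. b lives
// somewhere underneath a in the module tree.
func (a moduleAddr) isAncestor(b moduleAddr) bool {
	if len(b) <= len(a) {
		return false
	}
	for i, name := range a {
		if b[i] != name {
			return false
		}
	}
	return true
}

func main() {
	nonexist := moduleAddr{"nonexist"}
	child := moduleAddr{"nonexist", "child"}
	other := moduleAddr{"nest", "subnest"}

	fmt.Println(nonexist.isAncestor(child)) // true: resources in the nested child still match
	fmt.Println(nonexist.isAncestor(other)) // false: unrelated module
}

This mirrors why "module.nonexist" in the test above lists resources even though that module itself holds none: its nested child module does.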
@ -0,0 +1,91 @@
|
|||
{
|
||||
"version": 4,
|
||||
"terraform_version": "0.15.0",
|
||||
"serial": 8,
|
||||
"lineage": "00bfda35-ad61-ec8d-c013-14b0320bc416",
|
||||
"resources": [
|
||||
{
|
||||
"mode": "managed",
|
||||
"type": "test_instance",
|
||||
"name": "root",
|
||||
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
|
||||
"instances": [
|
||||
{
|
||||
"attributes": {
|
||||
"id": "root"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"module": "module.nest",
|
||||
"mode": "managed",
|
||||
"type": "test_instance",
|
||||
"name": "nest",
|
||||
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
|
||||
"instances": [
|
||||
{
|
||||
"attributes": {
|
||||
"ami": "nested"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"module": "module.nest.module.subnest",
|
||||
"mode": "managed",
|
||||
"type": "test_instance",
|
||||
"name": "subnest",
|
||||
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
|
||||
"instances": [
|
||||
{
|
||||
"attributes": {
|
||||
"id": "subnested"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"module": "module.nonexist.module.child",
|
||||
"mode": "managed",
|
||||
"type": "test_instance",
|
||||
"name": "child",
|
||||
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
|
||||
"instances": [
|
||||
{
|
||||
"attributes": {
|
||||
"id": "child"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"module": "module.count[0]",
|
||||
"mode": "managed",
|
||||
"type": "test_instance",
|
||||
"name": "count",
|
||||
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
|
||||
"instances": [
|
||||
{
|
||||
"attributes": {
|
||||
"id": "zero"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"module": "module.count[1]",
|
||||
"mode": "managed",
|
||||
"type": "test_instance",
|
||||
"name": "count",
|
||||
"provider": "provider[\"registry.terraform.io/hashicorp/test\"]",
|
||||
"instances": [
|
||||
{
|
||||
"attributes": {
|
||||
"id": "one"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,133 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 6,
|
||||
"warning_count": 1,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Missing required argument",
|
||||
"detail": "The argument \"source\" is required, but no definition was found.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 1,
|
||||
"column": 23,
|
||||
"byte": 22
|
||||
},
|
||||
"end": {
|
||||
"line": 1,
|
||||
"column": 23,
|
||||
"byte": 22
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Invalid module instance name",
|
||||
"detail": "A name must start with a letter or underscore and may contain only letters, digits, underscores, and dashes.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 1,
|
||||
"column": 8,
|
||||
"byte": 7
|
||||
},
|
||||
"end": {
|
||||
"line": 1,
|
||||
"column": 22,
|
||||
"byte": 21
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "warning",
|
||||
"summary": "Interpolation-only expressions are deprecated",
|
||||
"detail": "Terraform 0.11 and earlier required all non-constant expressions to be provided via interpolation syntax, but this pattern is now deprecated. To silence this warning, remove the \"${ sequence from the start and the }\" sequence from the end of this expression, leaving just the inner expression.\n\nTemplate interpolation syntax is still used to construct strings from expressions when the template includes multiple interpolation sequences or a mixture of literal strings and interpolations. This deprecation applies only to templates that consist entirely of a single interpolation sequence.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 5,
|
||||
"column": 12,
|
||||
"byte": 55
|
||||
},
|
||||
"end": {
|
||||
"line": 5,
|
||||
"column": 31,
|
||||
"byte": 74
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Variables not allowed",
|
||||
"detail": "Variables may not be used here.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 5,
|
||||
"column": 15,
|
||||
"byte": 58
|
||||
},
|
||||
"end": {
|
||||
"line": 5,
|
||||
"column": 18,
|
||||
"byte": 61
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Unsuitable value type",
|
||||
"detail": "Unsuitable value: value must be known",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 5,
|
||||
"column": 12,
|
||||
"byte": 55
|
||||
},
|
||||
"end": {
|
||||
"line": 5,
|
||||
"column": 31,
|
||||
"byte": 74
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Module not installed",
|
||||
"detail": "This module is not yet installed. Run \"terraform init\" to install all modules required by this configuration.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 4,
|
||||
"column": 1,
|
||||
"byte": 27
|
||||
},
|
||||
"end": {
|
||||
"line": 4,
|
||||
"column": 15,
|
||||
"byte": 41
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Module not installed",
|
||||
"detail": "This module is not yet installed. Run \"terraform init\" to install all modules required by this configuration.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/incorrectmodulename/main.tf",
|
||||
"start": {
|
||||
"line": 1,
|
||||
"column": 1,
|
||||
"byte": 0
|
||||
},
|
||||
"end": {
|
||||
"line": 1,
|
||||
"column": 22,
|
||||
"byte": 21
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,43 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 2,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Variables not allowed",
|
||||
"detail": "Variables may not be used here.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/interpolation/main.tf",
|
||||
"start": {
|
||||
"line": 6,
|
||||
"column": 16,
|
||||
"byte": 122
|
||||
},
|
||||
"end": {
|
||||
"line": 6,
|
||||
"column": 19,
|
||||
"byte": 125
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Invalid expression",
|
||||
"detail": "A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/interpolation/main.tf",
|
||||
"start": {
|
||||
"line": 10,
|
||||
"column": 17,
|
||||
"byte": 197
|
||||
},
|
||||
"end": {
|
||||
"line": 10,
|
||||
"column": 44,
|
||||
"byte": 224
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@@ -0,0 +1,6 @@
{
  "valid": true,
  "error_count": 0,
  "warning_count": 0,
  "diagnostics": []
}
@ -0,0 +1,25 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 1,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Invalid reference",
|
||||
"detail": "A reference to a resource type must be followed by at least one attribute access, specifying the resource name.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/missing_quote/main.tf",
|
||||
"start": {
|
||||
"line": 6,
|
||||
"column": 14,
|
||||
"byte": 110
|
||||
},
|
||||
"end": {
|
||||
"line": 6,
|
||||
"column": 18,
|
||||
"byte": 114
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,43 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 1,
|
||||
"warning_count": 1,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "warning",
|
||||
"summary": "Interpolation-only expressions are deprecated",
|
||||
"detail": "Terraform 0.11 and earlier required all non-constant expressions to be provided via interpolation syntax, but this pattern is now deprecated. To silence this warning, remove the \"${ sequence from the start and the }\" sequence from the end of this expression, leaving just the inner expression.\n\nTemplate interpolation syntax is still used to construct strings from expressions when the template includes multiple interpolation sequences or a mixture of literal strings and interpolations. This deprecation applies only to templates that consist entirely of a single interpolation sequence.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/missing_var/main.tf",
|
||||
"start": {
|
||||
"line": 6,
|
||||
"column": 21,
|
||||
"byte": 117
|
||||
},
|
||||
"end": {
|
||||
"line": 6,
|
||||
"column": 41,
|
||||
"byte": 137
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Reference to undeclared input variable",
|
||||
"detail": "An input variable with the name \"description\" has not been declared. This variable can be declared with a variable \"description\" {} block.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/missing_var/main.tf",
|
||||
"start": {
|
||||
"line": 6,
|
||||
"column": 24,
|
||||
"byte": 120
|
||||
},
|
||||
"end": {
|
||||
"line": 6,
|
||||
"column": 39,
|
||||
"byte": 135
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,43 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 2,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Duplicate module call",
|
||||
"detail": "A module call named \"multi_module\" was already defined at testdata/validate-invalid/multiple_modules/main.tf:1,1-22. Module calls must have unique names within a module.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/multiple_modules/main.tf",
|
||||
"start": {
|
||||
"line": 5,
|
||||
"column": 1,
|
||||
"byte": 46
|
||||
},
|
||||
"end": {
|
||||
"line": 5,
|
||||
"column": 22,
|
||||
"byte": 67
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Module not installed",
|
||||
"detail": "This module is not yet installed. Run \"terraform init\" to install all modules required by this configuration.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/multiple_modules/main.tf",
|
||||
"start": {
|
||||
"line": 5,
|
||||
"column": 1,
|
||||
"byte": 46
|
||||
},
|
||||
"end": {
|
||||
"line": 5,
|
||||
"column": 22,
|
||||
"byte": 67
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,25 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 1,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Duplicate provider configuration",
|
||||
"detail": "A default (non-aliased) provider configuration for \"aws\" was already given at testdata/validate-invalid/multiple_providers/main.tf:1,1-15. If multiple configurations are required, set the \"alias\" argument for alternative configurations.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/multiple_providers/main.tf",
|
||||
"start": {
|
||||
"line": 7,
|
||||
"column": 1,
|
||||
"byte": 85
|
||||
},
|
||||
"end": {
|
||||
"line": 7,
|
||||
"column": 15,
|
||||
"byte": 99
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,25 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 1,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Duplicate resource \"aws_instance\" configuration",
|
||||
"detail": "A aws_instance resource named \"web\" was already declared at testdata/validate-invalid/multiple_resources/main.tf:1,1-30. Resource names must be unique per type in each module.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/multiple_resources/main.tf",
|
||||
"start": {
|
||||
"line": 4,
|
||||
"column": 1,
|
||||
"byte": 35
|
||||
},
|
||||
"end": {
|
||||
"line": 4,
|
||||
"column": 30,
|
||||
"byte": 64
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,25 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 1,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Unsupported block type",
|
||||
"detail": "Blocks of type \"resorce\" are not expected here. Did you mean \"resource\"?",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/main.tf",
|
||||
"start": {
|
||||
"line": 1,
|
||||
"column": 1,
|
||||
"byte": 0
|
||||
},
|
||||
"end": {
|
||||
"line": 1,
|
||||
"column": 8,
|
||||
"byte": 7
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,43 @@
|
|||
{
|
||||
"valid": false,
|
||||
"error_count": 2,
|
||||
"warning_count": 0,
|
||||
"diagnostics": [
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Missing required argument",
|
||||
"detail": "The argument \"value\" is required, but no definition was found.",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/outputs/main.tf",
|
||||
"start": {
|
||||
"line": 1,
|
||||
"column": 18,
|
||||
"byte": 17
|
||||
},
|
||||
"end": {
|
||||
"line": 1,
|
||||
"column": 18,
|
||||
"byte": 17
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"severity": "error",
|
||||
"summary": "Unsupported argument",
|
||||
"detail": "An argument named \"values\" is not expected here. Did you mean \"value\"?",
|
||||
"range": {
|
||||
"filename": "testdata/validate-invalid/outputs/main.tf",
|
||||
"start": {
|
||||
"line": 2,
|
||||
"column": 3,
|
||||
"byte": 21
|
||||
},
|
||||
"end": {
|
||||
"line": 2,
|
||||
"column": 9,
|
||||
"byte": 27
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
|
@@ -0,0 +1,6 @@
{
  "valid": true,
  "error_count": 0,
  "warning_count": 0,
  "diagnostics": []
}
@@ -1,10 +1,14 @@
package command

import (
	"encoding/json"
	"io/ioutil"
	"os"
	"path"
	"strings"
	"testing"

	"github.com/google/go-cmp/cmp"
	"github.com/mitchellh/cli"
	"github.com/zclconf/go-cty/cty"

@@ -28,6 +32,7 @@ func setupTest(fixturepath string, args ...string) (*cli.MockUi, int) {
				Attributes: map[string]*configschema.Attribute{
					"device_index": {Type: cty.String, Optional: true},
					"description":  {Type: cty.String, Optional: true},
					"name":         {Type: cty.String, Optional: true},
				},
			},
		},
@@ -83,29 +88,25 @@ func TestValidateFailingCommand(t *testing.T) {
}

func TestValidateFailingCommandMissingQuote(t *testing.T) {
	// FIXME: Re-enable once we've updated core for new data structures
	t.Skip("test temporarily disabled until deep validate supports new config structures")

	ui, code := setupTest("validate-invalid/missing_quote")

	if code != 1 {
		t.Fatalf("Should have failed: %d\n\n%s", code, ui.ErrorWriter.String())
	}
	if !strings.HasSuffix(strings.TrimSpace(ui.ErrorWriter.String()), "IDENT test") {
		t.Fatalf("Should have failed: %d\n\n'%s'", code, ui.ErrorWriter.String())
	wantError := "Error: Invalid reference"
	if !strings.Contains(ui.ErrorWriter.String(), wantError) {
		t.Fatalf("Missing error string %q\n\n'%s'", wantError, ui.ErrorWriter.String())
	}
}

func TestValidateFailingCommandMissingVariable(t *testing.T) {
	// FIXME: Re-enable once we've updated core for new data structures
	t.Skip("test temporarily disabled until deep validate supports new config structures")

	ui, code := setupTest("validate-invalid/missing_var")
	if code != 1 {
		t.Fatalf("Should have failed: %d\n\n%s", code, ui.ErrorWriter.String())
	}
	if !strings.HasSuffix(strings.TrimSpace(ui.ErrorWriter.String()), "config: unknown variable referenced: 'description'; define it with a 'variable' block") {
		t.Fatalf("Should have failed: %d\n\n'%s'", code, ui.ErrorWriter.String())
	wantError := "Error: Reference to undeclared input variable"
	if !strings.Contains(ui.ErrorWriter.String(), wantError) {
		t.Fatalf("Missing error string %q\n\n'%s'", wantError, ui.ErrorWriter.String())
	}
}
@@ -197,3 +198,65 @@ func TestMissingDefinedVar(t *testing.T) {
		t.Fatalf("Should have passed: %d\n\n%s", code, ui.ErrorWriter.String())
	}
}

func TestValidate_json(t *testing.T) {
	tests := []struct {
		path  string
		valid bool
	}{
		{"validate-valid", true},
		{"validate-invalid", false},
		{"validate-invalid/missing_quote", false},
		{"validate-invalid/missing_var", false},
		{"validate-invalid/multiple_providers", false},
		{"validate-invalid/multiple_modules", false},
		{"validate-invalid/multiple_resources", false},
		{"validate-invalid/outputs", false},
		{"validate-invalid/incorrectmodulename", false},
		{"validate-invalid/interpolation", false},
		{"validate-invalid/missing_defined_var", true},
	}

	for _, tc := range tests {
		t.Run(tc.path, func(t *testing.T) {
			var want, got map[string]interface{}

			wantFile, err := os.Open(path.Join(testFixturePath(tc.path), "output.json"))
			if err != nil {
				t.Fatalf("failed to open output file: %s", err)
			}
			defer wantFile.Close()
			wantBytes, err := ioutil.ReadAll(wantFile)
			if err != nil {
				t.Fatalf("failed to read output file: %s", err)
			}
			err = json.Unmarshal([]byte(wantBytes), &want)
			if err != nil {
				t.Fatalf("failed to unmarshal expected JSON: %s", err)
			}

			ui, code := setupTest(tc.path, "-json")

			gotString := ui.OutputWriter.String()
			err = json.Unmarshal([]byte(gotString), &got)
			if err != nil {
				t.Fatalf("failed to unmarshal actual JSON: %s", err)
			}

			if !cmp.Equal(got, want) {
				t.Errorf("wrong output:\n %v\n", cmp.Diff(got, want))
				t.Errorf("raw output:\n%s\n", gotString)
			}

			if tc.valid && code != 0 {
				t.Errorf("wrong exit code: want 0, got %d", code)
			} else if !tc.valid && code != 1 {
				t.Errorf("wrong exit code: want 1, got %d", code)
			}

			if errorOutput := ui.ErrorWriter.String(); errorOutput != "" {
				t.Errorf("unexpected error output:\n%s", errorOutput)
			}
		})
	}
}
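For readers tracing the -json output format exercised by TestValidate_json, here is a small illustrative sketch (not part of this change) of decoding that output into a typed structure instead of map[string]interface{}; the field names mirror the output.json fixtures added in this commit, and the inline JSON value is taken from one of them.

package main

import (
	"encoding/json"
	"fmt"
)

// validateOutput mirrors the shape of "terraform validate -json" output
// as seen in the output.json fixture files.
type validateOutput struct {
	Valid        bool `json:"valid"`
	ErrorCount   int  `json:"error_count"`
	WarningCount int  `json:"warning_count"`
	Diagnostics  []struct {
		Severity string `json:"severity"`
		Summary  string `json:"summary"`
		Detail   string `json:"detail"`
	} `json:"diagnostics"`
}

func main() {
	raw := `{"valid": false, "error_count": 1, "warning_count": 0,
	  "diagnostics": [{"severity": "error", "summary": "Unsupported block type",
	  "detail": "Blocks of type \"resorce\" are not expected here. Did you mean \"resource\"?"}]}`

	var out validateOutput
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		panic(err)
	}
	fmt.Println(out.Valid, out.ErrorCount, out.Diagnostics[0].Summary)
}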
@@ -2,8 +2,6 @@ package webbrowser

import (
	"github.com/pkg/browser"
	"os/exec"
	"strings"
)

// NewNativeLauncher creates and returns a Launcher that will attempt to interact

@@ -15,18 +13,6 @@ func NewNativeLauncher() Launcher {

type nativeLauncher struct{}

func hasProgram(name string) bool {
	_, err := exec.LookPath(name)
	return err == nil
}

func (l nativeLauncher) OpenURL(url string) error {
	// Windows Subsystem for Linux (bash for Windows) doesn't have xdg-open available
	// but you can execute cmd.exe from there; try to identify it
	if !hasProgram("xdg-open") && hasProgram("cmd.exe") {
		r := strings.NewReplacer("&", "^&")
		exec.Command("cmd.exe", "/c", "start", r.Replace(url)).Run()
	}

	return browser.OpenURL(url)
}
go.mod
@@ -64,7 +64,7 @@ require (
	github.com/hashicorp/go-uuid v1.0.1
	github.com/hashicorp/go-version v1.2.0
	github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f
	github.com/hashicorp/hcl/v2 v2.7.2
	github.com/hashicorp/hcl/v2 v2.8.0
	github.com/hashicorp/memberlist v0.1.0 // indirect
	github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb // indirect
	github.com/hashicorp/terraform-config-inspect v0.0.0-20191212124732-c6ae6269b9d7
@@ -97,7 +97,7 @@ require (
	github.com/nu7hatch/gouuid v0.0.0-20131221200532-179d4d0c4d8d // indirect
	github.com/packer-community/winrmcp v0.0.0-20180921211025-c76d91c1e7db
	github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c // indirect
	github.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4
	github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23
	github.com/pkg/errors v0.9.1
	github.com/posener/complete v1.2.1
	github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829 // indirect
@@ -130,7 +130,6 @@ require (
	google.golang.org/grpc v1.31.1
	google.golang.org/protobuf v1.25.0
	gopkg.in/ini.v1 v1.42.0 // indirect
	gopkg.in/yaml.v2 v2.3.0
	k8s.io/api v0.0.0-20190620084959-7cf5895f2711
	k8s.io/apimachinery v0.0.0-20190913080033-27d36303b655
	k8s.io/client-go v10.0.0+incompatible
go.sum
@@ -345,8 +345,8 @@ github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ
github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f h1:UdxlrJz4JOnY8W+DbLISwf2B8WXEolNRA8BGCwI9jws=
github.com/hashicorp/hcl v0.0.0-20170504190234-a4b07c25de5f/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
github.com/hashicorp/hcl/v2 v2.0.0/go.mod h1:oVVDG71tEinNGYCxinCYadcmKU9bglqW9pV3txagJ90=
github.com/hashicorp/hcl/v2 v2.7.2 h1:SpE9BfBb/nFxXRZvvKINKeQiGpyj6d0hhgXVqEtLGD4=
github.com/hashicorp/hcl/v2 v2.7.2/go.mod h1:bQTN5mpo+jewjJgh8jr0JUguIi7qPHUF6yIfAEN3jqY=
github.com/hashicorp/hcl/v2 v2.8.0 h1:iHLEAsNDp3N2MtqroP1wf0nF/zB2+McHN5YCzwqIm80=
github.com/hashicorp/hcl/v2 v2.8.0/go.mod h1:bQTN5mpo+jewjJgh8jr0JUguIi7qPHUF6yIfAEN3jqY=
github.com/hashicorp/memberlist v0.1.0 h1:qSsCiC0WYD39lbSitKNt40e30uorm2Ss/d4JGU1hzH8=
github.com/hashicorp/memberlist v0.1.0/go.mod h1:ncdBp14cuox2iFOq3kDiquKU6fqsTBc3W6JvZwjxxsE=
github.com/hashicorp/serf v0.0.0-20160124182025-e4ec8cc423bb h1:ZbgmOQt8DOg796figP87/EFCVx2v2h9yRvwHF/zceX4=
@@ -494,6 +494,8 @@ github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FI
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4 h1:49lOXmGaUpV9Fz3gd7TFZY106KVlPVa5jcYD1gaQf98=
github.com/pkg/browser v0.0.0-20180916011732-0a3d74bf9ce4/go.mod h1:4OwLy04Bl9Ef3GJJCoec+30X3LQs/0/m4HFRt/2LUSA=
github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23 h1:dofHuld+js7eKSemxqTVIo8yRlpRw+H1SdpzZxWruBc=
github.com/pkg/browser v0.0.0-20201207095918-0426ae3fba23/go.mod h1:N6UoU20jOqggOuDwUaBQpluzLNDqif3kq9z2wpdYEfQ=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
@ -0,0 +1,253 @@
|
|||
package getproviders
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path"
|
||||
|
||||
"github.com/hashicorp/go-retryablehttp"
|
||||
svchost "github.com/hashicorp/terraform-svchost"
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
)
|
||||
|
||||
// MissingProviderSuggestion takes a provider address that failed installation
|
||||
// due to the remote registry reporting that it didn't exist, and attempts
|
||||
// to find another provider that the user might have meant to select.
|
||||
//
|
||||
// If the result is equal to the given address then that indicates that there
|
||||
// is no suggested alternative to offer, either because the function
|
||||
// successfully determined there is no recorded alternative or because the
|
||||
// lookup failed somehow. We don't consider a failure to find a suggestion
|
||||
// as an installation failure, because the caller should already be reporting
|
||||
// that the provider didn't exist anyway and this is only extra context for
|
||||
// that error message.
|
||||
//
|
||||
// The result of this is a best effort, so any UI presenting it should be
|
||||
// careful to give it only as a possibility and not necessarily a suitable
|
||||
// replacement for the given provider.
|
||||
//
|
||||
// In practice today this function only knows how to suggest alternatives for
|
||||
// "default" providers, which is to say ones that are in the hashicorp
|
||||
// namespace in the Terraform registry. It will always return no result for
|
||||
// any other provider. That might change in future if we introduce other ways
|
||||
// to discover provider suggestions.
|
||||
//
|
||||
// If the given context is cancelled then this function might not return a
|
||||
// renaming suggestion even if one would've been available for a completed
|
||||
// request.
|
||||
func MissingProviderSuggestion(ctx context.Context, addr addrs.Provider, source Source) addrs.Provider {
|
||||
if !addr.IsDefault() {
|
||||
return addr
|
||||
}
|
||||
|
||||
// Our strategy here, for a default provider, is to use the default
|
||||
// registry's special API for looking up "legacy" providers and try looking
|
||||
// for a legacy provider whose type name matches the type of the given
|
||||
// provider. This should then find a suitable answer for any provider
|
||||
// that was originally auto-installable in v0.12 and earlier but moved
|
||||
// into a non-default namespace as part of introducing the hierarchical
|
||||
// provider namespace.
|
||||
//
|
||||
// To achieve that, we need to find the direct registry client in
|
||||
// particular from the given source, because that is the only Source
|
||||
// implementation that can actually handle a legacy provider lookup.
|
||||
regSource := findLegacyProviderLookupSource(addr.Hostname, source)
|
||||
if regSource == nil {
|
||||
// If there's no direct registry source in the installation config
|
||||
// then we can't provide a renaming suggestion.
|
||||
return addr
|
||||
}
|
||||
|
||||
defaultNS, redirectNS, err := regSource.lookupLegacyProviderNamespace(ctx, addr.Hostname, addr.Type)
|
||||
if err != nil {
|
||||
return addr
|
||||
}
|
||||
|
||||
switch {
|
||||
case redirectNS != "":
|
||||
return addrs.Provider{
|
||||
Hostname: addr.Hostname,
|
||||
Namespace: redirectNS,
|
||||
Type: addr.Type,
|
||||
}
|
||||
default:
|
||||
return addrs.Provider{
|
||||
Hostname: addr.Hostname,
|
||||
Namespace: defaultNS,
|
||||
Type: addr.Type,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
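As a quick illustration of how a caller might consume MissingProviderSuggestion, here is a hypothetical helper (a sketch under the assumption that it lives alongside this code in the getproviders package; it is not part of this change): because the function returns the input address when it has no suggestion, the caller only mentions a suggestion when the result differs.

// reportMissingProvider is a hypothetical example of using
// MissingProviderSuggestion: mention the suggestion only when it differs
// from the address that failed to install.
func reportMissingProvider(ctx context.Context, addr addrs.Provider, source Source) string {
	msg := fmt.Sprintf("provider %s was not found in the registry", addr)
	if suggestion := MissingProviderSuggestion(ctx, addr, source); suggestion != addr {
		msg += fmt.Sprintf(" (did you mean %s?)", suggestion)
	}
	return msg
}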
||||
// findLegacyProviderLookupSource tries to find a *RegistrySource that can talk
|
||||
// to the given registry host in the given Source. It might be given directly,
|
||||
// or it might be given indirectly via a MultiSource where the selector
|
||||
// includes a wildcard for registry.terraform.io.
|
||||
//
|
||||
// Returns nil if the given source does not have any configured way to talk
|
||||
// directly to the given host.
|
||||
//
|
||||
// If the given source contains multiple sources that can talk to the given
|
||||
// host directly, the first one in the sequence takes preference. In practice
|
||||
// it's pointless to have two direct installation sources that match the same
|
||||
// hostname anyway, so this shouldn't arise in normal use.
|
||||
func findLegacyProviderLookupSource(host svchost.Hostname, source Source) *RegistrySource {
|
||||
switch source := source.(type) {
|
||||
|
||||
case *RegistrySource:
|
||||
// Easy case: the source is a registry source directly, and so we'll
|
||||
// just use it.
|
||||
return source
|
||||
|
||||
case *MemoizeSource:
|
||||
// Also easy: the source is a memoize wrapper, so defer to its
|
||||
// underlying source.
|
||||
return findLegacyProviderLookupSource(host, source.underlying)
|
||||
|
||||
case MultiSource:
|
||||
// Trickier case: if it's a multisource then we need to scan over
|
||||
// its selectors until we find one that is a *RegistrySource _and_
|
||||
// that is configured to accept arbitrary providers from the
|
||||
// given hostname.
|
||||
|
||||
// For our matching purposes we'll use an address that would not be
|
||||
// valid as a real provider FQN and thus can only match a selector
|
||||
// that has no filters at all or a selector that wildcards everything
|
||||
// except the hostname, like "registry.terraform.io/*/*"
|
||||
matchAddr := addrs.Provider{
|
||||
Hostname: host,
|
||||
// Other fields are intentionally left empty, to make this invalid
|
||||
// as a specific provider address.
|
||||
}
|
||||
|
||||
for _, selector := range source {
|
||||
// If this source has suitable matching patterns to install from
|
||||
// the given hostname then we'll recursively search inside it
|
||||
// for *RegistrySource objects.
|
||||
if selector.CanHandleProvider(matchAddr) {
|
||||
ret := findLegacyProviderLookupSource(host, selector.Source)
|
||||
if ret != nil {
|
||||
return ret
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If we get here then there were no selectors that are both configured
|
||||
// to handle modules from the given hostname and that are registry
|
||||
// sources, so we fail.
|
||||
return nil
|
||||
|
||||
default:
|
||||
// This source cannot be and cannot contain a *RegistrySource, so
|
||||
// we fail.
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// lookupLegacyProviderNamespace is a special method available only on
|
||||
// RegistrySource which can deal with legacy provider addresses that contain
|
||||
// only a type and leave the namespace implied.
|
||||
//
|
||||
// It asks the registry at the given hostname to provide a default namespace
|
||||
// for the given provider type, which can be combined with the given hostname
|
||||
// and type name to produce a fully-qualified provider address.
|
||||
//
|
||||
// Not all unqualified type names can be resolved to a default namespace. If
|
||||
// the request fails, this method returns an error describing the failure.
|
||||
//
|
||||
// This method exists only to allow compatibility with unqualified names
|
||||
// in older configurations. New configurations should be written so as not to
|
||||
// depend on it, and this fallback mechanism will likely be removed altogether
|
||||
// in a future Terraform version.
|
||||
func (s *RegistrySource) lookupLegacyProviderNamespace(ctx context.Context, hostname svchost.Hostname, typeName string) (string, string, error) {
|
||||
client, err := s.registryClient(hostname)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
return client.legacyProviderDefaultNamespace(ctx, typeName)
|
||||
}
|
||||
|
||||
// legacyProviderDefaultNamespace returns the raw address strings produced by
|
||||
// the registry when asked about the given unqualified provider type name.
|
||||
// The returned namespace string is taken verbatim from the registry's response.
|
||||
//
|
||||
// This method exists only to allow compatibility with unqualified names
|
||||
// in older configurations. New configurations should be written so as not to
|
||||
// depend on it.
|
||||
func (c *registryClient) legacyProviderDefaultNamespace(ctx context.Context, typeName string) (string, string, error) {
|
||||
endpointPath, err := url.Parse(path.Join("-", typeName, "versions"))
|
||||
if err != nil {
|
||||
// Should never happen because we're constructing this from
|
||||
// already-validated components.
|
||||
return "", "", err
|
||||
}
|
||||
endpointURL := c.baseURL.ResolveReference(endpointPath)
|
||||
|
||||
req, err := retryablehttp.NewRequest("GET", endpointURL.String(), nil)
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
req = req.WithContext(ctx)
|
||||
c.addHeadersToRequest(req.Request)
|
||||
|
||||
// This is just to give us something to return in error messages. It's
|
||||
// not a proper provider address.
|
||||
placeholderProviderAddr := addrs.NewLegacyProvider(typeName)
|
||||
|
||||
resp, err := c.httpClient.Do(req)
|
||||
if err != nil {
|
||||
return "", "", c.errQueryFailed(placeholderProviderAddr, err)
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
switch resp.StatusCode {
|
||||
case http.StatusOK:
|
||||
// Great!
|
||||
case http.StatusNotFound:
|
||||
return "", "", ErrProviderNotFound{
|
||||
Provider: placeholderProviderAddr,
|
||||
}
|
||||
case http.StatusUnauthorized, http.StatusForbidden:
|
||||
return "", "", c.errUnauthorized(placeholderProviderAddr.Hostname)
|
||||
default:
|
||||
return "", "", c.errQueryFailed(placeholderProviderAddr, errors.New(resp.Status))
|
||||
}
|
||||
|
||||
type ResponseBody struct {
|
||||
Id string `json:"id"`
|
||||
MovedTo string `json:"moved_to"`
|
||||
}
|
||||
var body ResponseBody
|
||||
|
||||
dec := json.NewDecoder(resp.Body)
|
||||
if err := dec.Decode(&body); err != nil {
|
||||
return "", "", c.errQueryFailed(placeholderProviderAddr, err)
|
||||
}
|
||||
|
||||
provider, diags := addrs.ParseProviderSourceString(body.Id)
|
||||
if diags.HasErrors() {
|
||||
return "", "", fmt.Errorf("Error parsing provider ID from Registry: %s", diags.Err())
|
||||
}
|
||||
|
||||
if provider.Type != typeName {
|
||||
return "", "", fmt.Errorf("Registry returned provider with type %q, expected %q", provider.Type, typeName)
|
||||
}
|
||||
|
||||
var movedTo addrs.Provider
|
||||
if body.MovedTo != "" {
|
||||
movedTo, diags = addrs.ParseProviderSourceString(body.MovedTo)
|
||||
if diags.HasErrors() {
|
||||
return "", "", fmt.Errorf("Error parsing provider ID from Registry: %s", diags.Err())
|
||||
}
|
||||
|
||||
if movedTo.Type != typeName {
|
||||
return "", "", fmt.Errorf("Registry returned provider with type %q, expected %q", movedTo.Type, typeName)
|
||||
}
|
||||
}
|
||||
|
||||
return provider.Namespace, movedTo.Namespace, nil
|
||||
}
|
|
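To make the decoding above concrete, the response body this method expects has the two fields named in the ResponseBody struct. The values below are illustrative only (they follow the "-/moved maps to hashicorp/moved, redirected to acme/moved" scenario described in the tests that follow, not a real registry response):

// Illustrative only: a response of this shape would yield
// defaultNS "hashicorp" and redirectNS "acme" from the code above.
const exampleLegacyLookupResponse = `{
  "id": "hashicorp/moved",
  "moved_to": "acme/moved"
}`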
@ -0,0 +1,128 @@
|
|||
package getproviders
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
svchost "github.com/hashicorp/terraform-svchost"
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
)
|
||||
|
||||
func TestMissingProviderSuggestion(t *testing.T) {
|
||||
// Most of these test cases rely on specific "magic" provider addresses
|
||||
// that are implemented by the fake registry source returned by
|
||||
// testRegistrySource. Refer to that function for more details on how
|
||||
// they work.
|
||||
|
||||
t.Run("happy path", func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
source, _, close := testRegistrySource(t)
|
||||
defer close()
|
||||
|
||||
// testRegistrySource handles -/legacy as a valid legacy provider
|
||||
// lookup mapping to legacycorp/legacy.
|
||||
got := MissingProviderSuggestion(
|
||||
ctx,
|
||||
addrs.NewDefaultProvider("legacy"),
|
||||
source,
|
||||
)
|
||||
|
||||
want := addrs.Provider{
|
||||
Hostname: defaultRegistryHost,
|
||||
Namespace: "legacycorp",
|
||||
Type: "legacy",
|
||||
}
|
||||
if got != want {
|
||||
t.Errorf("wrong result\ngot: %s\nwant: %s", got, want)
|
||||
}
|
||||
})
|
||||
t.Run("provider moved", func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
source, _, close := testRegistrySource(t)
|
||||
defer close()
|
||||
|
||||
// testRegistrySource handles -/moved as a valid legacy provider
|
||||
// lookup mapping to hashicorp/moved but with an additional "redirect"
|
||||
// to acme/moved. This mimics how for some providers there is both
|
||||
// a copy under terraform-providers for v0.12 compatibility _and_ a
|
||||
// copy in some other namespace for v0.13 or later to use. Our naming
|
||||
// suggestions ignore the v0.12-compatible one and suggest the
|
||||
// other one.
|
||||
got := MissingProviderSuggestion(
|
||||
ctx,
|
||||
addrs.NewDefaultProvider("moved"),
|
||||
source,
|
||||
)
|
||||
|
||||
want := addrs.Provider{
|
||||
Hostname: defaultRegistryHost,
|
||||
Namespace: "acme",
|
||||
Type: "moved",
|
||||
}
|
||||
if got != want {
|
||||
t.Errorf("wrong result\ngot: %s\nwant: %s", got, want)
|
||||
}
|
||||
})
|
||||
t.Run("invalid response", func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
source, _, close := testRegistrySource(t)
|
||||
defer close()
|
||||
|
||||
// testRegistrySource handles -/invalid by returning an invalid
|
||||
// provider address, which MissingProviderSuggestion should reject
|
||||
// and behave as if there was no suggestion available.
|
||||
want := addrs.NewDefaultProvider("invalid")
|
||||
got := MissingProviderSuggestion(
|
||||
ctx,
|
||||
want,
|
||||
source,
|
||||
)
|
||||
if got != want {
|
||||
t.Errorf("wrong result\ngot: %s\nwant: %s", got, want)
|
||||
}
|
||||
})
|
||||
t.Run("another registry", func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
source, _, close := testRegistrySource(t)
|
||||
defer close()
|
||||
|
||||
// Because this provider address isn't on registry.terraform.io,
|
||||
// MissingProviderSuggestion won't even attempt to make a suggestion
|
||||
// for it.
|
||||
want := addrs.Provider{
|
||||
Hostname: svchost.Hostname("example.com"),
|
||||
Namespace: "whatever",
|
||||
Type: "foo",
|
||||
}
|
||||
got := MissingProviderSuggestion(
|
||||
ctx,
|
||||
want,
|
||||
source,
|
||||
)
|
||||
if got != want {
|
||||
t.Errorf("wrong result\ngot: %s\nwant: %s", got, want)
|
||||
}
|
||||
})
|
||||
t.Run("another namespace", func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
source, _, close := testRegistrySource(t)
|
||||
defer close()
|
||||
|
||||
// Because this provider address isn't in
|
||||
// registry.terraform.io/hashicorp/..., MissingProviderSuggestion won't
|
||||
// even attempt to make a suggestion for it.
|
||||
want := addrs.Provider{
|
||||
Hostname: defaultRegistryHost,
|
||||
Namespace: "whatever",
|
||||
Type: "foo",
|
||||
}
|
||||
got := MissingProviderSuggestion(
|
||||
ctx,
|
||||
want,
|
||||
source,
|
||||
)
|
||||
if got != want {
|
||||
t.Errorf("wrong result\ngot: %s\nwant: %s", got, want)
|
||||
}
|
||||
})
|
||||
}
|
|
@@ -65,6 +65,12 @@ func NewInstaller(targetDir *Dir, source getproviders.Source) *Installer {
	}
}

// ProviderSource returns the getproviders.Source that the installer would
// use for installing any new providers.
func (i *Installer) ProviderSource() getproviders.Source {
	return i.source
}

// SetGlobalCacheDir activates a second tier of caching for the receiving
// installer, with the given directory used as a read-through cache for
// installation operations that need to retrieve new packages.
@@ -3,13 +3,13 @@ package funcs
import (
	"errors"
	"fmt"
	"math/big"
	"sort"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
	"github.com/zclconf/go-cty/cty/function"
	"github.com/zclconf/go-cty/cty/function/stdlib"
	"github.com/zclconf/go-cty/cty/gocty"
)

var LengthFunc = function.New(&function.Spec{

@@ -70,6 +70,9 @@ var AllTrueFunc = function.New(&function.Spec{
		result := cty.True
		for it := args[0].ElementIterator(); it.Next(); {
			_, v := it.Element()
			if !v.IsKnown() {
				return cty.UnknownVal(cty.Bool), nil
			}
			if v.IsNull() {
				return cty.False, nil
			}

@@ -94,8 +97,13 @@ var AnyTrueFunc = function.New(&function.Spec{
	Type: function.StaticReturnType(cty.Bool),
	Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
		result := cty.False
		var hasUnknown bool
		for it := args[0].ElementIterator(); it.Next(); {
			_, v := it.Element()
			if !v.IsKnown() {
				hasUnknown = true
				continue
			}
			if v.IsNull() {
				continue
			}

@@ -104,6 +112,9 @@ var AnyTrueFunc = function.New(&function.Spec{
				return cty.True, nil
			}
		}
		if hasUnknown {
			return cty.UnknownVal(cty.Bool), nil
		}
		return result, nil
	},
})

@@ -393,27 +404,45 @@ var SumFunc = function.New(&function.Spec{
		arg := args[0].AsValueSlice()
		ty := args[0].Type()

		var i float64
		var s float64

		if !ty.IsListType() && !ty.IsSetType() && !ty.IsTupleType() {
			return cty.NilVal, function.NewArgErrorf(0, fmt.Sprintf("argument must be list, set, or tuple. Received %s", ty.FriendlyName()))
		}

		if !args[0].IsKnown() {
		if !args[0].IsWhollyKnown() {
			return cty.UnknownVal(cty.Number), nil
		}

		for _, v := range arg {
			if err := gocty.FromCtyValue(v, &i); err != nil {
				return cty.UnknownVal(cty.Number), function.NewArgErrorf(0, "argument must be list, set, or tuple of number values")
			} else {
				s += i
		// big.Float.Add can panic if the input values are opposing infinities,
		// so we must catch that here in order to remain within
		// the cty Function abstraction.
		defer func() {
			if r := recover(); r != nil {
				if _, ok := r.(big.ErrNaN); ok {
					ret = cty.NilVal
					err = fmt.Errorf("can't compute sum of opposing infinities")
				} else {
					// not a panic we recognize
					panic(r)
				}
			}
		}()

		s := arg[0]
		if s.IsNull() {
			return cty.NilVal, function.NewArgErrorf(0, "argument must be list, set, or tuple of number values")
		}
		for _, v := range arg[1:] {
			if v.IsNull() {
				return cty.NilVal, function.NewArgErrorf(0, "argument must be list, set, or tuple of number values")
			}
			v, err = convert.Convert(v, cty.Number)
			if err != nil {
				return cty.NilVal, function.NewArgErrorf(0, "argument must be list, set, or tuple of number values")
			}
			s = s.Add(v)
		}

		return cty.NumberFloatVal(s), nil
		return s, nil
	},
})
|
||||
|
|
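The defer/recover added to the sum implementation exists because math/big reports an invalid +Inf plus -Inf addition by panicking with big.ErrNaN rather than returning an error. A minimal standalone sketch of that behavior, independent of cty (illustrative only, using my own helper name addFloats):

package main

import (
	"fmt"
	"math"
	"math/big"
)

// addFloats adds two big.Float values, converting math/big's ErrNaN panic
// (raised for operations like +Inf + -Inf) into an ordinary error.
func addFloats(a, b *big.Float) (sum *big.Float, err error) {
	defer func() {
		if r := recover(); r != nil {
			if _, ok := r.(big.ErrNaN); ok {
				err = fmt.Errorf("can't compute sum of opposing infinities")
				return
			}
			panic(r) // not a panic we recognize
		}
	}()
	return new(big.Float).Add(a, b), nil
}

func main() {
	posInf := big.NewFloat(math.Inf(1))
	negInf := big.NewFloat(math.Inf(-1))

	if _, err := addFloats(posInf, negInf); err != nil {
		fmt.Println("error:", err) // error: can't compute sum of opposing infinities
	}
}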
|
@ -2,6 +2,7 @@ package funcs
|
|||
|
||||
import (
|
||||
"fmt"
|
||||
"math"
|
||||
"testing"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
@ -169,10 +170,28 @@ func TestAllTrue(t *testing.T) {
|
|||
cty.False,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{cty.True, cty.NullVal(cty.Bool)}),
|
||||
cty.False,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{cty.UnknownVal(cty.Bool)}),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
true,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
cty.UnknownVal(cty.Bool),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
}),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.UnknownVal(cty.List(cty.Bool)),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.NullVal(cty.List(cty.Bool)),
|
||||
|
@ -232,10 +251,36 @@ func TestAnyTrue(t *testing.T) {
|
|||
cty.True,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{cty.NullVal(cty.Bool), cty.True}),
|
||||
cty.True,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{cty.UnknownVal(cty.Bool)}),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
true,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
cty.UnknownVal(cty.Bool),
|
||||
cty.False,
|
||||
}),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
cty.UnknownVal(cty.Bool),
|
||||
cty.True,
|
||||
}),
|
||||
cty.True,
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.UnknownVal(cty.List(cty.Bool)),
|
||||
cty.UnknownVal(cty.Bool),
|
||||
false,
|
||||
},
|
||||
{
|
||||
cty.NullVal(cty.List(cty.Bool)),
|
||||
|
@ -952,7 +997,7 @@ func TestSum(t *testing.T) {
|
|||
tests := []struct {
|
||||
List cty.Value
|
||||
Want cty.Value
|
||||
Err bool
|
||||
Err string
|
||||
}{
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
|
@ -961,7 +1006,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberIntVal(3),
|
||||
}),
|
||||
cty.NumberIntVal(6),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
|
@ -972,7 +1017,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberIntVal(234),
|
||||
}),
|
||||
cty.NumberIntVal(66685532),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
|
@ -981,7 +1026,7 @@ func TestSum(t *testing.T) {
|
|||
cty.StringVal("c"),
|
||||
}),
|
||||
cty.UnknownVal(cty.String),
|
||||
true,
|
||||
"argument must be list, set, or tuple of number values",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
|
@ -990,7 +1035,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberIntVal(5),
|
||||
}),
|
||||
cty.NumberIntVal(-4),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
|
@ -999,7 +1044,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberFloatVal(5.7),
|
||||
}),
|
||||
cty.NumberFloatVal(35.3),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
|
@ -1008,12 +1053,20 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberFloatVal(-5.7),
|
||||
}),
|
||||
cty.NumberFloatVal(-35.3),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{cty.NullVal(cty.Number)}),
|
||||
cty.NilVal,
|
||||
true,
|
||||
"argument must be list, set, or tuple of number values",
|
||||
},
|
||||
{
|
||||
cty.ListVal([]cty.Value{
|
||||
cty.NumberIntVal(5),
|
||||
cty.NullVal(cty.Number),
|
||||
}),
|
||||
cty.NilVal,
|
||||
"argument must be list, set, or tuple of number values",
|
||||
},
|
||||
{
|
||||
cty.SetVal([]cty.Value{
|
||||
|
@ -1022,7 +1075,7 @@ func TestSum(t *testing.T) {
|
|||
cty.StringVal("c"),
|
||||
}),
|
||||
cty.UnknownVal(cty.String),
|
||||
true,
|
||||
"argument must be list, set, or tuple of number values",
|
||||
},
|
||||
{
|
||||
cty.SetVal([]cty.Value{
|
||||
|
@ -1031,7 +1084,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberIntVal(5),
|
||||
}),
|
||||
cty.NumberIntVal(-4),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.SetVal([]cty.Value{
|
||||
|
@ -1040,7 +1093,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberIntVal(30),
|
||||
}),
|
||||
cty.NumberIntVal(65),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.SetVal([]cty.Value{
|
||||
|
@ -1049,14 +1102,14 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberFloatVal(3),
|
||||
}),
|
||||
cty.NumberFloatVal(2354),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.SetVal([]cty.Value{
|
||||
cty.NumberFloatVal(2),
|
||||
}),
|
||||
cty.NumberFloatVal(2),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.SetVal([]cty.Value{
|
||||
|
@ -1067,7 +1120,7 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberFloatVal(-4),
|
||||
}),
|
||||
cty.NumberFloatVal(-199),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.TupleVal([]cty.Value{
|
||||
|
@ -1076,27 +1129,53 @@ func TestSum(t *testing.T) {
|
|||
cty.NumberIntVal(38),
|
||||
}),
|
||||
cty.UnknownVal(cty.String),
|
||||
true,
|
||||
"argument must be list, set, or tuple of number values",
|
||||
},
|
||||
{
|
||||
cty.NumberIntVal(12),
|
||||
cty.NilVal,
|
||||
true,
|
||||
"cannot sum noniterable",
|
||||
},
|
||||
{
|
||||
cty.ListValEmpty(cty.Number),
|
||||
cty.NilVal,
|
||||
true,
|
||||
"cannot sum an empty list",
|
||||
},
|
||||
{
|
||||
cty.MapVal(map[string]cty.Value{"hello": cty.True}),
|
||||
cty.NilVal,
|
||||
true,
|
||||
"argument must be list, set, or tuple. Received map of bool",
|
||||
},
|
||||
{
|
||||
cty.UnknownVal(cty.Number),
|
||||
cty.UnknownVal(cty.Number),
|
||||
false,
|
||||
"",
|
||||
},
|
||||
{
|
||||
cty.UnknownVal(cty.List(cty.Number)),
|
||||
cty.UnknownVal(cty.Number),
|
||||
"",
|
||||
},
|
||||
{ // known list containing unknown values
|
||||
cty.ListVal([]cty.Value{cty.UnknownVal(cty.Number)}),
|
||||
cty.UnknownVal(cty.Number),
|
||||
"",
|
||||
},
|
||||
{ // numbers too large to represent as float64
|
||||
cty.ListVal([]cty.Value{
|
||||
cty.MustParseNumberVal("1e+500"),
|
||||
cty.MustParseNumberVal("1e+500"),
|
||||
}),
|
||||
cty.MustParseNumberVal("2e+500"),
|
||||
"",
|
||||
},
|
||||
{ // edge case we have a special error handler for
|
||||
cty.ListVal([]cty.Value{
|
||||
cty.NumberFloatVal(math.Inf(1)),
|
||||
cty.NumberFloatVal(math.Inf(-1)),
|
||||
}),
|
||||
cty.NilVal,
|
||||
"can't compute sum of opposing infinities",
|
||||
},
|
||||
}
|
||||
|
||||
|
@ -1104,9 +1183,11 @@ func TestSum(t *testing.T) {
|
|||
t.Run(fmt.Sprintf("sum(%#v)", test.List), func(t *testing.T) {
|
||||
got, err := Sum(test.List)
|
||||
|
||||
if test.Err {
|
||||
if test.Err != "" {
|
||||
if err == nil {
|
||||
t.Fatal("succeeded; want error")
|
||||
} else if got, want := err.Error(), test.Err; got != want {
|
||||
t.Fatalf("wrong error\n got: %s\nwant: %s", got, want)
|
||||
}
|
||||
return
|
||||
} else if err != nil {
|
||||
|
|
|
@ -1,727 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
multierror "github.com/hashicorp/go-multierror"
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/configs/configschema"
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
"github.com/hashicorp/terraform/plans/objchange"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/provisioners"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// EvalApply is an EvalNode implementation that writes the diff to
|
||||
// the full diff.
|
||||
type EvalApply struct {
|
||||
Addr addrs.ResourceInstance
|
||||
Config *configs.Resource
|
||||
State **states.ResourceInstanceObject
|
||||
Change **plans.ResourceInstanceChange
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
Provider *providers.Interface
|
||||
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
|
||||
ProviderSchema **ProviderSchema
|
||||
Output **states.ResourceInstanceObject
|
||||
CreateNew *bool
|
||||
Error *error
|
||||
CreateBeforeDestroy bool
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalApply) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
change := *n.Change
|
||||
provider := *n.Provider
|
||||
state := *n.State
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
|
||||
if state == nil {
|
||||
state = &states.ResourceInstanceObject{}
|
||||
}
|
||||
|
||||
schema, _ := (*n.ProviderSchema).SchemaForResourceType(n.Addr.Resource.Mode, n.Addr.Resource.Type)
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
if n.CreateNew != nil {
|
||||
*n.CreateNew = (change.Action == plans.Create || change.Action.IsReplace())
|
||||
}
|
||||
|
||||
configVal := cty.NullVal(cty.DynamicPseudoType)
|
||||
if n.Config != nil {
|
||||
var configDiags tfdiags.Diagnostics
|
||||
forEach, _ := evaluateForEachExpression(n.Config.ForEach, ctx)
|
||||
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
|
||||
configVal, _, configDiags = ctx.EvaluateBlock(n.Config.Config, schema, nil, keyData)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
if !configVal.IsWhollyKnown() {
|
||||
diags = diags.Append(fmt.Errorf(
|
||||
"configuration for %s still contains unknown values during apply (this is a bug in Terraform; please report it!)",
|
||||
absAddr,
|
||||
))
|
||||
return diags
|
||||
}
|
||||
|
||||
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
|
||||
if n.ProviderMetas != nil {
|
||||
log.Printf("[DEBUG] EvalApply: ProviderMeta config value set")
|
||||
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
|
||||
// if the provider doesn't support this feature, throw an error
|
||||
if (*n.ProviderSchema).ProviderMeta == nil {
|
||||
log.Printf("[DEBUG] EvalApply: no ProviderMeta schema")
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
|
||||
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
|
||||
Subject: &m.ProviderRange,
|
||||
})
|
||||
} else {
|
||||
log.Printf("[DEBUG] EvalApply: ProviderMeta schema found")
|
||||
var configDiags tfdiags.Diagnostics
|
||||
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
log.Printf("[DEBUG] %s: applying the planned %s change", n.Addr.Absolute(ctx.Path()), change.Action)
|
||||
|
||||
// If our config, Before or After value contain any marked values,
|
||||
// ensure those are stripped out before sending
|
||||
// this to the provider
|
||||
unmarkedConfigVal, _ := configVal.UnmarkDeep()
|
||||
unmarkedBefore, beforePaths := change.Before.UnmarkDeepWithPaths()
|
||||
unmarkedAfter, afterPaths := change.After.UnmarkDeepWithPaths()
|
||||
|
||||
// If we have an Update action, our before and after values are equal,
|
||||
// and only differ on their sensitivity, the newVal is the after val
|
||||
// and we should not communicate with the provider. We do need to update
|
||||
// the state with this new value, to ensure the sensitivity change is
|
||||
// persisted.
|
||||
eqV := unmarkedBefore.Equals(unmarkedAfter)
|
||||
eq := eqV.IsKnown() && eqV.True()
|
||||
if change.Action == plans.Update && eq && !reflect.DeepEqual(beforePaths, afterPaths) {
|
||||
// Copy the previous state, changing only the value
|
||||
newState := &states.ResourceInstanceObject{
|
||||
CreateBeforeDestroy: state.CreateBeforeDestroy,
|
||||
Dependencies: state.Dependencies,
|
||||
Private: state.Private,
|
||||
Status: state.Status,
|
||||
Value: change.After,
|
||||
}
|
||||
|
||||
// Write the final state
|
||||
if n.Output != nil {
|
||||
*n.Output = newState
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
resp := provider.ApplyResourceChange(providers.ApplyResourceChangeRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
PriorState: unmarkedBefore,
|
||||
Config: unmarkedConfigVal,
|
||||
PlannedState: unmarkedAfter,
|
||||
PlannedPrivate: change.Private,
|
||||
ProviderMeta: metaConfigVal,
|
||||
})
|
||||
applyDiags := resp.Diagnostics
|
||||
if n.Config != nil {
|
||||
applyDiags = applyDiags.InConfigBody(n.Config.Config)
|
||||
}
|
||||
diags = diags.Append(applyDiags)
|
||||
|
||||
// Even if there are errors in the returned diagnostics, the provider may
|
||||
// have returned a _partial_ state for an object that already exists but
|
||||
// failed to fully configure, and so the remaining code must always run
|
||||
// to completion but must be defensive against the new value being
|
||||
// incomplete.
|
||||
newVal := resp.NewState
|
||||
|
||||
// If we have paths to mark, mark those on this new value
|
||||
if len(afterPaths) > 0 {
|
||||
newVal = newVal.MarkWithPaths(afterPaths)
|
||||
}
|
||||
|
||||
if newVal == cty.NilVal {
|
||||
// Providers are supposed to return a partial new value even when errors
|
||||
// occur, but sometimes they don't and so in that case we'll patch that up
|
||||
// by just using the prior state, so we'll at least keep track of the
|
||||
// object for the user to retry.
|
||||
newVal = change.Before
|
||||
|
||||
// As a special case, we'll set the new value to null if it looks like
|
||||
// we were trying to execute a delete, because the provider in this case
|
||||
// probably left the newVal unset intending it to be interpreted as "null".
|
||||
if change.After.IsNull() {
|
||||
newVal = cty.NullVal(schema.ImpliedType())
|
||||
}
|
||||
|
||||
// Ideally we'd produce an error or warning here if newVal is nil and
|
||||
// there are no errors in diags, because that indicates a buggy
|
||||
// provider not properly reporting its result, but unfortunately many
|
||||
// of our historical test mocks behave in this way and so producing
|
||||
// a diagnostic here fails hundreds of tests. Instead, we must just
|
||||
// silently retain the old value for now. Returning a nil value with
|
||||
// no errors is still always considered a bug in the provider though,
|
||||
// and should be fixed for any "real" providers that do it.
|
||||
}
|
||||
|
||||
var conformDiags tfdiags.Diagnostics
|
||||
for _, err := range newVal.Type().TestConformance(schema.ImpliedType()) {
|
||||
conformDiags = conformDiags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid object",
|
||||
fmt.Sprintf(
|
||||
"Provider %q produced an invalid value after apply for %s. The result cannot not be saved in the Terraform state.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
|
||||
),
|
||||
))
|
||||
}
|
||||
diags = diags.Append(conformDiags)
|
||||
if conformDiags.HasErrors() {
|
||||
// Bail early in this particular case, because an object that doesn't
|
||||
// conform to the schema can't be saved in the state anyway -- the
|
||||
// serializer will reject it.
|
||||
return diags
|
||||
}
|
||||
|
||||
// After this point we have a type-conforming result object and so we
|
||||
// must always run to completion to ensure it can be saved. If n.Error
|
||||
// is set then we must not return a non-nil error, in order to allow
|
||||
// evaluation to continue to a later point where our state object will
|
||||
// be saved.
|
||||
|
||||
// By this point there must not be any unknown values remaining in our
|
||||
// object, because we've applied the change and we can't save unknowns
|
||||
// in our persistent state. If any are present then we will indicate an
|
||||
// error (which is always a bug in the provider) but we will also replace
|
||||
// them with nulls so that we can successfully save the portions of the
|
||||
// returned value that are known.
|
||||
if !newVal.IsWhollyKnown() {
|
||||
// To generate better error messages, we'll go for a walk through the
|
||||
// value and make a separate diagnostic for each unknown value we
|
||||
// find.
|
||||
cty.Walk(newVal, func(path cty.Path, val cty.Value) (bool, error) {
|
||||
if !val.IsKnown() {
|
||||
pathStr := tfdiags.FormatCtyPath(path)
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider returned invalid result object after apply",
|
||||
fmt.Sprintf(
|
||||
"After the apply operation, the provider still indicated an unknown value for %s%s. All values must be known after apply, so this is always a bug in the provider and should be reported in the provider's own repository. Terraform will still save the other known object values in the state.",
|
||||
n.Addr.Absolute(ctx.Path()), pathStr,
|
||||
),
|
||||
))
|
||||
}
|
||||
return true, nil
|
||||
})
|
||||
|
||||
// NOTE: This operation can potentially be lossy if there are multiple
|
||||
// elements in a set that differ only by unknown values: after
|
||||
// replacing with null these will be merged together into a single set
|
||||
// element. Since we can only get here in the presence of a provider
|
||||
// bug, we accept this because storing a result here is always a
|
||||
// best-effort sort of thing.
|
||||
newVal = cty.UnknownAsNull(newVal)
|
||||
}
|
||||
|
||||
if change.Action != plans.Delete && !diags.HasErrors() {
|
||||
// Only values that were marked as unknown in the planned value are allowed
|
||||
// to change during the apply operation. (We do this after the unknown-ness
|
||||
// check above so that we also catch anything that became unknown after
|
||||
// being known during plan.)
|
||||
//
|
||||
// If we are returning other errors anyway then we'll give this
|
||||
// a pass since the other errors are usually the explanation for
|
||||
// this one and so it's more helpful to let the user focus on the
|
||||
// root cause rather than distract with this extra problem.
|
||||
if errs := objchange.AssertObjectCompatible(schema, change.After, newVal); len(errs) > 0 {
|
||||
if resp.LegacyTypeSystem {
|
||||
// The shimming of the old type system in the legacy SDK is not precise
|
||||
// enough to pass this consistency check, so we'll give it a pass here,
|
||||
// but we will generate a warning about it so that we are more likely
|
||||
// to notice in the logs if an inconsistency beyond the type system
|
||||
// leads to a downstream provider failure.
|
||||
var buf strings.Builder
|
||||
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:", n.ProviderAddr.Provider.String(), absAddr)
|
||||
for _, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
|
||||
}
|
||||
log.Print(buf.String())
|
||||
|
||||
// The sort of inconsistency we won't catch here is if a known value
|
||||
// in the plan is changed during apply. That can cause downstream
|
||||
// problems because a dependent resource would make its own plan based
|
||||
// on the planned value, and thus get a different result during the
|
||||
// apply phase. This will usually lead to a "Provider produced invalid plan"
|
||||
// error that incorrectly blames the downstream resource for the change.
|
||||
|
||||
} else {
|
||||
for _, err := range errs {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced inconsistent result after apply",
|
||||
fmt.Sprintf(
|
||||
"When applying changes to %s, provider %q produced an unexpected new value: %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
absAddr, n.ProviderAddr.Provider.String(), tfdiags.FormatError(err),
|
||||
),
|
||||
))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If a provider returns a null or non-null object at the wrong time then
|
||||
// we still want to save that but it often causes some confusing behaviors
|
||||
// where it seems like Terraform is failing to take any action at all,
|
||||
// so we'll generate some errors to draw attention to it.
|
||||
if !diags.HasErrors() {
|
||||
if change.Action == plans.Delete && !newVal.IsNull() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider returned invalid result object after apply",
|
||||
fmt.Sprintf(
|
||||
"After applying a %s plan, the provider returned a non-null object for %s. Destroying should always produce a null value, so this is always a bug in the provider and should be reported in the provider's own repository. Terraform will still save this errant object in the state for debugging and recovery.",
|
||||
change.Action, n.Addr.Absolute(ctx.Path()),
|
||||
),
|
||||
))
|
||||
}
|
||||
if change.Action != plans.Delete && newVal.IsNull() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider returned invalid result object after apply",
|
||||
fmt.Sprintf(
|
||||
"After applying a %s plan, the provider returned a null object for %s. Only destroying should always produce a null value, so this is always a bug in the provider and should be reported in the provider's own repository.",
|
||||
change.Action, n.Addr.Absolute(ctx.Path()),
|
||||
),
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
newStatus := states.ObjectReady
|
||||
|
||||
// Sometimes providers return a null value when an operation fails for some
|
||||
// reason, but we'd rather keep the prior state so that the error can be
|
||||
// corrected on a subsequent run. We must only do this for a null new value
|
||||
// though, or else we may discard partial updates the provider was able to
|
||||
// complete.
|
||||
if diags.HasErrors() && newVal.IsNull() {
|
||||
// Otherwise, we'll continue but using the prior state as the new value,
|
||||
// making this effectively a no-op. If the item really _has_ been
|
||||
// deleted then our next refresh will detect that and fix it up.
|
||||
// If change.Action is Create then change.Before will also be null,
|
||||
// which is fine.
|
||||
newVal = change.Before
|
||||
|
||||
// If we're recovering the previous state, we also want to restore the
|
||||
// tainted status of the object.
|
||||
if state.Status == states.ObjectTainted {
|
||||
newStatus = states.ObjectTainted
|
||||
}
|
||||
}
|
||||
|
||||
var newState *states.ResourceInstanceObject
|
||||
if !newVal.IsNull() { // null value indicates that the object is deleted, so we won't set a new state in that case
|
||||
newState = &states.ResourceInstanceObject{
|
||||
Status: newStatus,
|
||||
Value: newVal,
|
||||
Private: resp.Private,
|
||||
CreateBeforeDestroy: n.CreateBeforeDestroy,
|
||||
}
|
||||
}
|
||||
|
||||
// Write the final state
|
||||
if n.Output != nil {
|
||||
*n.Output = newState
|
||||
}
|
||||
|
||||
if diags.HasErrors() {
|
||||
// If the caller provided an error pointer then they are expected to
|
||||
// handle the error some other way and we treat our own result as
|
||||
// success.
|
||||
if n.Error != nil {
|
||||
err := diags.Err()
|
||||
*n.Error = err
|
||||
log.Printf("[DEBUG] %s: apply errored, but we're indicating that via the Error pointer rather than returning it: %s", n.Addr.Absolute(ctx.Path()), err)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalMaybeTainted is an EvalNode that takes the planned change, new value,
|
||||
// and possible error from an apply operation and produces a new instance
|
||||
// object marked as tainted if it appears that a create operation has failed.
|
||||
//
|
||||
// This EvalNode never returns an error, to ensure that a subsequent EvalNode
|
||||
// can still record the possibly-tainted object in the state.
|
||||
type EvalMaybeTainted struct {
|
||||
Addr addrs.ResourceInstance
|
||||
Gen states.Generation
|
||||
Change **plans.ResourceInstanceChange
|
||||
State **states.ResourceInstanceObject
|
||||
Error *error
|
||||
}
|
||||
|
||||
func (n *EvalMaybeTainted) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
if n.State == nil || n.Change == nil || n.Error == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
state := *n.State
|
||||
change := *n.Change
|
||||
err := *n.Error
|
||||
|
||||
// nothing to do if everything went as planned
|
||||
if err == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if state != nil && state.Status == states.ObjectTainted {
|
||||
log.Printf("[TRACE] EvalMaybeTainted: %s was already tainted, so nothing to do", n.Addr.Absolute(ctx.Path()))
|
||||
return nil
|
||||
}
|
||||
|
||||
if change.Action == plans.Create {
|
||||
// If there are errors during a _create_ then the object is
|
||||
// in an undefined state, and so we'll mark it as tainted so
|
||||
// we can try again on the next run.
|
||||
//
|
||||
// We don't do this for other change actions because errors
|
||||
// during updates will often not change the remote object at all.
|
||||
// If there _were_ changes prior to the error, it's the provider's
|
||||
// responsibility to record the effect of those changes in the
|
||||
// object value it returned.
|
||||
log.Printf("[TRACE] EvalMaybeTainted: %s encountered an error during creation, so it is now marked as tainted", n.Addr.Absolute(ctx.Path()))
|
||||
*n.State = state.AsTainted()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// resourceHasUserVisibleApply returns true if the given resource is one where
|
||||
// apply actions should be exposed to the user.
|
||||
//
|
||||
// Certain resources do apply actions only as an implementation detail, so
|
||||
// these should not be advertised to code outside of this package.
|
||||
func resourceHasUserVisibleApply(addr addrs.ResourceInstance) bool {
|
||||
// Only managed resources have user-visible apply actions.
|
||||
// In particular, this excludes data resources since we "apply" these
|
||||
// only as an implementation detail of removing them from state when
|
||||
// they are destroyed. (When reading, they don't get here at all because
|
||||
// we present them as "Refresh" actions.)
|
||||
return addr.ContainingResource().Mode == addrs.ManagedResourceMode
|
||||
}
|
||||
|
||||
// EvalApplyProvisioners is an EvalNode implementation that executes
|
||||
// the provisioners for a resource.
|
||||
//
|
||||
// TODO(mitchellh): This should probably be split up into a more fine-grained
|
||||
// ApplyProvisioner (single) that is looped over.
|
||||
type EvalApplyProvisioners struct {
|
||||
Addr addrs.ResourceInstance
|
||||
State **states.ResourceInstanceObject
|
||||
ResourceConfig *configs.Resource
|
||||
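// CreateNew, when set, records whether this apply created a new object;
// creation-time provisioners are skipped when it is set and false.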
CreateNew *bool
|
||||
Error *error
|
||||
|
||||
// When is the type of provisioner to run at this point
|
||||
When configs.ProvisionerWhen
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalApplyProvisioners) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
state := *n.State
|
||||
if state == nil {
|
||||
log.Printf("[TRACE] EvalApplyProvisioners: %s has no state, so skipping provisioners", n.Addr)
|
||||
return nil
|
||||
}
|
||||
if n.When == configs.ProvisionerWhenCreate && n.CreateNew != nil && !*n.CreateNew {
|
||||
// If we're not creating a new resource, then don't run provisioners
|
||||
log.Printf("[TRACE] EvalApplyProvisioners: %s is not freshly-created, so no provisioning is required", n.Addr)
|
||||
return nil
|
||||
}
|
||||
if state.Status == states.ObjectTainted {
|
||||
// No point in provisioning an object that is already tainted, since
|
||||
// it's going to get recreated on the next apply anyway.
|
||||
log.Printf("[TRACE] EvalApplyProvisioners: %s is tainted, so skipping provisioning", n.Addr)
|
||||
return nil
|
||||
}
|
||||
|
||||
provs := n.filterProvisioners()
|
||||
if len(provs) == 0 {
|
||||
// We have no provisioners, so don't do anything
|
||||
return nil
|
||||
}
|
||||
|
||||
if n.Error != nil && *n.Error != nil {
|
||||
// An error has already been recorded, so don't run any more provisioners
|
||||
return nil
|
||||
}
|
||||
|
||||
// Call pre hook
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreProvisionInstance(absAddr, state.Value)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Run the provisioners. If any of them fail, we record the error in our
|
||||
// output error pointer and continue, so the caller can decide what to do.
|
||||
err := n.apply(ctx, provs)
|
||||
if err != nil {
|
||||
*n.Error = multierror.Append(*n.Error, err)
|
||||
log.Printf("[TRACE] EvalApplyProvisioners: %s provisioning failed, but we will continue anyway at the caller's request", absAddr)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Call post hook
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostProvisionInstance(absAddr, state.Value)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
// filterProvisioners filters the provisioners on the resource to only
|
||||
// the provisioners specified by the "when" option.
|
||||
func (n *EvalApplyProvisioners) filterProvisioners() []*configs.Provisioner {
|
||||
// Fast path the zero case
|
||||
if n.ResourceConfig == nil || n.ResourceConfig.Managed == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if len(n.ResourceConfig.Managed.Provisioners) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
result := make([]*configs.Provisioner, 0, len(n.ResourceConfig.Managed.Provisioners))
|
||||
for _, p := range n.ResourceConfig.Managed.Provisioners {
|
||||
if p.When == n.When {
|
||||
result = append(result, p)
|
||||
}
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
func (n *EvalApplyProvisioners) apply(ctx EvalContext, provs []*configs.Provisioner) error {
|
||||
var diags tfdiags.Diagnostics
|
||||
instanceAddr := n.Addr
|
||||
absAddr := instanceAddr.Absolute(ctx.Path())
|
||||
|
||||
// this self is only used for destroy provisioner evaluation, and must
|
||||
// refer to the last known value of the resource.
|
||||
self := (*n.State).Value
|
||||
|
||||
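// Destroy-time provisioners may only reference "self" (see
// evalDestroyProvisionerConfig below), so they use a restricted
// evaluation scope; all other provisioners get the full scope.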
var evalScope func(EvalContext, hcl.Body, cty.Value, *configschema.Block) (cty.Value, tfdiags.Diagnostics)
|
||||
switch n.When {
|
||||
case configs.ProvisionerWhenDestroy:
|
||||
evalScope = n.evalDestroyProvisionerConfig
|
||||
default:
|
||||
evalScope = n.evalProvisionerConfig
|
||||
}
|
||||
|
||||
// If there's a connection block defined directly inside the resource block
|
||||
// then it'll serve as a base connection configuration for all of the
|
||||
// provisioners.
|
||||
var baseConn hcl.Body
|
||||
if n.ResourceConfig.Managed != nil && n.ResourceConfig.Managed.Connection != nil {
|
||||
baseConn = n.ResourceConfig.Managed.Connection.Config
|
||||
}
|
||||
|
||||
for _, prov := range provs {
|
||||
log.Printf("[TRACE] EvalApplyProvisioners: provisioning %s with %q", absAddr, prov.Type)
|
||||
|
||||
// Get the provisioner
|
||||
provisioner := ctx.Provisioner(prov.Type)
|
||||
schema := ctx.ProvisionerSchema(prov.Type)
|
||||
|
||||
config, configDiags := evalScope(ctx, prov.Config, self, schema)
|
||||
diags = diags.Append(configDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags.Err()
|
||||
}
|
||||
|
||||
// If the provisioner block contains a connection block of its own then
|
||||
// it can override the base connection configuration, if any.
|
||||
var localConn hcl.Body
|
||||
if prov.Connection != nil {
|
||||
localConn = prov.Connection.Config
|
||||
}
|
||||
|
||||
var connBody hcl.Body
|
||||
switch {
|
||||
case baseConn != nil && localConn != nil:
|
||||
// Our standard merging logic applies here, similar to what we do
|
||||
// with _override.tf configuration files: arguments from the
|
||||
// base connection block will be masked by any arguments of the
|
||||
// same name in the local connection block.
|
||||
connBody = configs.MergeBodies(baseConn, localConn)
|
||||
case baseConn != nil:
|
||||
connBody = baseConn
|
||||
case localConn != nil:
|
||||
connBody = localConn
|
||||
}
|
||||
|
||||
// start with an empty connInfo
|
||||
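// (null, but typed with the superset connection schema, so provisioners
// with no connection block still receive a value of the expected type)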
connInfo := cty.NullVal(connectionBlockSupersetSchema.ImpliedType())
|
||||
|
||||
if connBody != nil {
|
||||
var connInfoDiags tfdiags.Diagnostics
|
||||
connInfo, connInfoDiags = evalScope(ctx, connBody, self, connectionBlockSupersetSchema)
|
||||
diags = diags.Append(connInfoDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags.Err()
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
// Call pre hook
|
||||
err := ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreProvisionInstanceStep(absAddr, prov.Type)
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// The output function
|
||||
outputFn := func(msg string) {
|
||||
ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
h.ProvisionOutput(absAddr, prov.Type, msg)
|
||||
return HookActionContinue, nil
|
||||
})
|
||||
}
|
||||
|
||||
// If our config or connection info contains any marked values, ensure
|
||||
// those are stripped out before sending to the provisioner. Unlike
|
||||
// resources, we have no need to capture the marked paths and reapply
|
||||
// later.
|
||||
unmarkedConfig, configMarks := config.UnmarkDeep()
|
||||
unmarkedConnInfo, _ := connInfo.UnmarkDeep()
|
||||
|
||||
// Marks on the config might result in leaking sensitive values through
|
||||
// provisioner logging, so we conservatively suppress all output in
|
||||
// this case. This should not apply to connection info values, which
|
||||
// provisioners ought not to be logging anyway.
|
||||
if len(configMarks) > 0 {
|
||||
outputFn = func(msg string) {
|
||||
ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
h.ProvisionOutput(absAddr, prov.Type, "(output suppressed due to sensitive value in config)")
|
||||
return HookActionContinue, nil
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
output := CallbackUIOutput{OutputFn: outputFn}
|
||||
resp := provisioner.ProvisionResource(provisioners.ProvisionResourceRequest{
|
||||
Config: unmarkedConfig,
|
||||
Connection: unmarkedConnInfo,
|
||||
UIOutput: &output,
|
||||
})
|
||||
applyDiags := resp.Diagnostics.InConfigBody(prov.Config)
|
||||
|
||||
// Call post hook
|
||||
hookErr := ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostProvisionInstanceStep(absAddr, prov.Type, applyDiags.Err())
|
||||
})
|
||||
|
||||
switch prov.OnFailure {
|
||||
case configs.ProvisionerOnFailureContinue:
|
||||
if applyDiags.HasErrors() {
|
||||
log.Printf("[WARN] Errors while provisioning %s with %q, but continuing as requested in configuration", n.Addr, prov.Type)
|
||||
} else {
|
||||
// Maybe there are warnings that we still want to see
|
||||
diags = diags.Append(applyDiags)
|
||||
}
|
||||
default:
|
||||
diags = diags.Append(applyDiags)
|
||||
if applyDiags.HasErrors() {
|
||||
log.Printf("[WARN] Errors while provisioning %s with %q, so aborting", n.Addr, prov.Type)
|
||||
return diags.Err()
|
||||
}
|
||||
}
|
||||
|
||||
// Deal with the hook
|
||||
if hookErr != nil {
|
||||
return hookErr
|
||||
}
|
||||
}
|
||||
|
||||
// we have to drop warning-only diagnostics for now
|
||||
if diags.HasErrors() {
|
||||
return diags.ErrWithWarnings()
|
||||
}
|
||||
|
||||
// log any warnings since we can't return them
|
||||
if e := diags.ErrWithWarnings(); e != nil {
|
||||
log.Printf("[WARN] EvalApplyProvisioners %s: %v", n.Addr, e)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (n *EvalApplyProvisioners) evalProvisionerConfig(ctx EvalContext, body hcl.Body, self cty.Value, schema *configschema.Block) (cty.Value, tfdiags.Diagnostics) {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
forEach, forEachDiags := evaluateForEachExpression(n.ResourceConfig.ForEach, ctx)
|
||||
diags = diags.Append(forEachDiags)
|
||||
|
||||
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
|
||||
|
||||
config, _, configDiags := ctx.EvaluateBlock(body, schema, n.Addr, keyData)
|
||||
diags = diags.Append(configDiags)
|
||||
|
||||
return config, diags
|
||||
}
|
||||
|
||||
// during destroy a provisioner can only evaluate within the scope of the parent resource
|
||||
func (n *EvalApplyProvisioners) evalDestroyProvisionerConfig(ctx EvalContext, body hcl.Body, self cty.Value, schema *configschema.Block) (cty.Value, tfdiags.Diagnostics) {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
// For a destroy-time provisioner forEach is intentionally nil here,
|
||||
// which EvalDataForInstanceKey responds to by not populating EachValue
|
||||
// in its result. That's okay because each.value is prohibited for
|
||||
// destroy-time provisioners.
|
||||
keyData := EvalDataForInstanceKey(n.Addr.Key, nil)
|
||||
|
||||
evalScope := ctx.EvaluationScope(n.Addr, keyData)
|
||||
config, evalDiags := evalScope.EvalSelfBlock(body, self, schema, keyData)
|
||||
diags = diags.Append(evalDiags)
|
||||
|
||||
return config, diags
|
||||
}
|
|
@ -55,6 +55,9 @@ type MockEvalContext struct {
|
|||
SetProviderInputAddr addrs.AbsProviderConfig
|
||||
SetProviderInputValues map[string]cty.Value
|
||||
|
||||
ConfigureProviderFn func(
|
||||
addr addrs.AbsProviderConfig,
|
||||
cfg cty.Value) tfdiags.Diagnostics // overrides the other values below, if set
|
||||
ConfigureProviderCalled bool
|
||||
ConfigureProviderAddr addrs.AbsProviderConfig
|
||||
ConfigureProviderConfig cty.Value
|
||||
|
@ -183,9 +186,13 @@ func (c *MockEvalContext) CloseProvider(addr addrs.AbsProviderConfig) error {
|
|||
}
|
||||
|
||||
func (c *MockEvalContext) ConfigureProvider(addr addrs.AbsProviderConfig, cfg cty.Value) tfdiags.Diagnostics {
|
||||
|
||||
c.ConfigureProviderCalled = true
|
||||
c.ConfigureProviderAddr = addr
|
||||
c.ConfigureProviderConfig = cfg
|
||||
if c.ConfigureProviderFn != nil {
|
||||
return c.ConfigureProviderFn(addr, cfg)
|
||||
}
|
||||
return c.ConfigureProviderDiags
|
||||
}
|
||||
|
@ -60,6 +60,10 @@ func evaluateCountExpressionValue(expr hcl.Expression, ctx EvalContext) (cty.Val
|
|||
return nullCount, diags
|
||||
}
|
||||
|
||||
// Unmark the count value; sensitive values are allowed in count (unlike for_each)
|
||||
// because using the value here will not disclose the sensitive value.
|
||||
countVal, _ = countVal.Unmark()
|
||||
|
||||
switch {
|
||||
case countVal.IsNull():
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
@ -0,0 +1,45 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/hcl/v2/hcltest"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
func TestEvaluateCountExpression(t *testing.T) {
|
||||
tests := map[string]struct {
|
||||
Expr hcl.Expression
|
||||
Count int
|
||||
}{
|
||||
"zero": {
|
||||
hcltest.MockExprLiteral(cty.NumberIntVal(0)),
|
||||
0,
|
||||
},
|
||||
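// count may refer to sensitive (marked) values: the mark is stripped
// before the number is used, so this case must evaluate to 8 without
// producing any diagnostics.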
"expression with marked value": {
|
||||
hcltest.MockExprLiteral(cty.NumberIntVal(8).Mark("sensitive")),
|
||||
8,
|
||||
},
|
||||
}
|
||||
for name, test := range tests {
|
||||
t.Run(name, func(t *testing.T) {
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
countVal, diags := evaluateCountExpression(test.Expr, ctx)
|
||||
|
||||
if len(diags) != 0 {
|
||||
t.Errorf("unexpected diagnostics %s", spew.Sdump(diags))
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(countVal, test.Count) {
|
||||
t.Errorf(
|
||||
"wrong map value\ngot: %swant: %s",
|
||||
spew.Sdump(countVal), spew.Sdump(test.Count),
|
||||
)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
|
@ -1,940 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
"github.com/hashicorp/terraform/plans/objchange"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// EvalCheckPlannedChange is an EvalNode implementation that produces errors
|
||||
// if the _actual_ expected value is not compatible with what was recorded
|
||||
// in the plan.
|
||||
//
|
||||
// Errors here are most often indicative of a bug in the provider, so our
|
||||
// error messages will report with that in mind. It's also possible that
|
||||
// there's a bug in Terraform Core's own "proposed new value" code in
|
||||
// EvalDiff.
|
||||
type EvalCheckPlannedChange struct {
|
||||
Addr addrs.ResourceInstance
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
ProviderSchema **ProviderSchema
|
||||
|
||||
// We take ResourceInstanceChange objects here just because that's what's
|
||||
// convenient to pass in from the evaltree implementation, but we really
|
||||
// only look at the "After" value of each change.
|
||||
Planned, Actual **plans.ResourceInstanceChange
|
||||
}
|
||||
|
||||
func (n *EvalCheckPlannedChange) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
providerSchema := *n.ProviderSchema
|
||||
plannedChange := *n.Planned
|
||||
actualChange := *n.Actual
|
||||
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider does not support %q", n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
|
||||
log.Printf("[TRACE] EvalCheckPlannedChange: Verifying that actual change (action %s) matches planned change (action %s)", actualChange.Action, plannedChange.Action)
|
||||
|
||||
if plannedChange.Action != actualChange.Action {
|
||||
switch {
|
||||
case plannedChange.Action == plans.Update && actualChange.Action == plans.NoOp:
|
||||
// It's okay for an update to become a NoOp once we've filled in
|
||||
// all of the unknown values, since the final values might actually
|
||||
// match what was there before after all.
|
||||
log.Printf("[DEBUG] After incorporating new values learned so far during apply, %s change has become NoOp", absAddr)
|
||||
|
||||
case (plannedChange.Action == plans.CreateThenDelete && actualChange.Action == plans.DeleteThenCreate) ||
|
||||
(plannedChange.Action == plans.DeleteThenCreate && actualChange.Action == plans.CreateThenDelete):
|
||||
// If the order of replacement changed, then that is a bug in terraform
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Terraform produced inconsistent final plan",
|
||||
fmt.Sprintf(
|
||||
"When expanding the plan for %s to include new values learned so far during apply, the planned action changed from %s to %s.\n\nThis is a bug in Terraform and should be reported.",
|
||||
absAddr, plannedChange.Action, actualChange.Action,
|
||||
),
|
||||
))
|
||||
default:
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced inconsistent final plan",
|
||||
fmt.Sprintf(
|
||||
"When expanding the plan for %s to include new values learned so far during apply, provider %q changed the planned action from %s to %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
absAddr, n.ProviderAddr.Provider.String(),
|
||||
plannedChange.Action, actualChange.Action,
|
||||
),
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
errs := objchange.AssertObjectCompatible(schema, plannedChange.After, actualChange.After)
|
||||
for _, err := range errs {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced inconsistent final plan",
|
||||
fmt.Sprintf(
|
||||
"When expanding the plan for %s to include new values learned so far during apply, provider %q produced an invalid new value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
absAddr, n.ProviderAddr.Provider.String(), tfdiags.FormatError(err),
|
||||
),
|
||||
))
|
||||
}
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalDiff is an EvalNode implementation that detects changes for a given
|
||||
// resource instance.
|
||||
type EvalDiff struct {
|
||||
Addr addrs.ResourceInstance
|
||||
Config *configs.Resource
|
||||
Provider *providers.Interface
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
|
||||
ProviderSchema **ProviderSchema
|
||||
State **states.ResourceInstanceObject
|
||||
PreviousDiff **plans.ResourceInstanceChange
|
||||
|
||||
// CreateBeforeDestroy is set if either the resource's own config sets
|
||||
// create_before_destroy explicitly or if dependencies have forced the
|
||||
// resource to be handled as create_before_destroy in order to avoid
|
||||
// a dependency cycle.
|
||||
CreateBeforeDestroy bool
|
||||
|
||||
OutputChange **plans.ResourceInstanceChange
|
||||
OutputState **states.ResourceInstanceObject
|
||||
|
||||
Stub bool
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalDiff) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
state := *n.State
|
||||
config := *n.Config
|
||||
provider := *n.Provider
|
||||
providerSchema := *n.ProviderSchema
|
||||
|
||||
createBeforeDestroy := n.CreateBeforeDestroy
|
||||
if n.PreviousDiff != nil {
|
||||
// If we already planned the action, we stick to that plan
|
||||
createBeforeDestroy = (*n.PreviousDiff).Action == plans.CreateThenDelete
|
||||
}
|
||||
|
||||
if providerSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("provider schema is unavailable for %s", n.Addr))
|
||||
return diags
|
||||
}
|
||||
if n.ProviderAddr.Provider.Type == "" {
|
||||
panic(fmt.Sprintf("EvalDiff for %s does not have ProviderAddr set", n.Addr.Absolute(ctx.Path())))
|
||||
}
|
||||
|
||||
// Evaluate the configuration
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
forEach, _ := evaluateForEachExpression(n.Config.ForEach, ctx)
|
||||
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
|
||||
origConfigVal, _, configDiags := ctx.EvaluateBlock(config.Config, schema, nil, keyData)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
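// provider_meta is optional; default to a null value of dynamic type
// when the module does not configure it for this provider.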
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
|
||||
if n.ProviderMetas != nil {
|
||||
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
|
||||
// if the provider doesn't support this feature, throw an error
|
||||
if (*n.ProviderSchema).ProviderMeta == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
|
||||
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
|
||||
Subject: &m.ProviderRange,
|
||||
})
|
||||
} else {
|
||||
var configDiags tfdiags.Diagnostics
|
||||
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
var priorVal cty.Value
|
||||
var priorValTainted cty.Value
|
||||
var priorPrivate []byte
|
||||
if state != nil {
|
||||
if state.Status != states.ObjectTainted {
|
||||
priorVal = state.Value
|
||||
priorPrivate = state.Private
|
||||
} else {
|
||||
// If the prior state is tainted then we'll proceed below like
|
||||
// we're creating an entirely new object, but then turn it into
|
||||
// a synthetic "Replace" change at the end, creating the same
|
||||
// result as if the provider had marked at least one argument
|
||||
// change as "requires replacement".
|
||||
priorValTainted = state.Value
|
||||
priorVal = cty.NullVal(schema.ImpliedType())
|
||||
}
|
||||
} else {
|
||||
priorVal = cty.NullVal(schema.ImpliedType())
|
||||
}
|
||||
|
||||
// Create an unmarked version of our config val and our prior val.
|
||||
// Store the paths for the config val to re-mark after
|
||||
// we've sent things over the wire.
|
||||
unmarkedConfigVal, unmarkedPaths := origConfigVal.UnmarkDeepWithPaths()
|
||||
unmarkedPriorVal, priorPaths := priorVal.UnmarkDeepWithPaths()
|
||||
|
||||
log.Printf("[TRACE] Re-validating config for %q", n.Addr.Absolute(ctx.Path()))
|
||||
// Allow the provider to validate the final set of values.
|
||||
// The config was statically validated early on, but there may have been
|
||||
// unknown values which the provider could not validate at the time.
|
||||
// TODO: It would be more correct to validate the config after
|
||||
// ignore_changes has been applied, but the current implementation cannot
|
||||
// exclude computed-only attributes when given the `all` option.
|
||||
validateResp := provider.ValidateResourceTypeConfig(
|
||||
providers.ValidateResourceTypeConfigRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
Config: unmarkedConfigVal,
|
||||
},
|
||||
)
|
||||
if validateResp.Diagnostics.HasErrors() {
|
||||
diags = diags.Append(validateResp.Diagnostics.InConfigBody(config.Config))
|
||||
return diags
|
||||
}
|
||||
|
||||
// ignore_changes is meant to only apply to the configuration, so it must
|
||||
// be applied before we generate a plan. This ensures the config used for
|
||||
// the proposed value, the proposed value itself, and the config presented
|
||||
// to the provider in the PlanResourceChange request all agree on the
|
||||
// starting values.
|
||||
configValIgnored, ignoreChangeDiags := n.processIgnoreChanges(unmarkedPriorVal, unmarkedConfigVal)
|
||||
diags = diags.Append(ignoreChangeDiags)
|
||||
if ignoreChangeDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
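// The proposed new value merges the prior state with the
// (ignore_changes-filtered) configuration, and becomes the
// ProposedNewState in the PlanResourceChange request below.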
proposedNewVal := objchange.ProposedNewObject(schema, unmarkedPriorVal, configValIgnored)
|
||||
|
||||
// Call pre-diff hook
|
||||
if !n.Stub {
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreDiff(absAddr, states.CurrentGen, priorVal, proposedNewVal)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
resp := provider.PlanResourceChange(providers.PlanResourceChangeRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
Config: configValIgnored,
|
||||
PriorState: unmarkedPriorVal,
|
||||
ProposedNewState: proposedNewVal,
|
||||
PriorPrivate: priorPrivate,
|
||||
ProviderMeta: metaConfigVal,
|
||||
})
|
||||
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
plannedNewVal := resp.PlannedState
|
||||
plannedPrivate := resp.PlannedPrivate
|
||||
|
||||
if plannedNewVal == cty.NilVal {
|
||||
// Should never happen. Since real-world providers return via RPC, a nil
|
||||
// is always a bug in the client-side stub. This is more likely caused
|
||||
// by an incompletely-configured mock provider in tests, though.
|
||||
panic(fmt.Sprintf("PlanResourceChange of %s produced nil value", absAddr.String()))
|
||||
}
|
||||
|
||||
// We allow the planned new value to disagree with configuration _values_
|
||||
// here, since that allows the provider to do special logic like a
|
||||
// DiffSuppressFunc, but we still require that the provider produces
|
||||
// a value whose type conforms to the schema.
|
||||
for _, err := range plannedNewVal.Type().TestConformance(schema.ImpliedType()) {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid plan",
|
||||
fmt.Sprintf(
|
||||
"Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
|
||||
),
|
||||
))
|
||||
}
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if errs := objchange.AssertPlanValid(schema, unmarkedPriorVal, configValIgnored, plannedNewVal); len(errs) > 0 {
|
||||
if resp.LegacyTypeSystem {
|
||||
// The shimming of the old type system in the legacy SDK is not precise
|
||||
// enough to pass this consistency check, so we'll give it a pass here,
|
||||
// but we will generate a warning about it so that we are more likely
|
||||
// to notice in the logs if an inconsistency beyond the type system
|
||||
// leads to a downstream provider failure.
|
||||
var buf strings.Builder
|
||||
fmt.Fprintf(&buf,
|
||||
"[WARN] Provider %q produced an invalid plan for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:",
|
||||
n.ProviderAddr.Provider.String(), absAddr,
|
||||
)
|
||||
for _, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
|
||||
}
|
||||
log.Print(buf.String())
|
||||
} else {
|
||||
for _, err := range errs {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid plan",
|
||||
fmt.Sprintf(
|
||||
"Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
|
||||
),
|
||||
))
|
||||
}
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
if resp.LegacyTypeSystem {
|
||||
// Because we allow legacy providers to depart from the contract and
|
||||
// return changes to non-computed values, the plan response may have
|
||||
// altered values that were already suppressed with ignore_changes.
|
||||
// A prime example of this is where providers attempt to obfuscate
|
||||
// config data by turning the config value into a hash and storing the
|
||||
// hash value in the state. There are enough cases of this in existing
|
||||
// providers that we must accommodate the behavior for now, so for
|
||||
// ignore_changes to work at all on these values, we will revert the
|
||||
// ignored values once more.
|
||||
plannedNewVal, ignoreChangeDiags = n.processIgnoreChanges(unmarkedPriorVal, plannedNewVal)
|
||||
diags = diags.Append(ignoreChangeDiags)
|
||||
if ignoreChangeDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
// Add the marks back to the planned new value -- this must happen after ignore changes
|
||||
// have been processed
|
||||
unmarkedPlannedNewVal := plannedNewVal
|
||||
if len(unmarkedPaths) > 0 {
|
||||
plannedNewVal = plannedNewVal.MarkWithPaths(unmarkedPaths)
|
||||
}
|
||||
|
||||
// The provider produces a list of paths to attributes whose changes mean
|
||||
// that we must replace rather than update an existing remote object.
|
||||
// However, we only need to do that if the identified attributes _have_
|
||||
// actually changed -- particularly after we may have undone some of the
|
||||
// changes in processIgnoreChanges -- so now we'll filter that list to
|
||||
// include only where changes are detected.
|
||||
reqRep := cty.NewPathSet()
|
||||
if len(resp.RequiresReplace) > 0 {
|
||||
for _, path := range resp.RequiresReplace {
|
||||
if priorVal.IsNull() {
|
||||
// If prior is null then we don't expect any RequiresReplace at all,
|
||||
// because this is a Create action.
|
||||
continue
|
||||
}
|
||||
|
||||
priorChangedVal, priorPathDiags := hcl.ApplyPath(unmarkedPriorVal, path, nil)
|
||||
plannedChangedVal, plannedPathDiags := hcl.ApplyPath(plannedNewVal, path, nil)
|
||||
if plannedPathDiags.HasErrors() && priorPathDiags.HasErrors() {
|
||||
// This means the path was invalid in both the prior and new
|
||||
// values, which is an error with the provider itself.
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid plan",
|
||||
fmt.Sprintf(
|
||||
"Provider %q has indicated \"requires replacement\" on %s for a non-existent attribute path %#v.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), absAddr, path,
|
||||
),
|
||||
))
|
||||
continue
|
||||
}
|
||||
|
||||
// Make sure we have valid Values for both values.
|
||||
// Note: if the opposing value was of the type
|
||||
// cty.DynamicPseudoType, the type assigned here may not exactly
|
||||
// match the schema. This is fine here, since we're only going to
|
||||
// check for equality, but if the NullVal is to be used, we need to
|
||||
// check the schema for the true type.
|
||||
switch {
|
||||
case priorChangedVal == cty.NilVal && plannedChangedVal == cty.NilVal:
|
||||
// this should never happen without ApplyPath errors above
|
||||
panic("requires replace path returned 2 nil values")
|
||||
case priorChangedVal == cty.NilVal:
|
||||
priorChangedVal = cty.NullVal(plannedChangedVal.Type())
|
||||
case plannedChangedVal == cty.NilVal:
|
||||
plannedChangedVal = cty.NullVal(priorChangedVal.Type())
|
||||
}
|
||||
|
||||
// Unmark for this value for the equality test. If only sensitivity has changed,
|
||||
// this does not require an Update or Replace
|
||||
unmarkedPlannedChangedVal, _ := plannedChangedVal.UnmarkDeep()
|
||||
eqV := unmarkedPlannedChangedVal.Equals(priorChangedVal)
|
||||
if !eqV.IsKnown() || eqV.False() {
|
||||
reqRep.Add(path)
|
||||
}
|
||||
}
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
// Unmark for this test for value equality.
|
||||
eqV := unmarkedPlannedNewVal.Equals(unmarkedPriorVal)
|
||||
eq := eqV.IsKnown() && eqV.True()
|
||||
|
||||
var action plans.Action
|
||||
switch {
|
||||
case priorVal.IsNull():
|
||||
action = plans.Create
|
||||
case eq:
|
||||
action = plans.NoOp
|
||||
case !reqRep.Empty():
|
||||
// If there are any "requires replace" paths left _after our filtering
|
||||
// above_ then this is a replace action.
|
||||
if createBeforeDestroy {
|
||||
action = plans.CreateThenDelete
|
||||
} else {
|
||||
action = plans.DeleteThenCreate
|
||||
}
|
||||
default:
|
||||
action = plans.Update
|
||||
// "Delete" is never chosen here, because deletion plans are always
|
||||
// created more directly elsewhere, such as in "orphan" handling.
|
||||
}
|
||||
|
||||
if action.IsReplace() {
|
||||
// In this strange situation we want to produce a change object that
|
||||
// shows our real prior object but has a _new_ object that is built
|
||||
// from a null prior object, since we're going to delete the one
|
||||
// that has all the computed values on it.
|
||||
//
|
||||
// Therefore we'll ask the provider to plan again here, giving it
|
||||
// a null object for the prior, and then we'll meld that with the
|
||||
// _actual_ prior state to produce a correctly-shaped replace change.
|
||||
// The resulting change should show any computed attributes changing
|
||||
// from known prior values to unknown values, unless the provider is
|
||||
// able to predict new values for any of these computed attributes.
|
||||
nullPriorVal := cty.NullVal(schema.ImpliedType())
|
||||
|
||||
// Since there is no prior state to compare after replacement, we need
|
||||
// a new unmarked config from our original with no ignored values.
|
||||
unmarkedConfigVal := origConfigVal
|
||||
if origConfigVal.ContainsMarked() {
|
||||
unmarkedConfigVal, _ = origConfigVal.UnmarkDeep()
|
||||
}
|
||||
|
||||
// create a new proposed value from the null state and the config
|
||||
proposedNewVal = objchange.ProposedNewObject(schema, nullPriorVal, unmarkedConfigVal)
|
||||
|
||||
resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
Config: unmarkedConfigVal,
|
||||
PriorState: nullPriorVal,
|
||||
ProposedNewState: proposedNewVal,
|
||||
PriorPrivate: plannedPrivate,
|
||||
ProviderMeta: metaConfigVal,
|
||||
})
|
||||
// We need to tread carefully here, since if there are any warnings
|
||||
// in here they probably also came out of our previous call to
|
||||
// PlanResourceChange above, and so we don't want to repeat them.
|
||||
// Consequently, we break from the usual pattern here and only
|
||||
// append these new diagnostics if there's at least one error inside.
|
||||
if resp.Diagnostics.HasErrors() {
|
||||
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
|
||||
return diags
|
||||
}
|
||||
plannedNewVal = resp.PlannedState
|
||||
plannedPrivate = resp.PlannedPrivate
|
||||
|
||||
if len(unmarkedPaths) > 0 {
|
||||
plannedNewVal = plannedNewVal.MarkWithPaths(unmarkedPaths)
|
||||
}
|
||||
|
||||
for _, err := range plannedNewVal.Type().TestConformance(schema.ImpliedType()) {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid plan",
|
||||
fmt.Sprintf(
|
||||
"Provider %q planned an invalid value for %s%s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), absAddr, tfdiags.FormatError(err),
|
||||
),
|
||||
))
|
||||
}
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
// If our prior value was tainted then we actually want this to appear
|
||||
// as a replace change, even though so far we've been treating it as a
|
||||
// create.
|
||||
if action == plans.Create && priorValTainted != cty.NilVal {
|
||||
if createBeforeDestroy {
|
||||
action = plans.CreateThenDelete
|
||||
} else {
|
||||
action = plans.DeleteThenCreate
|
||||
}
|
||||
priorVal = priorValTainted
|
||||
}
|
||||
|
||||
// If we plan to write or delete sensitive paths from state,
|
||||
// this is an Update action
|
||||
if action == plans.NoOp && !reflect.DeepEqual(priorPaths, unmarkedPaths) {
|
||||
action = plans.Update
|
||||
}
|
||||
|
||||
// As a special case, if we have a previous diff (presumably from the plan
|
||||
// phase, whereas we're now in the apply phase) and it was for a replace,
|
||||
// we've already deleted the original object from state by the time we
|
||||
// get here and so we would've ended up with a _create_ action this time,
|
||||
// which we now need to paper over to get a result consistent with what
|
||||
// we originally intended.
|
||||
if n.PreviousDiff != nil {
|
||||
prevChange := *n.PreviousDiff
|
||||
if prevChange.Action.IsReplace() && action == plans.Create {
|
||||
log.Printf("[TRACE] EvalDiff: %s treating Create change as %s change to match with earlier plan", absAddr, prevChange.Action)
|
||||
action = prevChange.Action
|
||||
priorVal = prevChange.Before
|
||||
}
|
||||
}
|
||||
|
||||
// Call post-refresh hook
|
||||
if !n.Stub {
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostDiff(absAddr, states.CurrentGen, action, priorVal, plannedNewVal)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
|
||||
// Update our output if we care
|
||||
if n.OutputChange != nil {
|
||||
*n.OutputChange = &plans.ResourceInstanceChange{
|
||||
Addr: absAddr,
|
||||
Private: plannedPrivate,
|
||||
ProviderAddr: n.ProviderAddr,
|
||||
Change: plans.Change{
|
||||
Action: action,
|
||||
Before: priorVal,
|
||||
// Pass the marked planned value through in our change
|
||||
// to propagate through evaluation.
|
||||
// Marks will be removed when encoding.
|
||||
After: plannedNewVal,
|
||||
},
|
||||
RequiredReplace: reqRep,
|
||||
}
|
||||
}
|
||||
|
||||
// Update the state if we care
|
||||
if n.OutputState != nil {
|
||||
*n.OutputState = &states.ResourceInstanceObject{
|
||||
// We use the special "planned" status here to note that this
|
||||
// object's value is not yet complete. Objects with this status
|
||||
// cannot be used during expression evaluation, so the caller
|
||||
// must _also_ record the returned change in the active plan,
|
||||
// which the expression evaluator will use in preference to this
|
||||
// incomplete value recorded in the state.
|
||||
Status: states.ObjectPlanned,
|
||||
Value: plannedNewVal,
|
||||
Private: plannedPrivate,
|
||||
}
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
func (n *EvalDiff) processIgnoreChanges(prior, config cty.Value) (cty.Value, tfdiags.Diagnostics) {
|
||||
// ignore_changes only applies when an object already exists, since we
|
||||
// can't ignore changes to a thing we've not created yet.
|
||||
if prior.IsNull() {
|
||||
return config, nil
|
||||
}
|
||||
|
||||
ignoreChanges := n.Config.Managed.IgnoreChanges
|
||||
ignoreAll := n.Config.Managed.IgnoreAllChanges
|
||||
|
||||
if len(ignoreChanges) == 0 && !ignoreAll {
|
||||
return config, nil
|
||||
}
|
||||
if ignoreAll {
|
||||
return prior, nil
|
||||
}
|
||||
if prior.IsNull() || config.IsNull() {
|
||||
// Ignore changes doesn't apply when we're creating for the first time.
|
||||
// Proposed should never be null here, but if it is then we'll just let it be.
|
||||
return config, nil
|
||||
}
|
||||
|
||||
return processIgnoreChangesIndividual(prior, config, ignoreChanges)
|
||||
}
|
||||
|
||||
func processIgnoreChangesIndividual(prior, config cty.Value, ignoreChanges []hcl.Traversal) (cty.Value, tfdiags.Diagnostics) {
|
||||
// When we walk below we will be using cty.Path values for comparison, so
|
||||
// we'll convert our traversals here so we can compare more easily.
|
||||
ignoreChangesPath := make([]cty.Path, len(ignoreChanges))
|
||||
for i, traversal := range ignoreChanges {
|
||||
path := make(cty.Path, len(traversal))
|
||||
for si, step := range traversal {
|
||||
switch ts := step.(type) {
|
||||
case hcl.TraverseRoot:
|
||||
path[si] = cty.GetAttrStep{
|
||||
Name: ts.Name,
|
||||
}
|
||||
case hcl.TraverseAttr:
|
||||
path[si] = cty.GetAttrStep{
|
||||
Name: ts.Name,
|
||||
}
|
||||
case hcl.TraverseIndex:
|
||||
path[si] = cty.IndexStep{
|
||||
Key: ts.Key,
|
||||
}
|
||||
default:
|
||||
panic(fmt.Sprintf("unsupported traversal step %#v", step))
|
||||
}
|
||||
}
|
||||
ignoreChangesPath[i] = path
|
||||
}
|
||||
|
||||
type ignoreChange struct {
|
||||
// Path is the full path, minus any trailing map index
|
||||
path cty.Path
|
||||
// Value is the value we are to retain at the above path. If there is a
|
||||
// key value, this must be a map and the desired value will be at the
|
||||
// key index.
|
||||
value cty.Value
|
||||
// Key is the index key if the ignored path ends in a map index.
|
||||
key cty.Value
|
||||
}
|
||||
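// ignoredValues collects, for each ignore_changes path that actually
// differs between prior and config, the prior value to restore below.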
var ignoredValues []ignoreChange
|
||||
|
||||
// Find the actual changes first and store them in the ignoreChange struct.
|
||||
// If the change was to a map value, and the key doesn't exist in the
|
||||
// config, it would never be visited in the transform walk.
|
||||
for _, icPath := range ignoreChangesPath {
|
||||
key := cty.NullVal(cty.String)
|
||||
// check for a map index, since maps are the only structure where we
|
||||
// could have invalid path steps.
|
||||
last, ok := icPath[len(icPath)-1].(cty.IndexStep)
|
||||
if ok {
|
||||
if last.Key.Type() == cty.String {
|
||||
icPath = icPath[:len(icPath)-1]
|
||||
key = last.Key
|
||||
}
|
||||
}
|
||||
|
||||
// The structure should have been validated already, and we already
|
||||
// trimmed the trailing map index. Any other intermediate index error
|
||||
// means we wouldn't be able to apply the value below, so no need to
|
||||
// record this.
|
||||
p, err := icPath.Apply(prior)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
c, err := icPath.Apply(config)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// If this is a map, it is checking the entire map value for equality
|
||||
// rather than the individual key. This means that the change is stored
|
||||
// here even if our ignored key doesn't change. That is OK since it
|
||||
// won't cause any changes in the transformation, but allows us to skip
|
||||
// breaking up the maps and checking for key existence here too.
|
||||
eq := p.Equals(c)
|
||||
if eq.IsKnown() && eq.False() {
|
||||
// there is a change to ignore at this path, so store the prior value
|
||||
ignoredValues = append(ignoredValues, ignoreChange{icPath, p, key})
|
||||
}
|
||||
}
|
||||
|
||||
if len(ignoredValues) == 0 {
|
||||
return config, nil
|
||||
}
|
||||
|
||||
ret, _ := cty.Transform(config, func(path cty.Path, v cty.Value) (cty.Value, error) {
|
||||
// Easy path for when we are only matching the entire value. The only
|
||||
// values we break up for inspection are maps.
|
||||
if !v.Type().IsMapType() {
|
||||
for _, ignored := range ignoredValues {
|
||||
if path.Equals(ignored.path) {
|
||||
return ignored.value, nil
|
||||
}
|
||||
}
|
||||
return v, nil
|
||||
}
|
||||
// We now know this must be a map, so we need to accumulate the values
|
||||
// key-by-key.
|
||||
|
||||
if !v.IsNull() && !v.IsKnown() {
|
||||
// since v is not known, we cannot ignore individual keys
|
||||
return v, nil
|
||||
}
|
||||
|
||||
// The configMap is the current configuration value, which we will
|
||||
// mutate based on the ignored paths and the prior map value.
|
||||
var configMap map[string]cty.Value
|
||||
switch {
|
||||
case v.IsNull() || v.LengthInt() == 0:
|
||||
configMap = map[string]cty.Value{}
|
||||
default:
|
||||
configMap = v.AsValueMap()
|
||||
}
|
||||
|
||||
for _, ignored := range ignoredValues {
|
||||
if !path.Equals(ignored.path) {
|
||||
continue
|
||||
}
|
||||
|
||||
if ignored.key.IsNull() {
|
||||
// The map address is confirmed to match at this point,
|
||||
// so if there is no key, we want the entire map and can
|
||||
// stop accumulating values.
|
||||
return ignored.value, nil
|
||||
}
|
||||
// Now we know we are ignoring a specific index of this map, so get
|
||||
// the config map and modify, add, or remove the desired key.
|
||||
|
||||
// We also need to create a prior map, so we can check for
|
||||
// existence while getting the value, because Value.Index will
|
||||
// return null for a key with a null value and for a non-existent
|
||||
// key.
|
||||
var priorMap map[string]cty.Value
|
||||
switch {
|
||||
case ignored.value.IsNull() || ignored.value.LengthInt() == 0:
|
||||
priorMap = map[string]cty.Value{}
|
||||
default:
|
||||
priorMap = ignored.value.AsValueMap()
|
||||
}
|
||||
|
||||
key := ignored.key.AsString()
|
||||
priorElem, keep := priorMap[key]
|
||||
|
||||
switch {
|
||||
case !keep:
|
||||
// this didn't exist in the old map value, so we're keeping the
|
||||
// "absence" of the key by removing it from the config
|
||||
delete(configMap, key)
|
||||
default:
|
||||
configMap[key] = priorElem
|
||||
}
|
||||
}
|
||||
|
||||
if len(configMap) == 0 {
|
||||
return cty.MapValEmpty(v.Type().ElementType()), nil
|
||||
}
|
||||
|
||||
return cty.MapVal(configMap), nil
|
||||
})
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
// EvalDiffDestroy is an EvalNode implementation that returns a plain
|
||||
// destroy diff.
|
||||
type EvalDiffDestroy struct {
|
||||
Addr addrs.ResourceInstance
|
||||
DeposedKey states.DeposedKey
|
||||
State **states.ResourceInstanceObject
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
|
||||
Output **plans.ResourceInstanceChange
|
||||
OutputState **states.ResourceInstanceObject
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalDiffDestroy) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
state := *n.State
|
||||
|
||||
if n.ProviderAddr.Provider.Type == "" {
|
||||
if n.DeposedKey == "" {
|
||||
panic(fmt.Sprintf("EvalDiffDestroy for %s does not have ProviderAddr set", absAddr))
|
||||
} else {
|
||||
panic(fmt.Sprintf("EvalDiffDestroy for %s (deposed %s) does not have ProviderAddr set", absAddr, n.DeposedKey))
|
||||
}
|
||||
}
|
||||
|
||||
// If there is no state or our attributes object is null then we're already
|
||||
// destroyed.
|
||||
if state == nil || state.Value.IsNull() {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Call pre-diff hook
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreDiff(
|
||||
absAddr, n.DeposedKey.Generation(),
|
||||
state.Value,
|
||||
cty.NullVal(cty.DynamicPseudoType),
|
||||
)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Change is always the same for a destroy. We don't need the provider's
|
||||
// help for this one.
|
||||
// TODO: Should we give the provider an opportunity to veto this?
|
||||
change := &plans.ResourceInstanceChange{
|
||||
Addr: absAddr,
|
||||
DeposedKey: n.DeposedKey,
|
||||
Change: plans.Change{
|
||||
Action: plans.Delete,
|
||||
Before: state.Value,
|
||||
After: cty.NullVal(cty.DynamicPseudoType),
|
||||
},
|
||||
Private: state.Private,
|
||||
ProviderAddr: n.ProviderAddr,
|
||||
}
|
||||
|
||||
// Call post-diff hook
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostDiff(
|
||||
absAddr,
|
||||
n.DeposedKey.Generation(),
|
||||
change.Action,
|
||||
change.Before,
|
||||
change.After,
|
||||
)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Update our output
|
||||
*n.Output = change
|
||||
|
||||
if n.OutputState != nil {
|
||||
// Record our proposed new state, which is nil because we're destroying.
|
||||
*n.OutputState = nil
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalReduceDiff is an EvalNode implementation that takes a planned resource
|
||||
// instance change as might be produced by EvalDiff or EvalDiffDestroy and
|
||||
// "simplifies" it to a single atomic action to be performed by a specific
|
||||
// graph node.
|
||||
//
|
||||
// Callers must specify whether they are a destroy node or a regular apply
|
||||
// node. If the result is NoOp then the given change requires no action for
|
||||
// the specific graph node calling this and so evaluation of the that graph
|
||||
// node should exit early and take no action.
|
||||
//
|
||||
// The object written to OutChange may either be identical to InChange or
|
||||
// a new change object derived from InChange. Because of the former case, the
|
||||
// caller must not mutate the object returned in OutChange.
|
||||
type EvalReduceDiff struct {
|
||||
Addr addrs.ResourceInstance
|
||||
InChange **plans.ResourceInstanceChange
|
||||
Destroy bool
|
||||
OutChange **plans.ResourceInstanceChange
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalReduceDiff) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
in := *n.InChange
|
||||
out := in.Simplify(n.Destroy)
|
||||
if n.OutChange != nil {
|
||||
*n.OutChange = out
|
||||
}
|
||||
if out.Action != in.Action {
|
||||
if n.Destroy {
|
||||
log.Printf("[TRACE] EvalReduceDiff: %s change simplified from %s to %s for destroy node", n.Addr, in.Action, out.Action)
|
||||
} else {
|
||||
log.Printf("[TRACE] EvalReduceDiff: %s change simplified from %s to %s for apply node", n.Addr, in.Action, out.Action)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// EvalWriteDiff is an EvalNode implementation that saves a planned change
|
||||
// for an instance object into the set of global planned changes.
|
||||
type EvalWriteDiff struct {
|
||||
Addr addrs.ResourceInstance
|
||||
DeposedKey states.DeposedKey
|
||||
ProviderSchema **ProviderSchema
|
||||
Change **plans.ResourceInstanceChange
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalWriteDiff) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
changes := ctx.Changes()
|
||||
addr := n.Addr.Absolute(ctx.Path())
|
||||
if n.Change == nil || *n.Change == nil {
|
||||
// Caller sets nil to indicate that we need to remove a change from
|
||||
// the set of changes.
|
||||
gen := states.CurrentGen
|
||||
if n.DeposedKey != states.NotDeposed {
|
||||
gen = n.DeposedKey
|
||||
}
|
||||
changes.RemoveResourceInstanceChange(addr, gen)
|
||||
return nil
|
||||
}
|
||||
|
||||
providerSchema := *n.ProviderSchema
|
||||
change := *n.Change
|
||||
|
||||
if change.Addr.String() != addr.String() || change.DeposedKey != n.DeposedKey {
|
||||
// Should never happen, and indicates a bug in the caller.
|
||||
panic("inconsistent address and/or deposed key in EvalWriteDiff")
|
||||
}
|
||||
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
csrc, err := change.Encode(schema.ImpliedType())
|
||||
if err != nil {
|
||||
diags = diags.Append(fmt.Errorf("failed to encode planned changes for %s: %s", addr, err))
|
||||
return diags
|
||||
}
|
||||
|
||||
changes.AppendResourceInstanceChange(csrc)
|
||||
if n.DeposedKey == states.NotDeposed {
|
||||
log.Printf("[TRACE] EvalWriteDiff: recorded %s change for %s", change.Action, addr)
|
||||
} else {
|
||||
log.Printf("[TRACE] EvalWriteDiff: recorded %s change for %s deposed object %s", change.Action, addr, n.DeposedKey)
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
|
@ -45,8 +45,8 @@ func buildProviderConfig(ctx EvalContext, addr addrs.AbsProviderConfig, config *
|
|||
}
|
||||
}
|
||||
|
||||
// GetProvider returns the providers.Interface and schema for a given provider.
|
||||
func GetProvider(ctx EvalContext, addr addrs.AbsProviderConfig) (providers.Interface, *ProviderSchema, error) {
|
||||
// getProvider returns the providers.Interface and schema for a given provider.
|
||||
func getProvider(ctx EvalContext, addr addrs.AbsProviderConfig) (providers.Interface, *ProviderSchema, error) {
|
||||
if addr.Provider.Type == "" {
|
||||
// Should never happen
|
||||
panic("GetProvider used with uninitialized provider configuration address")
|
||||
|
|
|
@ -1,192 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// evalReadData implements shared methods and data for the individual data
|
||||
// source eval nodes.
|
||||
type evalReadData struct {
|
||||
Addr addrs.ResourceInstance
|
||||
Config *configs.Resource
|
||||
Provider *providers.Interface
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
|
||||
ProviderSchema **ProviderSchema
|
||||
|
||||
// Planned is set when dealing with data resources that were deferred to
|
||||
// the apply walk, to let us see what was planned. If this is set, the
|
||||
// evaluation of the config is required to produce a wholly-known
|
||||
// configuration which is consistent with the partial object included
|
||||
// in this planned change.
|
||||
Planned **plans.ResourceInstanceChange
|
||||
|
||||
// State is the current state for the data source, and is updated once the
|
||||
// new state has been read.
|
||||
// While data sources are read-only, we need to start with the prior state
|
||||
// to determine if we have a change or not. If we needed to read a new
|
||||
// value, but it still matches the previous state, then we can record a
|
||||
// NoNop change. If the states don't match then we record a Read change so
|
||||
// that the new value is applied to the state.
|
||||
State **states.ResourceInstanceObject
|
||||
|
||||
// Output change records any change for this data source, which is
|
||||
// interpreted differently than changes for managed resources.
|
||||
// - During Refresh, this change is only used to correctly evaluate
|
||||
// references to the data source, but it is not saved.
|
||||
// - If a planned change has the action of plans.Read, it indicates that the
|
||||
// data source could not be evaluated yet, and reading is being deferred to
|
||||
// apply.
|
||||
// - If planned action is plans.Update, it indicates that the data source
|
||||
// was read, and the result needs to be stored in state during apply.
|
||||
OutputChange **plans.ResourceInstanceChange
|
||||
|
||||
// dependsOn stores the list of transitive resource addresses that any
|
||||
// configuration depends_on references may resolve to. This is used to
|
||||
// determine if there are any changes that will force this data sources to
|
||||
// be deferred to apply.
|
||||
dependsOn []addrs.ConfigResource
|
||||
}
|
||||
|
||||
// readDataSource handles everything needed to call ReadDataSource on the provider.
|
||||
// A previously evaluated configVal can be passed in, or a new one is generated
|
||||
// from the resource configuration.
|
||||
func (n *evalReadData) readDataSource(ctx EvalContext, configVal cty.Value) (cty.Value, tfdiags.Diagnostics) {
|
||||
var diags tfdiags.Diagnostics
|
||||
var newVal cty.Value
|
||||
|
||||
config := *n.Config
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
|
||||
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
|
||||
return newVal, diags
|
||||
}
|
||||
|
||||
provider := *n.Provider
|
||||
|
||||
providerSchema := *n.ProviderSchema
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
|
||||
return newVal, diags
|
||||
}
|
||||
|
||||
metaConfigVal, metaDiags := n.providerMetas(ctx)
|
||||
diags = diags.Append(metaDiags)
|
||||
if diags.HasErrors() {
|
||||
return newVal, diags
|
||||
}
|
||||
|
||||
log.Printf("[TRACE] EvalReadData: Re-validating config for %s", absAddr)
|
||||
validateResp := provider.ValidateDataSourceConfig(
|
||||
providers.ValidateDataSourceConfigRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
Config: configVal,
|
||||
},
|
||||
)
|
||||
if validateResp.Diagnostics.HasErrors() {
|
||||
return newVal, validateResp.Diagnostics.InConfigBody(config.Config)
|
||||
}
|
||||
|
||||
// If we get down here then our configuration is complete and we're read
|
||||
// to actually call the provider to read the data.
|
||||
log.Printf("[TRACE] EvalReadData: %s configuration is complete, so reading from provider", absAddr)
|
||||
|
||||
resp := provider.ReadDataSource(providers.ReadDataSourceRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
Config: configVal,
|
||||
ProviderMeta: metaConfigVal,
|
||||
})
|
||||
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
|
||||
if diags.HasErrors() {
|
||||
return newVal, diags
|
||||
}
|
||||
newVal = resp.State
|
||||
if newVal == cty.NilVal {
|
||||
// This can happen with incompletely-configured mocks. We'll allow it
|
||||
// and treat it as an alias for a properly-typed null value.
|
||||
newVal = cty.NullVal(schema.ImpliedType())
|
||||
}
|
||||
|
||||
for _, err := range newVal.Type().TestConformance(schema.ImpliedType()) {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid object",
|
||||
fmt.Sprintf(
|
||||
"Provider %q produced an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
|
||||
),
|
||||
))
|
||||
}
|
||||
if diags.HasErrors() {
|
||||
return newVal, diags
|
||||
}
|
||||
|
||||
if newVal.IsNull() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced null object",
|
||||
fmt.Sprintf(
|
||||
"Provider %q produced a null value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), absAddr,
|
||||
),
|
||||
))
|
||||
}
|
||||
|
||||
if !newVal.IsNull() && !newVal.IsWhollyKnown() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid object",
|
||||
fmt.Sprintf(
|
||||
"Provider %q produced a value for %s that is not wholly known.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), absAddr,
|
||||
),
|
||||
))
|
||||
|
||||
// We'll still save the object, but we need to eliminate any unknown
|
||||
// values first because we can't serialize them in the state file.
|
||||
// Note that this may cause set elements to be coalesced if they
|
||||
// differed only by having unknown values, but we don't worry about
|
||||
// that here because we're saving the value only for inspection
|
||||
// purposes; the error we added above will halt the graph walk.
|
||||
newVal = cty.UnknownAsNull(newVal)
|
||||
}
|
||||
|
||||
return newVal, diags
|
||||
}
|
||||
|
||||
func (n *evalReadData) providerMetas(ctx EvalContext) (cty.Value, tfdiags.Diagnostics) {
|
||||
var diags tfdiags.Diagnostics
|
||||
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
|
||||
if n.ProviderMetas != nil {
|
||||
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
|
||||
// if the provider doesn't support this feature, throw an error
|
||||
if (*n.ProviderSchema).ProviderMeta == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
|
||||
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
|
||||
Subject: &m.ProviderRange,
|
||||
})
|
||||
} else {
|
||||
var configDiags tfdiags.Diagnostics
|
||||
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
|
||||
diags = diags.Append(configDiags)
|
||||
}
|
||||
}
|
||||
}
|
||||
return metaConfigVal, diags
|
||||
}
|
|
@ -1,84 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// evalReadDataApply is an EvalNode implementation that deals with the main part
|
||||
// of the data resource lifecycle: either actually reading from the data source
|
||||
// or generating a plan to do so.
|
||||
type evalReadDataApply struct {
|
||||
evalReadData
|
||||
}
|
||||
|
||||
func (n *evalReadDataApply) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
var planned *plans.ResourceInstanceChange
|
||||
if n.Planned != nil {
|
||||
planned = *n.Planned
|
||||
}
|
||||
|
||||
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
|
||||
return diags
|
||||
}
|
||||
|
||||
if planned != nil && planned.Action != plans.Read {
|
||||
// If any other action gets in here then that's always a bug; this
|
||||
// EvalNode only deals with reading.
|
||||
diags = diags.Append(fmt.Errorf(
|
||||
"invalid action %s for %s: only Read is supported (this is a bug in Terraform; please report it!)",
|
||||
planned.Action, absAddr,
|
||||
))
|
||||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreApply(absAddr, states.CurrentGen, planned.Action, planned.Before, planned.After)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
config := *n.Config
|
||||
providerSchema := *n.ProviderSchema
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
|
||||
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
|
||||
|
||||
configVal, _, configDiags := ctx.EvaluateBlock(config.Config, schema, nil, keyData)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
newVal, readDiags := n.readDataSource(ctx, configVal)
|
||||
diags = diags.Append(readDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
*n.State = &states.ResourceInstanceObject{
|
||||
Value: newVal,
|
||||
Status: states.ObjectReady,
|
||||
}
|
||||
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostApply(absAddr, states.CurrentGen, newVal, diags.Err())
|
||||
}))
|
||||
|
||||
return diags
|
||||
}
|
|
@ -1,170 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
"github.com/hashicorp/terraform/plans/objchange"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// evalReadDataPlan is an EvalNode implementation that deals with the main part
|
||||
// of the data resource lifecycle: either actually reading from the data source
|
||||
// or generating a plan to do so.
|
||||
type evalReadDataPlan struct {
|
||||
evalReadData
|
||||
}
|
||||
|
||||
func (n *evalReadDataPlan) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
var configVal cty.Value
|
||||
|
||||
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
|
||||
return diags
|
||||
}
|
||||
|
||||
config := *n.Config
|
||||
providerSchema := *n.ProviderSchema
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
objTy := schema.ImpliedType()
|
||||
priorVal := cty.NullVal(objTy)
|
||||
if n.State != nil && *n.State != nil {
|
||||
priorVal = (*n.State).Value
|
||||
}
|
||||
|
||||
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
|
||||
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
|
||||
|
||||
var configDiags tfdiags.Diagnostics
|
||||
configVal, _, configDiags = ctx.EvaluateBlock(config.Config, schema, nil, keyData)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
configKnown := configVal.IsWhollyKnown()
|
||||
// If our configuration contains any unknown values, or we depend on any
|
||||
// unknown values then we must defer the read to the apply phase by
|
||||
// producing a "Read" change for this resource, and a placeholder value for
|
||||
// it in the state.
|
||||
if n.forcePlanRead(ctx) || !configKnown {
|
||||
if configKnown {
|
||||
log.Printf("[TRACE] evalReadDataPlan: %s configuration is fully known, but we're forcing a read plan to be created", absAddr)
|
||||
} else {
|
||||
log.Printf("[TRACE] evalReadDataPlan: %s configuration not fully known yet, so deferring to apply phase", absAddr)
|
||||
}
|
||||
|
||||
proposedNewVal := objchange.PlannedDataResourceObject(schema, configVal)
|
||||
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreDiff(absAddr, states.CurrentGen, priorVal, proposedNewVal)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Apply detects that the data source will need to be read by the After
|
||||
// value containing unknowns from PlanDataResourceObject.
|
||||
*n.OutputChange = &plans.ResourceInstanceChange{
|
||||
Addr: absAddr,
|
||||
ProviderAddr: n.ProviderAddr,
|
||||
Change: plans.Change{
|
||||
Action: plans.Read,
|
||||
Before: priorVal,
|
||||
After: proposedNewVal,
|
||||
},
|
||||
}
|
||||
|
||||
*n.State = &states.ResourceInstanceObject{
|
||||
Value: proposedNewVal,
|
||||
Status: states.ObjectPlanned,
|
||||
}
|
||||
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostDiff(absAddr, states.CurrentGen, plans.Read, priorVal, proposedNewVal)
|
||||
}))
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
// We have a complete configuration with no dependencies to wait on, so we
|
||||
// can read the data source into the state.
|
||||
newVal, readDiags := n.readDataSource(ctx, configVal)
|
||||
diags = diags.Append(readDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// if we have a prior value, we can check for any irregularities in the response
|
||||
if !priorVal.IsNull() {
|
||||
// While we don't propose planned changes for data sources, we can
|
||||
// generate a proposed value for comparison to ensure the data source
|
||||
// is returning a result following the rules of the provider contract.
|
||||
proposedVal := objchange.ProposedNewObject(schema, priorVal, configVal)
|
||||
if errs := objchange.AssertObjectCompatible(schema, proposedVal, newVal); len(errs) > 0 {
|
||||
// Resources have the LegacyTypeSystem field to signal when they are
|
||||
// using an SDK which may not produce precise values. While data
|
||||
// sources are read-only, they can still return a value which is not
|
||||
// compatible with the config+schema. Since we can't detect the legacy
|
||||
// type system, we can only warn about this for now.
|
||||
var buf strings.Builder
|
||||
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s.",
|
||||
n.ProviderAddr.Provider.String(), absAddr)
|
||||
for _, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
|
||||
}
|
||||
log.Print(buf.String())
|
||||
}
|
||||
}
|
||||
|
||||
*n.State = &states.ResourceInstanceObject{
|
||||
Value: newVal,
|
||||
Status: states.ObjectReady,
|
||||
}
|
||||
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostDiff(absAddr, states.CurrentGen, plans.Update, priorVal, newVal)
|
||||
}))
|
||||
return diags
|
||||
}
|
||||
|
||||
// forcePlanRead determines if we need to override the usual behavior of
|
||||
// immediately reading from the data source where possible, instead forcing us
|
||||
// to generate a plan.
|
||||
func (n *evalReadDataPlan) forcePlanRead(ctx EvalContext) bool {
|
||||
// Check and see if any depends_on dependencies have
|
||||
// changes, since they won't show up as changes in the
|
||||
// configuration.
|
||||
changes := ctx.Changes()
|
||||
for _, d := range n.dependsOn {
|
||||
if d.Resource.Mode == addrs.DataResourceMode {
|
||||
// Data sources have no external side effects, so they pose a need
|
||||
// to delay this read. If they do have a change planned, it must be
|
||||
// because of a dependency on a managed resource, in which case
|
||||
// we'll also encounter it in this list of dependencies.
|
||||
continue
|
||||
}
|
||||
|
||||
for _, change := range changes.GetChangesForConfigResource(d) {
|
||||
if change != nil && change.Action != plans.NoOp {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
|
@ -1,165 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/plans/objchange"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// EvalRefresh is an EvalNode implementation that does a refresh for
|
||||
// a resource.
|
||||
type EvalRefresh struct {
|
||||
Addr addrs.ResourceInstance
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
Provider *providers.Interface
|
||||
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
|
||||
ProviderSchema **ProviderSchema
|
||||
State **states.ResourceInstanceObject
|
||||
Output **states.ResourceInstanceObject
|
||||
}
|
||||
|
||||
// TODO: test
|
||||
func (n *EvalRefresh) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
state := *n.State
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
// If we have no state, we don't do any refreshing
|
||||
if state == nil {
|
||||
log.Printf("[DEBUG] refresh: %s: no state, so not refreshing", n.Addr.Absolute(ctx.Path()))
|
||||
return diags
|
||||
}
|
||||
|
||||
schema, _ := (*n.ProviderSchema).SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
|
||||
if n.ProviderMetas != nil {
|
||||
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
|
||||
log.Printf("[DEBUG] EvalRefresh: ProviderMeta config value set")
|
||||
// if the provider doesn't support this feature, throw an error
|
||||
if (*n.ProviderSchema).ProviderMeta == nil {
|
||||
log.Printf("[DEBUG] EvalRefresh: no ProviderMeta schema")
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
|
||||
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
|
||||
Subject: &m.ProviderRange,
|
||||
})
|
||||
} else {
|
||||
log.Printf("[DEBUG] EvalRefresh: ProviderMeta schema found: %+v", (*n.ProviderSchema).ProviderMeta)
|
||||
var configDiags tfdiags.Diagnostics
|
||||
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Call pre-refresh hook
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PreRefresh(absAddr, states.CurrentGen, state.Value)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Refresh!
|
||||
priorVal := state.Value
|
||||
|
||||
// Unmarked before sending to provider
|
||||
var priorPaths []cty.PathValueMarks
|
||||
if priorVal.ContainsMarked() {
|
||||
priorVal, priorPaths = priorVal.UnmarkDeepWithPaths()
|
||||
}
|
||||
|
||||
req := providers.ReadResourceRequest{
|
||||
TypeName: n.Addr.Resource.Type,
|
||||
PriorState: priorVal,
|
||||
Private: state.Private,
|
||||
ProviderMeta: metaConfigVal,
|
||||
}
|
||||
|
||||
provider := *n.Provider
|
||||
resp := provider.ReadResource(req)
|
||||
diags = diags.Append(resp.Diagnostics)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if resp.NewState == cty.NilVal {
|
||||
// This ought not to happen in real cases since it's not possible to
|
||||
// send NilVal over the plugin RPC channel, but it can come up in
|
||||
// tests due to sloppy mocking.
|
||||
panic("new state is cty.NilVal")
|
||||
}
|
||||
|
||||
for _, err := range resp.NewState.Type().TestConformance(schema.ImpliedType()) {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced invalid object",
|
||||
fmt.Sprintf(
|
||||
"Provider %q planned an invalid value for %s during refresh: %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
n.ProviderAddr.Provider.String(), absAddr, tfdiags.FormatError(err),
|
||||
),
|
||||
))
|
||||
}
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// We have no way to exempt provider using the legacy SDK from this check,
|
||||
// so we can only log inconsistencies with the updated state values.
|
||||
// In most cases these are not errors anyway, and represent "drift" from
|
||||
// external changes which will be handled by the subsequent plan.
|
||||
if errs := objchange.AssertObjectCompatible(schema, priorVal, resp.NewState); len(errs) > 0 {
|
||||
var buf strings.Builder
|
||||
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s during refresh.", n.ProviderAddr.Provider.String(), absAddr)
|
||||
for _, err := range errs {
|
||||
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
|
||||
}
|
||||
log.Print(buf.String())
|
||||
}
|
||||
|
||||
newState := state.DeepCopy()
|
||||
newState.Value = resp.NewState
|
||||
newState.Private = resp.Private
|
||||
newState.Dependencies = state.Dependencies
|
||||
newState.CreateBeforeDestroy = state.CreateBeforeDestroy
|
||||
|
||||
// Call post-refresh hook
|
||||
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostRefresh(absAddr, states.CurrentGen, priorVal, newState.Value)
|
||||
}))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Mark the value if necessary
|
||||
if len(priorPaths) > 0 {
|
||||
newState.Value = newState.Value.MarkWithPaths(priorPaths)
|
||||
}
|
||||
|
||||
if n.Output != nil {
|
||||
*n.Output = newState
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
|
@ -1,126 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
type phaseState int
|
||||
|
||||
const (
|
||||
workingState phaseState = iota
|
||||
refreshState
|
||||
)
|
||||
|
||||
// UpdateStateHook calls the PostStateUpdate hook with the current state.
|
||||
func UpdateStateHook(ctx EvalContext) error {
|
||||
// In principle we could grab the lock here just long enough to take a
|
||||
// deep copy and then pass that to our hooks below, but we'll instead
|
||||
// hold the hook for the duration to avoid the potential confusing
|
||||
// situation of us racing to call PostStateUpdate concurrently with
|
||||
// different state snapshots.
|
||||
stateSync := ctx.State()
|
||||
state := stateSync.Lock().DeepCopy()
|
||||
defer stateSync.Unlock()
|
||||
|
||||
// Call the hook
|
||||
err := ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostStateUpdate(state)
|
||||
})
|
||||
return err
|
||||
}
|
||||
|
||||
// EvalWriteStateDeposed is an EvalNode implementation that writes
|
||||
// an InstanceState out to the Deposed list of a resource in the state.
|
||||
type EvalWriteStateDeposed struct {
|
||||
// Addr is the address of the instance to read state for.
|
||||
Addr addrs.ResourceInstance
|
||||
|
||||
// Key indicates which deposed object to write to.
|
||||
Key states.DeposedKey
|
||||
|
||||
// State is the object state to save.
|
||||
State **states.ResourceInstanceObject
|
||||
|
||||
// ProviderSchema is the schema for the provider given in ProviderAddr.
|
||||
ProviderSchema **ProviderSchema
|
||||
|
||||
// ProviderAddr is the address of the provider configuration that
|
||||
// produced the given object.
|
||||
ProviderAddr addrs.AbsProviderConfig
|
||||
}
|
||||
|
||||
func (n *EvalWriteStateDeposed) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
if n.State == nil {
|
||||
// Note that a pointer _to_ nil is valid here, indicating the total
|
||||
// absense of an object as we'd see during destroy.
|
||||
panic("EvalWriteStateDeposed used with no ResourceInstanceObject")
|
||||
}
|
||||
|
||||
absAddr := n.Addr.Absolute(ctx.Path())
|
||||
key := n.Key
|
||||
state := ctx.State()
|
||||
|
||||
if key == states.NotDeposed {
|
||||
// should never happen
|
||||
diags = diags.Append(fmt.Errorf("can't save deposed object for %s without a deposed key; this is a bug in Terraform that should be reported", absAddr))
|
||||
return diags
|
||||
}
|
||||
|
||||
obj := *n.State
|
||||
if obj == nil {
|
||||
// No need to encode anything: we'll just write it directly.
|
||||
state.SetResourceInstanceDeposed(absAddr, key, nil, n.ProviderAddr)
|
||||
log.Printf("[TRACE] EvalWriteStateDeposed: removing state object for %s deposed %s", absAddr, key)
|
||||
return diags
|
||||
}
|
||||
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
|
||||
// Should never happen, unless our state object is nil
|
||||
panic("EvalWriteStateDeposed used with no ProviderSchema object")
|
||||
}
|
||||
|
||||
schema, currentVersion := (*n.ProviderSchema).SchemaForResourceAddr(n.Addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// It shouldn't be possible to get this far in any real scenario
|
||||
// without a schema, but we might end up here in contrived tests that
|
||||
// fail to set up their world properly.
|
||||
diags = diags.Append(fmt.Errorf("failed to encode %s in state: no resource type schema available", absAddr))
|
||||
return diags
|
||||
}
|
||||
src, err := obj.Encode(schema.ImpliedType(), currentVersion)
|
||||
if err != nil {
|
||||
diags = diags.Append(fmt.Errorf("failed to encode %s in state: %s", absAddr, err))
|
||||
return diags
|
||||
}
|
||||
|
||||
log.Printf("[TRACE] EvalWriteStateDeposed: writing state object for %s deposed %s", absAddr, key)
|
||||
state.SetResourceInstanceDeposed(absAddr, key, src, n.ProviderAddr)
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalDeposeState is an EvalNode implementation that moves the current object
|
||||
// for the given instance to instead be a deposed object, leaving the instance
|
||||
// with no current object.
|
||||
// This is used at the beginning of a create-before-destroy replace action so
|
||||
// that the create can create while preserving the old state of the
|
||||
// to-be-destroyed object.
|
||||
type EvalDeposeState struct {
|
||||
Addr addrs.ResourceInstance
|
||||
|
||||
// ForceKey, if a value other than states.NotDeposed, will be used as the
|
||||
// key for the newly-created deposed object that results from this action.
|
||||
// If set to states.NotDeposed (the zero value), a new unique key will be
|
||||
// allocated.
|
||||
ForceKey states.DeposedKey
|
||||
|
||||
// OutputKey, if non-nil, will be written with the deposed object key that
|
||||
// was generated for the object. This can then be passed to
|
||||
// EvalUndeposeState.Key so it knows which deposed instance to forget.
|
||||
OutputKey *states.DeposedKey
|
||||
}
|
|
@ -1,82 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs/configschema"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
)
|
||||
|
||||
func TestEvalWriteStateDeposed(t *testing.T) {
|
||||
state := states.NewState()
|
||||
ctx := new(MockEvalContext)
|
||||
ctx.StateState = state.SyncWrapper()
|
||||
ctx.PathPath = addrs.RootModuleInstance
|
||||
|
||||
mockProvider := mockProviderWithResourceTypeSchema("aws_instance", &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"id": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
})
|
||||
providerSchema := mockProvider.GetSchemaReturn
|
||||
|
||||
obj := &states.ResourceInstanceObject{
|
||||
Value: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-abc123"),
|
||||
}),
|
||||
Status: states.ObjectReady,
|
||||
}
|
||||
node := &EvalWriteStateDeposed{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "aws_instance",
|
||||
Name: "foo",
|
||||
}.Instance(addrs.NoKey),
|
||||
Key: states.DeposedKey("deadbeef"),
|
||||
|
||||
State: &obj,
|
||||
|
||||
ProviderSchema: &providerSchema,
|
||||
ProviderAddr: addrs.RootModuleInstance.ProviderConfigDefault(addrs.NewDefaultProvider("aws")),
|
||||
}
|
||||
diags := node.Eval(ctx)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("Got err: %#v", diags.ErrWithWarnings())
|
||||
}
|
||||
|
||||
checkStateString(t, state, `
|
||||
aws_instance.foo: (1 deposed)
|
||||
ID = <not created>
|
||||
provider = provider["registry.terraform.io/hashicorp/aws"]
|
||||
Deposed ID 1 = i-abc123
|
||||
`)
|
||||
}
|
||||
|
||||
func TestUpdateStateHook(t *testing.T) {
|
||||
mockHook := new(MockHook)
|
||||
|
||||
state := states.NewState()
|
||||
state.Module(addrs.RootModuleInstance).SetLocalValue("foo", cty.StringVal("hello"))
|
||||
|
||||
ctx := new(MockEvalContext)
|
||||
ctx.HookHook = mockHook
|
||||
ctx.StateState = state.SyncWrapper()
|
||||
|
||||
if err := UpdateStateHook(ctx); err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
}
|
||||
|
||||
if !mockHook.PostStateUpdateCalled {
|
||||
t.Fatal("should call PostStateUpdate")
|
||||
}
|
||||
if mockHook.PostStateUpdateState.LocalValue(addrs.LocalValue{Name: "foo"}.Absolute(addrs.RootModuleInstance)) != cty.StringVal("hello") {
|
||||
t.Fatalf("wrong state passed to hook: %s", spew.Sdump(mockHook.PostStateUpdateState))
|
||||
}
|
||||
}
|
|
@ -1,519 +0,0 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/configs/configschema"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/provisioners"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
"github.com/zclconf/go-cty/cty/convert"
|
||||
"github.com/zclconf/go-cty/cty/gocty"
|
||||
)
|
||||
|
||||
// EvalValidateProvisioner validates the configuration of a provisioner
|
||||
// belonging to a resource. The provisioner config is expected to contain the
|
||||
// merged connection configurations.
|
||||
type EvalValidateProvisioner struct {
|
||||
ResourceAddr addrs.Resource
|
||||
Provisioner *provisioners.Interface
|
||||
Schema **configschema.Block
|
||||
Config *configs.Provisioner
|
||||
ResourceHasCount bool
|
||||
ResourceHasForEach bool
|
||||
}
|
||||
|
||||
func (n *EvalValidateProvisioner) Validate(ctx EvalContext) error {
|
||||
provisioner := *n.Provisioner
|
||||
config := *n.Config
|
||||
schema := *n.Schema
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
// Validate the provisioner's own config first
|
||||
configVal, _, configDiags := n.evaluateBlock(ctx, config.Config, schema)
|
||||
diags = diags.Append(configDiags)
|
||||
if configDiags.HasErrors() {
|
||||
return diags.Err()
|
||||
}
|
||||
|
||||
if configVal == cty.NilVal {
|
||||
// Should never happen for a well-behaved EvaluateBlock implementation
|
||||
return fmt.Errorf("EvaluateBlock returned nil value")
|
||||
}
|
||||
|
||||
req := provisioners.ValidateProvisionerConfigRequest{
|
||||
Config: configVal,
|
||||
}
|
||||
|
||||
resp := provisioner.ValidateProvisionerConfig(req)
|
||||
diags = diags.Append(resp.Diagnostics)
|
||||
|
||||
// Now validate the connection config, which contains the merged bodies
|
||||
// of the resource and provisioner connection blocks.
|
||||
connDiags := n.validateConnConfig(ctx, config.Connection, n.ResourceAddr)
|
||||
diags = diags.Append(connDiags)
|
||||
|
||||
return diags.NonFatalErr()
|
||||
}
|
||||
|
||||
func (n *EvalValidateProvisioner) validateConnConfig(ctx EvalContext, config *configs.Connection, self addrs.Referenceable) tfdiags.Diagnostics {
|
||||
// We can't comprehensively validate the connection config since its
|
||||
// final structure is decided by the communicator and we can't instantiate
|
||||
// that until we have a complete instance state. However, we *can* catch
|
||||
// configuration keys that are not valid for *any* communicator, catching
|
||||
// typos early rather than waiting until we actually try to run one of
|
||||
// the resource's provisioners.
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
if config == nil || config.Config == nil {
|
||||
// No block to validate
|
||||
return diags
|
||||
}
|
||||
|
||||
// We evaluate here just by evaluating the block and returning any
|
||||
// diagnostics we get, since evaluation alone is enough to check for
|
||||
// extraneous arguments and incorrectly-typed arguments.
|
||||
_, _, configDiags := n.evaluateBlock(ctx, config.Config, connectionBlockSupersetSchema)
|
||||
diags = diags.Append(configDiags)
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
func (n *EvalValidateProvisioner) evaluateBlock(ctx EvalContext, body hcl.Body, schema *configschema.Block) (cty.Value, hcl.Body, tfdiags.Diagnostics) {
|
||||
keyData := EvalDataForNoInstanceKey
|
||||
selfAddr := n.ResourceAddr.Instance(addrs.NoKey)
|
||||
|
||||
if n.ResourceHasCount {
|
||||
// For a resource that has count, we allow count.index but don't
|
||||
// know at this stage what it will return.
|
||||
keyData = InstanceKeyEvalData{
|
||||
CountIndex: cty.UnknownVal(cty.Number),
|
||||
}
|
||||
|
||||
// "self" can't point to an unknown key, but we'll force it to be
|
||||
// key 0 here, which should return an unknown value of the
|
||||
// expected type since none of these elements are known at this
|
||||
// point anyway.
|
||||
selfAddr = n.ResourceAddr.Instance(addrs.IntKey(0))
|
||||
} else if n.ResourceHasForEach {
|
||||
// For a resource that has for_each, we allow each.value and each.key
|
||||
// but don't know at this stage what it will return.
|
||||
keyData = InstanceKeyEvalData{
|
||||
EachKey: cty.UnknownVal(cty.String),
|
||||
EachValue: cty.DynamicVal,
|
||||
}
|
||||
|
||||
// "self" can't point to an unknown key, but we'll force it to be
|
||||
// key "" here, which should return an unknown value of the
|
||||
// expected type since none of these elements are known at
|
||||
// this point anyway.
|
||||
selfAddr = n.ResourceAddr.Instance(addrs.StringKey(""))
|
||||
}
|
||||
|
||||
return ctx.EvaluateBlock(body, schema, selfAddr, keyData)
|
||||
}
|
||||
|
||||
// connectionBlockSupersetSchema is a schema representing the superset of all
|
||||
// possible arguments for "connection" blocks across all supported connection
|
||||
// types.
|
||||
//
|
||||
// This currently lives here because we've not yet updated our communicator
|
||||
// subsystem to be aware of schema itself. Once that is done, we can remove
|
||||
// this and use a type-specific schema from the communicator to validate
|
||||
// exactly what is expected for a given connection type.
|
||||
var connectionBlockSupersetSchema = &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
// NOTE: "type" is not included here because it's treated special
|
||||
// by the config loader and stored away in a separate field.
|
||||
|
||||
// Common attributes for both connection types
|
||||
"host": {
|
||||
Type: cty.String,
|
||||
Required: true,
|
||||
},
|
||||
"type": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"user": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"password": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"port": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"timeout": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"script_path": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
// For type=ssh only (enforced in ssh communicator)
|
||||
"target_platform": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"private_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"certificate": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"host_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"agent": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
"agent_identity": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_host": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_host_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_port": {
|
||||
Type: cty.Number,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_user": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_password": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_private_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_certificate": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
|
||||
// For type=winrm only (enforced in winrm communicator)
|
||||
"https": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
"insecure": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
"cacert": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"use_ntlm": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// EvalValidateResource validates the configuration of a resource.
|
||||
type EvalValidateResource struct {
|
||||
Addr addrs.Resource
|
||||
Provider *providers.Interface
|
||||
ProviderSchema **ProviderSchema
|
||||
Config *configs.Resource
|
||||
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
|
||||
|
||||
// ConfigVal, if non-nil, will be updated with the value resulting from
|
||||
// evaluating the given configuration body. Since validation is performed
|
||||
// very early, this value is likely to contain lots of unknown values,
|
||||
// but its type will conform to the schema of the resource type associated
|
||||
// with the resource instance being validated.
|
||||
ConfigVal *cty.Value
|
||||
}
|
||||
|
||||
func (n *EvalValidateResource) Validate(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
if *n.ProviderSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("EvalValidateResource has nil schema for %s", n.Addr))
|
||||
return diags
|
||||
}
|
||||
|
||||
provider := *n.Provider
|
||||
cfg := *n.Config
|
||||
schema := *n.ProviderSchema
|
||||
mode := cfg.Mode
|
||||
|
||||
keyData := EvalDataForNoInstanceKey
|
||||
|
||||
switch {
|
||||
case n.Config.Count != nil:
|
||||
// If the config block has count, we'll evaluate with an unknown
|
||||
// number as count.index so we can still type check even though
|
||||
// we won't expand count until the plan phase.
|
||||
keyData = InstanceKeyEvalData{
|
||||
CountIndex: cty.UnknownVal(cty.Number),
|
||||
}
|
||||
|
||||
// Basic type-checking of the count argument. More complete validation
|
||||
// of this will happen when we DynamicExpand during the plan walk.
|
||||
countDiags := n.validateCount(ctx, n.Config.Count)
|
||||
diags = diags.Append(countDiags)
|
||||
|
||||
case n.Config.ForEach != nil:
|
||||
keyData = InstanceKeyEvalData{
|
||||
EachKey: cty.UnknownVal(cty.String),
|
||||
EachValue: cty.UnknownVal(cty.DynamicPseudoType),
|
||||
}
|
||||
|
||||
// Evaluate the for_each expression here so we can expose the diagnostics
|
||||
forEachDiags := n.validateForEach(ctx, n.Config.ForEach)
|
||||
diags = diags.Append(forEachDiags)
|
||||
}
|
||||
|
||||
diags = diags.Append(validateDependsOn(ctx, n.Config.DependsOn))
|
||||
|
||||
// Validate the provider_meta block for the provider this resource
|
||||
// belongs to, if there is one.
|
||||
//
|
||||
// Note: this will return an error for every resource a provider
|
||||
// uses in a module, if the provider_meta for that module is
|
||||
// incorrect. The only way to solve this that we've foudn is to
|
||||
// insert a new ProviderMeta graph node in the graph, and make all
|
||||
// that provider's resources in the module depend on the node. That's
|
||||
// an awful heavy hammer to swing for this feature, which should be
|
||||
// used only in limited cases with heavy coordination with the
|
||||
// Terraform team, so we're going to defer that solution for a future
|
||||
// enhancement to this functionality.
|
||||
/*
|
||||
if n.ProviderMetas != nil {
|
||||
if m, ok := n.ProviderMetas[n.ProviderAddr.ProviderConfig.Type]; ok && m != nil {
|
||||
// if the provider doesn't support this feature, throw an error
|
||||
if (*n.ProviderSchema).ProviderMeta == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", cfg.ProviderConfigAddr()),
|
||||
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
|
||||
Subject: &m.ProviderRange,
|
||||
})
|
||||
} else {
|
||||
_, _, metaDiags := ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
|
||||
diags = diags.Append(metaDiags)
|
||||
}
|
||||
}
|
||||
}
|
||||
*/
|
||||
// BUG(paddy): we're not validating provider_meta blocks on EvalValidate right now
|
||||
// because the ProviderAddr for the resource isn't available on the EvalValidate
|
||||
// struct.
|
||||
|
||||
// Provider entry point varies depending on resource mode, because
|
||||
// managed resources and data resources are two distinct concepts
|
||||
// in the provider abstraction.
|
||||
switch mode {
|
||||
case addrs.ManagedResourceMode:
|
||||
schema, _ := schema.SchemaForResourceType(mode, cfg.Type)
|
||||
if schema == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid resource type",
|
||||
Detail: fmt.Sprintf("The provider %s does not support resource type %q.", cfg.ProviderConfigAddr(), cfg.Type),
|
||||
Subject: &cfg.TypeRange,
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
configVal, _, valDiags := ctx.EvaluateBlock(cfg.Config, schema, nil, keyData)
|
||||
diags = diags.Append(valDiags)
|
||||
if valDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if cfg.Managed != nil { // can be nil only in tests with poorly-configured mocks
|
||||
for _, traversal := range cfg.Managed.IgnoreChanges {
|
||||
// validate the ignore_changes traversals apply.
|
||||
moreDiags := schema.StaticValidateTraversal(traversal)
|
||||
diags = diags.Append(moreDiags)
|
||||
|
||||
// TODO: we want to notify users that they can't use
|
||||
// ignore_changes for computed attributes, but we don't have an
|
||||
// easy way to correlate the config value, schema and
|
||||
// traversal together.
|
||||
}
|
||||
}
|
||||
|
||||
// Use unmarked value for validate request
|
||||
unmarkedConfigVal, _ := configVal.UnmarkDeep()
|
||||
req := providers.ValidateResourceTypeConfigRequest{
|
||||
TypeName: cfg.Type,
|
||||
Config: unmarkedConfigVal,
|
||||
}
|
||||
|
||||
resp := provider.ValidateResourceTypeConfig(req)
|
||||
diags = diags.Append(resp.Diagnostics.InConfigBody(cfg.Config))
|
||||
|
||||
if n.ConfigVal != nil {
|
||||
*n.ConfigVal = configVal
|
||||
}
|
||||
|
||||
case addrs.DataResourceMode:
|
||||
schema, _ := schema.SchemaForResourceType(mode, cfg.Type)
|
||||
		if schema == nil {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid data source",
				Detail:   fmt.Sprintf("The provider %s does not support data source %q.", cfg.ProviderConfigAddr(), cfg.Type),
				Subject:  &cfg.TypeRange,
			})
			return diags
		}

		configVal, _, valDiags := ctx.EvaluateBlock(cfg.Config, schema, nil, keyData)
		diags = diags.Append(valDiags)
		if valDiags.HasErrors() {
			return diags
		}

		// Use unmarked value for validate request
		unmarkedConfigVal, _ := configVal.UnmarkDeep()
		req := providers.ValidateDataSourceConfigRequest{
			TypeName: cfg.Type,
			Config:   unmarkedConfigVal,
		}

		resp := provider.ValidateDataSourceConfig(req)
		diags = diags.Append(resp.Diagnostics.InConfigBody(cfg.Config))
	}

	return diags
}

func (n *EvalValidateResource) validateCount(ctx EvalContext, expr hcl.Expression) tfdiags.Diagnostics {
	if expr == nil {
		return nil
	}

	var diags tfdiags.Diagnostics

	countVal, countDiags := ctx.EvaluateExpr(expr, cty.Number, nil)
	diags = diags.Append(countDiags)
	if diags.HasErrors() {
		return diags
	}

	if countVal.IsNull() {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "Invalid count argument",
			Detail:   `The given "count" argument value is null. An integer is required.`,
			Subject:  expr.Range().Ptr(),
		})
		return diags
	}

	var err error
	countVal, err = convert.Convert(countVal, cty.Number)
	if err != nil {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "Invalid count argument",
			Detail:   fmt.Sprintf(`The given "count" argument value is unsuitable: %s.`, err),
			Subject:  expr.Range().Ptr(),
		})
		return diags
	}

	// If the value isn't known then that's the best we can do for now, but
	// we'll check more thoroughly during the plan walk.
	if !countVal.IsKnown() {
		return diags
	}

	// If we _do_ know the value, then we can do a few more checks here.
	var count int
	err = gocty.FromCtyValue(countVal, &count)
	if err != nil {
		// Isn't a whole number, etc.
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "Invalid count argument",
			Detail:   fmt.Sprintf(`The given "count" argument value is unsuitable: %s.`, err),
			Subject:  expr.Range().Ptr(),
		})
		return diags
	}

	if count < 0 {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "Invalid count argument",
			Detail:   `The given "count" argument value is unsuitable: count cannot be negative.`,
			Subject:  expr.Range().Ptr(),
		})
		return diags
	}

	return diags
}

func (n *EvalValidateResource) validateForEach(ctx EvalContext, expr hcl.Expression) (diags tfdiags.Diagnostics) {
	val, forEachDiags := evaluateForEachExpressionValue(expr, ctx, true)
	// If the value isn't known then that's the best we can do for now, but
	// we'll check more thoroughly during the plan walk
	if !val.IsKnown() {
		return diags
	}

	if forEachDiags.HasErrors() {
		diags = diags.Append(forEachDiags)
	}

	return diags
}

func validateDependsOn(ctx EvalContext, dependsOn []hcl.Traversal) (diags tfdiags.Diagnostics) {
	for _, traversal := range dependsOn {
		ref, refDiags := addrs.ParseRef(traversal)
		diags = diags.Append(refDiags)
		if !refDiags.HasErrors() && len(ref.Remaining) != 0 {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid depends_on reference",
				Detail:   "References in depends_on must be to a whole object (resource, etc), not to an attribute of an object.",
				Subject:  ref.Remaining.SourceRange().Ptr(),
			})
		}

		// The ref must also refer to something that exists. To test that,
		// we'll just eval it and count on the fact that our evaluator will
		// detect references to non-existent objects.
		if !diags.HasErrors() {
			scope := ctx.EvaluationScope(nil, EvalDataForNoInstanceKey)
			if scope != nil { // sometimes nil in tests, due to incomplete mocks
				_, refDiags = scope.EvalReference(ref, cty.DynamicPseudoType)
				diags = diags.Append(refDiags)
			}
		}
	}
	return diags
}
@@ -3,9 +3,7 @@ package terraform

import "github.com/hashicorp/terraform/tfdiags"

// GraphNodeExecutable is the interface that graph nodes must implement to
// enable execution. This is an alternative to GraphNodeEvalable, which is in
// the process of being removed. A given graph node should _not_ implement both
// GraphNodeExecutable and GraphNodeEvalable.
// enable execution.
type GraphNodeExecutable interface {
	Execute(EvalContext, walkOperation) tfdiags.Diagnostics
}
@@ -155,13 +155,13 @@ func (n *nodeModuleVariable) Execute(ctx EvalContext, op walkOperation) (diags t
	switch op {
	case walkValidate:
		vals, err = n.EvalModuleCallArgument(ctx, true)
		vals, err = n.evalModuleCallArgument(ctx, true)
		diags = diags.Append(err)
		if diags.HasErrors() {
			return diags
		}
	default:
		vals, err = n.EvalModuleCallArgument(ctx, false)
		vals, err = n.evalModuleCallArgument(ctx, false)
		diags = diags.Append(err)
		if diags.HasErrors() {
			return diags
@@ -187,7 +187,7 @@ func (n *nodeModuleVariable) DotNode(name string, opts *dag.DotOpts) *dag.DotNod
	}
}

// EvalModuleCallArgument produces the value for a particular variable as will
// evalModuleCallArgument produces the value for a particular variable as will
// be used by a child module instance.
//
// The result is written into a map, with its key set to the local name of the
@@ -199,7 +199,7 @@ func (n *nodeModuleVariable) DotNode(name string, opts *dag.DotOpts) *dag.DotNod
// validateOnly indicates that this evaluation is only for config
// validation, and we will not have any expansion module instance
// repetition data.
func (n *nodeModuleVariable) EvalModuleCallArgument(ctx EvalContext, validateOnly bool) (map[string]cty.Value, error) {
func (n *nodeModuleVariable) evalModuleCallArgument(ctx EvalContext, validateOnly bool) (map[string]cty.Value, error) {
	wantType := n.Config.Type
	name := n.Addr.Variable.Name
	expr := n.Expr
@@ -487,7 +487,7 @@ func (n *NodeApplyableOutput) setValue(state *states.SyncState, changes *plans.C
			// Should never happen, since we just constructed this right above
			panic(fmt.Sprintf("planned change for %s could not be encoded: %s", n.Addr, err))
		}
		log.Printf("[TRACE] ExecuteWriteOutput: Saving %s change for %s in changeset", change.Action, n.Addr)
		log.Printf("[TRACE] setValue: Saving %s change for %s in changeset", change.Action, n.Addr)
		changes.RemoveOutputChange(n.Addr) // remove any existing planned change, if present
		changes.AppendOutputChange(cs)     // add the new planned change
	}
@@ -496,12 +496,12 @@ func (n *NodeApplyableOutput) setValue(state *states.SyncState, changes *plans.C
		// The state itself doesn't represent unknown values, so we null them
		// out here and then we'll save the real unknown value in the planned
		// changeset below, if we have one on this graph walk.
		log.Printf("[TRACE] EvalWriteOutput: Saving value for %s in state", n.Addr)
		log.Printf("[TRACE] setValue: Saving value for %s in state", n.Addr)
		unmarkedVal, _ := val.UnmarkDeep()
		stateVal := cty.UnknownAsNull(unmarkedVal)
		state.SetOutputValue(n.Addr, stateVal, n.Config.Sensitive)
	} else {
		log.Printf("[TRACE] EvalWriteOutput: Removing %s from state (it is now null)", n.Addr)
		log.Printf("[TRACE] setValue: Removing %s from state (it is now null)", n.Addr)
		state.RemoveOutputValue(n.Addr)
	}
@@ -27,7 +27,7 @@ func (n *NodeApplyableProvider) Execute(ctx EvalContext, op walkOperation) (diag
	if diags.HasErrors() {
		return diags
	}
	provider, _, err := GetProvider(ctx, n.Addr)
	provider, _, err := getProvider(ctx, n.Addr)
	diags = diags.Append(err)
	if diags.HasErrors() {
		return diags
@@ -71,10 +71,10 @@ func (n *NodeApplyableProvider) ValidateProvider(ctx EvalContext, provider provi
	}

	configVal, _, evalDiags := ctx.EvaluateBlock(configBody, configSchema, nil, EvalDataForNoInstanceKey)
	diags = diags.Append(evalDiags)
	if evalDiags.HasErrors() {
		return diags
		return diags.Append(evalDiags)
	}
	diags = diags.Append(evalDiags)

	// If our config value contains any marked values, ensure those are
	// stripped out before sending this to the provider
@@ -134,10 +134,22 @@ func (n *NodeApplyableProvider) ConfigureProvider(ctx EvalContext, provider prov
	// PrepareProviderConfig is only used for validation. We are intentionally
	// ignoring the PreparedConfig field to maintain existing behavior.
	prepareResp := provider.PrepareProviderConfig(req)
	diags = diags.Append(prepareResp.Diagnostics)
	if diags.HasErrors() {
		return diags
	if prepareResp.Diagnostics.HasErrors() {
		if config == nil {
			// If there isn't an explicit "provider" block in the configuration,
			// this error message won't be very clear. Add some detail to the
			// error message in this case.
			diags = diags.Append(tfdiags.Sourceless(
				tfdiags.Error,
				"Invalid provider configuration",
				fmt.Sprintf(providerConfigErr, prepareResp.Diagnostics.Err(), n.Addr.Provider),
			))
			return diags
		} else {
			return diags.Append(prepareResp.Diagnostics)
		}
	}
	diags = diags.Append(prepareResp.Diagnostics)

	// If the provider returns something different, log a warning to help
	// indicate to provider developers that the value is not used.
@@ -147,7 +159,27 @@ func (n *NodeApplyableProvider) ConfigureProvider(ctx EvalContext, provider prov
	}

	configDiags := ctx.ConfigureProvider(n.Addr, unmarkedConfigVal)
	if configDiags.HasErrors() {
		if config == nil {
			// If there isn't an explicit "provider" block in the configuration,
			// this error message won't be very clear. Add some detail to the
			// error message in this case.
			diags = diags.Append(tfdiags.Sourceless(
				tfdiags.Error,
				"Invalid provider configuration",
				fmt.Sprintf(providerConfigErr, configDiags.InConfigBody(configBody).Err(), n.Addr.Provider),
			))
			return diags
		} else {
			return diags.Append(configDiags.InConfigBody(configBody))
		}
	}
	diags = diags.Append(configDiags.InConfigBody(configBody))

	return diags
}

const providerConfigErr = `%s

Provider %q requires explicit configuration. Add a provider block to the root module and configure the provider's required arguments as described in the provider documentation.
`
@@ -1,11 +1,16 @@
package terraform

import (
	"fmt"
	"strings"
	"testing"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/terraform/addrs"
	"github.com/hashicorp/terraform/configs"
	"github.com/hashicorp/terraform/configs/configschema"
	"github.com/hashicorp/terraform/providers"
	"github.com/hashicorp/terraform/tfdiags"
	"github.com/zclconf/go-cty/cty"
)
@ -224,3 +229,256 @@ func TestNodeApplyableProviderExecute_emptyValidate(t *testing.T) {
|
|||
t.Fatal("should not be called")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeApplyableProvider_Validate(t *testing.T) {
|
||||
provider := &MockProvider{
|
||||
GetSchemaReturn: &ProviderSchema{
|
||||
Provider: &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"region": {
|
||||
Type: cty.String,
|
||||
Required: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
ctx := &MockEvalContext{ProviderProvider: provider}
|
||||
ctx.installSimpleEval()
|
||||
|
||||
t.Run("valid", func(t *testing.T) {
|
||||
config := &configs.Provider{
|
||||
Name: "test",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"region": cty.StringVal("mars"),
|
||||
}),
|
||||
}
|
||||
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
Config: config,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ValidateProvider(ctx, provider)
|
||||
if diags.HasErrors() {
|
||||
t.Errorf("unexpected error with valid config: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("invalid", func(t *testing.T) {
|
||||
config := &configs.Provider{
|
||||
Name: "test",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"region": cty.MapValEmpty(cty.String),
|
||||
}),
|
||||
}
|
||||
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
Config: config,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ValidateProvider(ctx, provider)
|
||||
if !diags.HasErrors() {
|
||||
t.Error("missing expected error with invalid config")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("empty config", func(t *testing.T) {
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ValidateProvider(ctx, provider)
|
||||
if diags.HasErrors() {
|
||||
t.Errorf("unexpected error with empty config: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// This test specifically tests responses from the
// providers.PrepareProviderConfigFn. See
// TestNodeApplyableProvider_ConfigProvider_config_fn_err for
// providers.ConfigureRequest responses.
|
||||
func TestNodeApplyableProvider_ConfigProvider(t *testing.T) {
|
||||
provider := &MockProvider{
|
||||
GetSchemaReturn: &ProviderSchema{
|
||||
Provider: &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"region": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
// For this test, we're returning an error for an optional argument. This
|
||||
// can happen for example if an argument is only conditionally required.
|
||||
provider.PrepareProviderConfigFn = func(req providers.PrepareProviderConfigRequest) (resp providers.PrepareProviderConfigResponse) {
|
||||
region := req.Config.GetAttr("region")
|
||||
if region.IsNull() {
|
||||
resp.Diagnostics = resp.Diagnostics.Append(fmt.Errorf("value is not found"))
|
||||
}
|
||||
return
|
||||
}
|
||||
ctx := &MockEvalContext{ProviderProvider: provider}
|
||||
ctx.installSimpleEval()
|
||||
|
||||
t.Run("valid", func(t *testing.T) {
|
||||
config := &configs.Provider{
|
||||
Name: "test",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"region": cty.StringVal("mars"),
|
||||
}),
|
||||
}
|
||||
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
Config: config,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ConfigureProvider(ctx, provider, false)
|
||||
if diags.HasErrors() {
|
||||
t.Errorf("unexpected error with valid config: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("missing required config (no config at all)", func(t *testing.T) {
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ConfigureProvider(ctx, provider, false)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("missing expected error with nil config")
|
||||
}
|
||||
if !strings.Contains(diags.Err().Error(), "requires explicit configuration") {
|
||||
t.Errorf("diagnostic is missing \"requires explicit configuration\" message: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("missing required config", func(t *testing.T) {
|
||||
config := &configs.Provider{
|
||||
Name: "test",
|
||||
Config: hcl.EmptyBody(),
|
||||
}
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
Config: config,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ConfigureProvider(ctx, provider, false)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("missing expected error with invalid config")
|
||||
}
|
||||
if diags.Err().Error() != "value is not found" {
|
||||
t.Errorf("wrong diagnostic: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
// This test is similar to TestNodeApplyableProvider_ConfigProvider, but tests responses from the providers.ConfigureRequest
|
||||
func TestNodeApplyableProvider_ConfigProvider_config_fn_err(t *testing.T) {
|
||||
provider := &MockProvider{
|
||||
GetSchemaReturn: &ProviderSchema{
|
||||
Provider: &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"region": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
ctx := &MockEvalContext{ProviderProvider: provider}
|
||||
ctx.installSimpleEval()
|
||||
// For this test, provider.PrepareConfigFn will succeed every time but the
|
||||
// ctx.ConfigureProviderFn will return an error if a value is not found.
|
||||
//
|
||||
// This is an unlikely but real situation that occurs:
|
||||
// https://github.com/hashicorp/terraform/issues/23087
|
||||
ctx.ConfigureProviderFn = func(addr addrs.AbsProviderConfig, cfg cty.Value) (diags tfdiags.Diagnostics) {
|
||||
if cfg.IsNull() {
|
||||
diags = diags.Append(fmt.Errorf("no config provided"))
|
||||
} else {
|
||||
region := cfg.GetAttr("region")
|
||||
if region.IsNull() {
|
||||
diags = diags.Append(fmt.Errorf("value is not found"))
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
t.Run("valid", func(t *testing.T) {
|
||||
config := &configs.Provider{
|
||||
Name: "test",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"region": cty.StringVal("mars"),
|
||||
}),
|
||||
}
|
||||
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
Config: config,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ConfigureProvider(ctx, provider, false)
|
||||
if diags.HasErrors() {
|
||||
t.Errorf("unexpected error with valid config: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("missing required config (no config at all)", func(t *testing.T) {
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ConfigureProvider(ctx, provider, false)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("missing expected error with nil config")
|
||||
}
|
||||
if !strings.Contains(diags.Err().Error(), "requires explicit configuration") {
|
||||
t.Errorf("diagnostic is missing \"requires explicit configuration\" message: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("missing required config", func(t *testing.T) {
|
||||
config := &configs.Provider{
|
||||
Name: "test",
|
||||
Config: hcl.EmptyBody(),
|
||||
}
|
||||
node := NodeApplyableProvider{
|
||||
NodeAbstractProvider: &NodeAbstractProvider{
|
||||
Addr: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
Config: config,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.ConfigureProvider(ctx, provider, false)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("missing expected error with invalid config")
|
||||
}
|
||||
if diags.Err().Error() != "value is not found" {
|
||||
t.Errorf("wrong diagnostic: %s", diags.Err())
|
||||
}
|
||||
})
|
||||
}
|
||||
|
|
|
@@ -345,20 +345,20 @@ func (n *NodeAbstractResource) writeResourceState(ctx EvalContext, addr addrs.Ab
	return diags
}

// ReadResourceInstanceState reads the current object for a specific instance in
// readResourceInstanceState reads the current object for a specific instance in
// the state.
func (n *NodeAbstractResource) ReadResourceInstanceState(ctx EvalContext, addr addrs.AbsResourceInstance) (*states.ResourceInstanceObject, error) {
	provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
func (n *NodeAbstractResource) readResourceInstanceState(ctx EvalContext, addr addrs.AbsResourceInstance) (*states.ResourceInstanceObject, error) {
	provider, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
	if err != nil {
		return nil, err
	}

	log.Printf("[TRACE] ReadResourceInstanceState: reading state for %s", addr)
	log.Printf("[TRACE] readResourceInstanceState: reading state for %s", addr)

	src := ctx.State().ResourceInstanceObject(addr, states.CurrentGen)
	if src == nil {
		// Presumably we only have deposed objects, then.
		log.Printf("[TRACE] ReadResourceInstanceState: no state present for %s", addr)
		log.Printf("[TRACE] readResourceInstanceState: no state present for %s", addr)
		return nil, nil
	}
@@ -368,7 +368,7 @@ func (n *NodeAbstractResource) ReadResourceInstanceState(ctx EvalContext, addr a
		return nil, fmt.Errorf("no schema available for %s while reading state; this is a bug in Terraform and should be reported", addr)
	}
	var diags tfdiags.Diagnostics
	src, diags = UpgradeResourceState(addr, provider, src, schema, currentVersion)
	src, diags = upgradeResourceState(addr, provider, src, schema, currentVersion)
	if diags.HasErrors() {
		// Note that we don't have any channel to return warnings here. We'll
		// accept that for now since warnings during a schema upgrade would
@@ -385,24 +385,24 @@ func (n *NodeAbstractResource) ReadResourceInstanceState(ctx EvalContext, addr a
	return obj, nil
}

// ReadResourceInstanceStateDeposed reads the deposed object for a specific
// readResourceInstanceStateDeposed reads the deposed object for a specific
// instance in the state.
func (n *NodeAbstractResource) ReadResourceInstanceStateDeposed(ctx EvalContext, addr addrs.AbsResourceInstance, key states.DeposedKey) (*states.ResourceInstanceObject, error) {
	provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
func (n *NodeAbstractResource) readResourceInstanceStateDeposed(ctx EvalContext, addr addrs.AbsResourceInstance, key states.DeposedKey) (*states.ResourceInstanceObject, error) {
	provider, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
	if err != nil {
		return nil, err
	}

	if key == states.NotDeposed {
		return nil, fmt.Errorf("EvalReadStateDeposed used with no instance key; this is a bug in Terraform and should be reported")
		return nil, fmt.Errorf("readResourceInstanceStateDeposed used with no instance key; this is a bug in Terraform and should be reported")
	}

	log.Printf("[TRACE] EvalReadStateDeposed: reading state for %s deposed object %s", addr, key)
	log.Printf("[TRACE] readResourceInstanceStateDeposed: reading state for %s deposed object %s", addr, key)

	src := ctx.State().ResourceInstanceObject(addr, key)
	if src == nil {
		// Presumably we only have deposed objects, then.
		log.Printf("[TRACE] EvalReadStateDeposed: no state present for %s deposed object %s", addr, key)
		log.Printf("[TRACE] readResourceInstanceStateDeposed: no state present for %s deposed object %s", addr, key)
		return nil, nil
	}
@@ -413,7 +413,7 @@ func (n *NodeAbstractResource) ReadResourceInstanceStateDeposed(ctx EvalContext,
	}

	src, diags := UpgradeResourceState(addr, provider, src, schema, currentVersion)
	src, diags := upgradeResourceState(addr, provider, src, schema, currentVersion)
	if diags.HasErrors() {
		// Note that we don't have any channel to return warnings here. We'll
		// accept that for now since warnings during a schema upgrade would
File diff suppressed because it is too large
@@ -160,7 +160,7 @@ func TestNodeAbstractResource_ReadResourceInstanceState(t *testing.T) {
		ctx.ProviderSchemaSchema = mockProvider.GetSchemaReturn
		ctx.ProviderProvider = providers.Interface(mockProvider)

		got, err := test.Node.ReadResourceInstanceState(ctx, test.Node.Addr.Resource.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance))
		got, err := test.Node.readResourceInstanceState(ctx, test.Node.Addr.Resource.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance))
		if err != nil {
			t.Fatalf("[%s] Got err: %#v", k, err.Error())
		}

@@ -223,7 +223,7 @@ func TestNodeAbstractResource_ReadResourceInstanceStateDeposed(t *testing.T) {

		key := states.DeposedKey("00000001") // shim from legacy state assigns 0th deposed index this key

		got, err := test.Node.ReadResourceInstanceStateDeposed(ctx, test.Node.Addr.Resource.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), key)
		got, err := test.Node.readResourceInstanceStateDeposed(ctx, test.Node.Addr.Resource.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance), key)
		if err != nil {
			t.Fatalf("[%s] Got err: %#v", k, err.Error())
		}
@ -7,6 +7,7 @@ import (
|
|||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
"github.com/hashicorp/terraform/plans/objchange"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
@ -134,9 +135,7 @@ func (n *NodeApplyableResourceInstance) Execute(ctx EvalContext, op walkOperatio
|
|||
}
|
||||
|
||||
func (n *NodeApplyableResourceInstance) dataResourceExecute(ctx EvalContext) (diags tfdiags.Diagnostics) {
|
||||
addr := n.ResourceInstanceAddr().Resource
|
||||
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -152,23 +151,11 @@ func (n *NodeApplyableResourceInstance) dataResourceExecute(ctx EvalContext) (di
|
|||
return diags
|
||||
}
|
||||
|
||||
// In this particular call to EvalReadData we include our planned
|
||||
// In this particular call to applyDataSource we include our planned
|
||||
// change, which signals that we expect this read to complete fully
|
||||
// with no unknown values; it'll produce an error if not.
|
||||
var state *states.ResourceInstanceObject
|
||||
readDataApply := &evalReadDataApply{
|
||||
evalReadData{
|
||||
Addr: addr,
|
||||
Config: n.Config,
|
||||
Planned: &change,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &state,
|
||||
},
|
||||
}
|
||||
diags = diags.Append(readDataApply.Eval(ctx))
|
||||
state, applyDiags := n.applyDataSource(ctx, change)
|
||||
diags = diags.Append(applyDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -179,17 +166,9 @@ func (n *NodeApplyableResourceInstance) dataResourceExecute(ctx EvalContext) (di
|
|||
return diags
|
||||
}
|
||||
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: nil,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
diags = diags.Append(n.writeChange(ctx, nil, ""))
|
||||
|
||||
diags = diags.Append(UpdateStateHook(ctx))
|
||||
diags = diags.Append(updateStateHook(ctx))
|
||||
return diags
|
||||
}
|
||||
|
||||
|
@ -197,12 +176,11 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
// Declare a bunch of variables that are used for state during
|
||||
// evaluation. Most of these are written to by address below.
|
||||
var state *states.ResourceInstanceObject
|
||||
var createNew bool
|
||||
var createBeforeDestroyEnabled bool
|
||||
var deposedKey states.DeposedKey
|
||||
|
||||
addr := n.ResourceInstanceAddr().Resource
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -240,7 +218,7 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
log.Printf("[TRACE] managedResourceExecute: prior object for %s now deposed with key %s", n.Addr, deposedKey)
|
||||
}
|
||||
|
||||
state, err = n.ReadResourceInstanceState(ctx, n.ResourceInstanceAddr())
|
||||
state, err = n.readResourceInstanceState(ctx, n.ResourceInstanceAddr())
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -255,136 +233,63 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
|
||||
// Make a new diff, in case we've learned new values in the state
|
||||
// during apply which we can now incorporate.
|
||||
evalDiff := &EvalDiff{
|
||||
Addr: addr,
|
||||
Config: n.Config,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &state,
|
||||
PreviousDiff: &diff,
|
||||
OutputChange: &diffApply,
|
||||
OutputState: &state,
|
||||
}
|
||||
diags = diags.Append(evalDiff.Eval(ctx))
|
||||
diffApply, state, planDiags := n.plan(ctx, diff, state, false)
|
||||
diags = diags.Append(planDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Compare the diffs
|
||||
checkPlannedChange := &EvalCheckPlannedChange{
|
||||
Addr: addr,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderSchema: &providerSchema,
|
||||
Planned: &diff,
|
||||
Actual: &diffApply,
|
||||
}
|
||||
diags = diags.Append(checkPlannedChange.Eval(ctx))
|
||||
diags = diags.Append(n.checkPlannedChange(ctx, diff, diffApply, providerSchema))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
state, err = n.ReadResourceInstanceState(ctx, n.ResourceInstanceAddr())
|
||||
state, err = n.readResourceInstanceState(ctx, n.ResourceInstanceAddr())
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
reduceDiff := &EvalReduceDiff{
|
||||
Addr: addr,
|
||||
InChange: &diffApply,
|
||||
Destroy: false,
|
||||
OutChange: &diffApply,
|
||||
}
|
||||
diags = diags.Append(reduceDiff.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalReduceDiff may have simplified our planned change
|
||||
diffApply = reducePlan(addr, diffApply, false)
|
||||
// reducePlan may have simplified our planned change
|
||||
// into a NoOp if it only requires destroying, since destroying
|
||||
// is handled by NodeDestroyResourceInstance.
|
||||
if diffApply == nil || diffApply.Action == plans.NoOp {
|
||||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(n.PreApplyHook(ctx, diffApply))
|
||||
diags = diags.Append(n.preApplyHook(ctx, diffApply))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
var applyError error
|
||||
evalApply := &EvalApply{
|
||||
Addr: addr,
|
||||
Config: n.Config,
|
||||
State: &state,
|
||||
Change: &diffApply,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
Output: &state,
|
||||
Error: &applyError,
|
||||
CreateNew: &createNew,
|
||||
CreateBeforeDestroy: n.CreateBeforeDestroy(),
|
||||
}
|
||||
diags = diags.Append(evalApply.Eval(ctx))
|
||||
state, applyError, applyDiags := n.apply(ctx, state, diffApply, n.Config, n.CreateBeforeDestroy(), applyError)
|
||||
diags = diags.Append(applyDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// We clear the change out here so that future nodes don't see a change
|
||||
// that is already complete.
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: nil,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
diags = diags.Append(n.writeChange(ctx, nil, ""))
|
||||
|
||||
evalMaybeTainted := &EvalMaybeTainted{
|
||||
Addr: addr,
|
||||
State: &state,
|
||||
Change: &diffApply,
|
||||
Error: &applyError,
|
||||
}
|
||||
diags = diags.Append(evalMaybeTainted.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
state = maybeTainted(addr.Absolute(ctx.Path()), state, diffApply, applyError)
|
||||
|
||||
diags = diags.Append(n.writeResourceInstanceState(ctx, state, n.Dependencies, workingState))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
applyProvisioners := &EvalApplyProvisioners{
|
||||
Addr: addr,
|
||||
State: &state, // EvalApplyProvisioners will skip if already tainted
|
||||
ResourceConfig: n.Config,
|
||||
CreateNew: &createNew,
|
||||
Error: &applyError,
|
||||
When: configs.ProvisionerWhenCreate,
|
||||
}
|
||||
diags = diags.Append(applyProvisioners.Eval(ctx))
|
||||
createNew := (diffApply.Action == plans.Create || diffApply.Action.IsReplace())
|
||||
applyError, applyProvisionersDiags := n.evalApplyProvisioners(ctx, state, createNew, configs.ProvisionerWhenCreate, applyError)
|
||||
diags = diags.Append(applyProvisionersDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
evalMaybeTainted = &EvalMaybeTainted{
|
||||
Addr: addr,
|
||||
State: &state,
|
||||
Change: &diffApply,
|
||||
Error: &applyError,
|
||||
}
|
||||
diags = diags.Append(evalMaybeTainted.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
state = maybeTainted(addr.Absolute(ctx.Path()), state, diffApply, applyError)
|
||||
|
||||
diags = diags.Append(n.writeResourceInstanceState(ctx, state, n.Dependencies, workingState))
|
||||
if diags.HasErrors() {
|
||||
|
@ -418,9 +323,9 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
} else {
|
||||
restored := ctx.State().MaybeRestoreResourceInstanceDeposed(addr.Absolute(ctx.Path()), deposedKey)
|
||||
if restored {
|
||||
log.Printf("[TRACE] EvalMaybeRestoreDeposedObject: %s deposed object %s was restored as the current object", addr, deposedKey)
|
||||
log.Printf("[TRACE] managedResourceExecute: %s deposed object %s was restored as the current object", addr, deposedKey)
|
||||
} else {
|
||||
log.Printf("[TRACE] EvalMaybeRestoreDeposedObject: %s deposed object %s remains deposed", addr, deposedKey)
|
||||
log.Printf("[TRACE] managedResourceExecute: %s deposed object %s remains deposed", addr, deposedKey)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -428,11 +333,105 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(n.PostApplyHook(ctx, state, &applyError))
|
||||
diags = diags.Append(n.postApplyHook(ctx, state, &applyError))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(UpdateStateHook(ctx))
|
||||
diags = diags.Append(updateStateHook(ctx))
|
||||
return diags
|
||||
}
|
||||
|
||||
// checkPlannedChange produces errors if the _actual_ expected value is not
|
||||
// compatible with what was recorded in the plan.
|
||||
//
|
||||
// Errors here are most often indicative of a bug in the provider, so our error
|
||||
// messages will report with that in mind. It's also possible that there's a bug
|
||||
// in Terraform Core's own "proposed new value" code in EvalDiff.
|
||||
func (n *NodeApplyableResourceInstance) checkPlannedChange(ctx EvalContext, plannedChange, actualChange *plans.ResourceInstanceChange, providerSchema *ProviderSchema) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
addr := n.ResourceInstanceAddr().Resource
|
||||
|
||||
schema, _ := providerSchema.SchemaForResourceAddr(addr.ContainingResource())
|
||||
if schema == nil {
|
||||
// Should be caught during validation, so we don't bother with a pretty error here
|
||||
diags = diags.Append(fmt.Errorf("provider does not support %q", addr.Resource.Type))
|
||||
return diags
|
||||
}
|
||||
|
||||
absAddr := addr.Absolute(ctx.Path())
|
||||
|
||||
log.Printf("[TRACE] checkPlannedChange: Verifying that actual change (action %s) matches planned change (action %s)", actualChange.Action, plannedChange.Action)
|
||||
|
||||
if plannedChange.Action != actualChange.Action {
|
||||
switch {
|
||||
case plannedChange.Action == plans.Update && actualChange.Action == plans.NoOp:
|
||||
// It's okay for an update to become a NoOp once we've filled in
|
||||
// all of the unknown values, since the final values might actually
|
||||
// match what was there before after all.
|
||||
log.Printf("[DEBUG] After incorporating new values learned so far during apply, %s change has become NoOp", absAddr)
|
||||
|
||||
case (plannedChange.Action == plans.CreateThenDelete && actualChange.Action == plans.DeleteThenCreate) ||
|
||||
(plannedChange.Action == plans.DeleteThenCreate && actualChange.Action == plans.CreateThenDelete):
|
||||
// If the order of replacement changed, then that is a bug in terraform
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Terraform produced inconsistent final plan",
|
||||
fmt.Sprintf(
|
||||
"When expanding the plan for %s to include new values learned so far during apply, the planned action changed from %s to %s.\n\nThis is a bug in Terraform and should be reported.",
|
||||
absAddr, plannedChange.Action, actualChange.Action,
|
||||
),
|
||||
))
|
||||
default:
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced inconsistent final plan",
|
||||
fmt.Sprintf(
|
||||
"When expanding the plan for %s to include new values learned so far during apply, provider %q changed the planned action from %s to %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
absAddr, n.ResolvedProvider.Provider.String(),
|
||||
plannedChange.Action, actualChange.Action,
|
||||
),
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
errs := objchange.AssertObjectCompatible(schema, plannedChange.After, actualChange.After)
|
||||
for _, err := range errs {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Provider produced inconsistent final plan",
|
||||
fmt.Sprintf(
|
||||
"When expanding the plan for %s to include new values learned so far during apply, provider %q produced an invalid new value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
|
||||
absAddr, n.ResolvedProvider.Provider.String(), tfdiags.FormatError(err),
|
||||
),
|
||||
))
|
||||
}
|
||||
return diags
|
||||
}
|
||||
|
||||
// maybeTainted takes the resource addr, new value, planned change, and possible
|
||||
// error from an apply operation and returns a new instance object marked as
|
||||
// tainted if it appears that a create operation has failed.
|
||||
func maybeTainted(addr addrs.AbsResourceInstance, state *states.ResourceInstanceObject, change *plans.ResourceInstanceChange, err error) *states.ResourceInstanceObject {
|
||||
if state == nil || change == nil || err == nil {
|
||||
return state
|
||||
}
|
||||
if state.Status == states.ObjectTainted {
|
||||
log.Printf("[TRACE] maybeTainted: %s was already tainted, so nothing to do", addr)
|
||||
return state
|
||||
}
|
||||
if change.Action == plans.Create {
|
||||
// If there are errors during a _create_ then the object is
|
||||
// in an undefined state, and so we'll mark it as tainted so
|
||||
// we can try again on the next run.
|
||||
//
|
||||
// We don't do this for other change actions because errors
|
||||
// during updates will often not change the remote object at all.
|
||||
// If there _were_ changes prior to the error, it's the provider's
|
||||
// responsibility to record the effect of those changes in the
|
||||
// object value it returned.
|
||||
log.Printf("[TRACE] maybeTainted: %s encountered an error during creation, so it is now marked as tainted", addr)
|
||||
return state.AsTainted()
|
||||
}
|
||||
return state
|
||||
}
|
||||
|
|
|
@ -137,7 +137,7 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
|
|||
var state *states.ResourceInstanceObject
|
||||
var provisionerErr error
|
||||
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -149,24 +149,14 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
|
|||
return diags
|
||||
}
|
||||
|
||||
evalReduceDiff := &EvalReduceDiff{
|
||||
Addr: addr.Resource,
|
||||
InChange: &changeApply,
|
||||
Destroy: true,
|
||||
OutChange: &changeApply,
|
||||
}
|
||||
diags = diags.Append(evalReduceDiff.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalReduceDiff may have simplified our planned change
|
||||
changeApply = reducePlan(addr.Resource, changeApply, true)
|
||||
// reducePlan may have simplified our planned change
|
||||
// into a NoOp if it does not require destroying.
|
||||
if changeApply == nil || changeApply.Action == plans.NoOp {
|
||||
return diags
|
||||
}
|
||||
|
||||
state, err = n.ReadResourceInstanceState(ctx, addr)
|
||||
state, err = n.readResourceInstanceState(ctx, addr)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -177,28 +167,23 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
|
|||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(n.PreApplyHook(ctx, changeApply))
|
||||
diags = diags.Append(n.preApplyHook(ctx, changeApply))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Run destroy provisioners if not tainted
|
||||
if state != nil && state.Status != states.ObjectTainted {
|
||||
evalApplyProvisioners := &EvalApplyProvisioners{
|
||||
Addr: addr.Resource,
|
||||
State: &state,
|
||||
ResourceConfig: n.Config,
|
||||
Error: &provisionerErr,
|
||||
When: configs.ProvisionerWhenDestroy,
|
||||
}
|
||||
diags = diags.Append(evalApplyProvisioners.Eval(ctx))
|
||||
var applyProvisionersDiags tfdiags.Diagnostics
|
||||
provisionerErr, applyProvisionersDiags = n.evalApplyProvisioners(ctx, state, false, configs.ProvisionerWhenDestroy, provisionerErr)
|
||||
diags = diags.Append(applyProvisionersDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
if provisionerErr != nil {
|
||||
// If we have a provisioning error, then we just call
|
||||
// the post-apply hook now.
|
||||
diags = diags.Append(n.PostApplyHook(ctx, state, &provisionerErr))
|
||||
diags = diags.Append(n.postApplyHook(ctx, state, &provisionerErr))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -208,19 +193,10 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
|
|||
// Managed resources need to be destroyed, while data sources
|
||||
// are only removed from state.
|
||||
if addr.Resource.Resource.Mode == addrs.ManagedResourceMode {
|
||||
evalApply := &EvalApply{
|
||||
Addr: addr.Resource,
|
||||
Config: nil, // No configuration because we are destroying
|
||||
State: &state,
|
||||
Change: &changeApply,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
Output: &state,
|
||||
Error: &provisionerErr,
|
||||
}
|
||||
diags = diags.Append(evalApply.Eval(ctx))
|
||||
var applyDiags tfdiags.Diagnostics
|
||||
// we pass a nil configuration to apply because we are destroying
|
||||
state, provisionerErr, applyDiags = n.apply(ctx, state, changeApply, nil, false, provisionerErr)
|
||||
diags.Append(applyDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -234,11 +210,11 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
|
|||
state.SetResourceInstanceCurrent(n.Addr, nil, n.ResolvedProvider)
|
||||
}
|
||||
|
||||
diags = diags.Append(n.PostApplyHook(ctx, state, &provisionerErr))
|
||||
diags = diags.Append(n.postApplyHook(ctx, state, &provisionerErr))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(UpdateStateHook(ctx))
|
||||
diags = diags.Append(updateStateHook(ctx))
|
||||
return diags
|
||||
}
|
||||
|
|
|
@ -2,6 +2,7 @@ package terraform
|
|||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/dag"
|
||||
|
@ -65,44 +66,20 @@ func (n *NodePlanDeposedResourceInstanceObject) References() []*addrs.Reference
|
|||
|
||||
// GraphNodeEvalable impl.
|
||||
func (n *NodePlanDeposedResourceInstanceObject) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
|
||||
addr := n.ResourceInstanceAddr()
|
||||
|
||||
_, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// During the plan walk we always produce a planned destroy change, because
|
||||
// destroying is the only supported action for deposed objects.
|
||||
var change *plans.ResourceInstanceChange
|
||||
|
||||
// Read the state for the deposed resource instance
|
||||
state, err := n.ReadResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
|
||||
state, err := n.readResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diffDestroy := &EvalDiffDestroy{
|
||||
Addr: addr.Resource,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
DeposedKey: n.DeposedKey,
|
||||
State: &state,
|
||||
Output: &change,
|
||||
}
|
||||
diags = diags.Append(diffDestroy.Eval(ctx))
|
||||
change, destroyPlanDiags := n.planDestroy(ctx, state, n.DeposedKey)
|
||||
diags = diags.Append(destroyPlanDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr.Resource,
|
||||
DeposedKey: n.DeposedKey,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: &change,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
diags = diags.Append(n.writeChange(ctx, change, n.DeposedKey))
|
||||
return diags
|
||||
}
|
||||
|
||||
|
@ -174,52 +151,31 @@ func (n *NodeDestroyDeposedResourceInstanceObject) ModifyCreateBeforeDestroy(v b
|
|||
|
||||
// GraphNodeExecutable impl.
|
||||
func (n *NodeDestroyDeposedResourceInstanceObject) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
|
||||
addr := n.ResourceInstanceAddr().Resource
|
||||
var change *plans.ResourceInstanceChange
|
||||
var applyError error
|
||||
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Read the state for the deposed resource instance
|
||||
state, err := n.ReadResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
|
||||
state, err := n.readResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diffDestroy := &EvalDiffDestroy{
|
||||
Addr: addr,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
State: &state,
|
||||
Output: &change,
|
||||
}
|
||||
diags = diags.Append(diffDestroy.Eval(ctx))
|
||||
change, destroyPlanDiags := n.planDestroy(ctx, state, n.DeposedKey)
|
||||
diags = diags.Append(destroyPlanDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Call pre-apply hook
|
||||
diags = diags.Append(n.PreApplyHook(ctx, change))
|
||||
diags = diags.Append(n.preApplyHook(ctx, change))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
apply := &EvalApply{
|
||||
Addr: addr,
|
||||
Config: nil, // No configuration because we are destroying
|
||||
State: &state,
|
||||
Change: &change,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderSchema: &providerSchema,
|
||||
Output: &state,
|
||||
Error: &applyError,
|
||||
}
|
||||
diags = diags.Append(apply.Eval(ctx))
|
||||
// we pass a nil configuration to apply because we are destroying
|
||||
state, applyError, applyDiags := n.apply(ctx, state, change, nil, false, applyError)
|
||||
diags = diags.Append(applyDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -227,29 +183,21 @@ func (n *NodeDestroyDeposedResourceInstanceObject) Execute(ctx EvalContext, op w
|
|||
// Always write the resource back to the state deposed. If it
|
||||
// was successfully destroyed it will be pruned. If it was not, it will
|
||||
// be caught on the next run.
|
||||
writeStateDeposed := &EvalWriteStateDeposed{
|
||||
Addr: addr,
|
||||
Key: n.DeposedKey,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &state,
|
||||
}
|
||||
diags = diags.Append(writeStateDeposed.Eval(ctx))
|
||||
diags = diags.Append(n.writeResourceInstanceState(ctx, state))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(n.PostApplyHook(ctx, state, &applyError))
|
||||
diags = diags.Append(n.postApplyHook(ctx, state, &applyError))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if applyError != nil {
|
||||
diags = diags.Append(applyError)
|
||||
return diags
|
||||
return diags.Append(applyError)
|
||||
}
|
||||
diags = diags.Append(UpdateStateHook(ctx))
|
||||
return diags
|
||||
|
||||
return diags.Append(updateStateHook(ctx))
|
||||
}
|
||||
|
||||
// GraphNodeDeposer is an optional interface implemented by graph nodes that
|
||||
|
@ -273,3 +221,46 @@ type graphNodeDeposer struct {
|
|||
func (n *graphNodeDeposer) SetPreallocatedDeposedKey(key states.DeposedKey) {
|
||||
n.PreallocatedDeposedKey = key
|
||||
}
|
||||
|
||||
func (n *NodeDestroyDeposedResourceInstanceObject) writeResourceInstanceState(ctx EvalContext, obj *states.ResourceInstanceObject) error {
|
||||
absAddr := n.Addr
|
||||
key := n.DeposedKey
|
||||
state := ctx.State()
|
||||
|
||||
if key == states.NotDeposed {
|
||||
// should never happen
|
||||
return fmt.Errorf("can't save deposed object for %s without a deposed key; this is a bug in Terraform that should be reported", absAddr)
|
||||
}
|
||||
|
||||
if obj == nil {
|
||||
// No need to encode anything: we'll just write it directly.
|
||||
state.SetResourceInstanceDeposed(absAddr, key, nil, n.ResolvedProvider)
|
||||
log.Printf("[TRACE] writeResourceInstanceStateDeposed: removing state object for %s deposed %s", absAddr, key)
|
||||
return nil
|
||||
}
|
||||
|
||||
_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if providerSchema == nil {
|
||||
// Should never happen, unless our state object is nil
|
||||
panic("writeResourceInstanceStateDeposed used with no ProviderSchema object")
|
||||
}
|
||||
|
||||
schema, currentVersion := providerSchema.SchemaForResourceAddr(absAddr.ContainingResource().Resource)
|
||||
if schema == nil {
|
||||
// It shouldn't be possible to get this far in any real scenario
|
||||
// without a schema, but we might end up here in contrived tests that
|
||||
// fail to set up their world properly.
|
||||
return fmt.Errorf("failed to encode %s in state: no resource type schema available", absAddr)
|
||||
}
|
||||
src, err := obj.Encode(schema.ImpliedType(), currentVersion)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to encode %s in state: %s", absAddr, err)
|
||||
}
|
||||
|
||||
log.Printf("[TRACE] writeResourceInstanceStateDeposed: writing state object for %s deposed %s", absAddr, key)
|
||||
state.SetResourceInstanceDeposed(absAddr, key, src, n.ResolvedProvider)
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -128,3 +128,47 @@ func TestNodeDestroyDeposedResourceInstanceObject_Execute(t *testing.T) {
|
|||
t.Fatalf("resources left in state after destroy")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeDestroyDeposedResourceInstanceObject_WriteResourceInstanceState(t *testing.T) {
|
||||
state := states.NewState()
|
||||
ctx := new(MockEvalContext)
|
||||
ctx.StateState = state.SyncWrapper()
|
||||
ctx.PathPath = addrs.RootModuleInstance
|
||||
mockProvider := mockProviderWithResourceTypeSchema("aws_instance", &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"id": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
})
|
||||
ctx.ProviderProvider = mockProvider
|
||||
ctx.ProviderSchemaSchema = mockProvider.GetSchemaReturn
|
||||
|
||||
obj := &states.ResourceInstanceObject{
|
||||
Value: cty.ObjectVal(map[string]cty.Value{
|
||||
"id": cty.StringVal("i-abc123"),
|
||||
}),
|
||||
Status: states.ObjectReady,
|
||||
}
|
||||
node := &NodeDestroyDeposedResourceInstanceObject{
|
||||
NodeAbstractResourceInstance: &NodeAbstractResourceInstance{
|
||||
NodeAbstractResource: NodeAbstractResource{
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Addr: mustResourceInstanceAddr("aws_instance.foo"),
|
||||
},
|
||||
DeposedKey: states.NewDeposedKey(),
|
||||
}
|
||||
err := node.writeResourceInstanceState(ctx, obj)
|
||||
if err != nil {
|
||||
t.Fatalf("unexpected error: %s", err.Error())
|
||||
}
|
||||
|
||||
checkStateString(t, state, `
|
||||
aws_instance.foo: (1 deposed)
|
||||
ID = <not created>
|
||||
provider = provider["registry.terraform.io/hashicorp/aws"]
|
||||
Deposed ID 1 = i-abc123
|
||||
`)
|
||||
}
|
||||
|
|
|
@ -42,25 +42,14 @@ func (n *NodePlanDestroyableResourceInstance) Execute(ctx EvalContext, op walkOp
|
|||
var change *plans.ResourceInstanceChange
|
||||
var state *states.ResourceInstanceObject
|
||||
|
||||
_, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
state, err := n.readResourceInstanceState(ctx, addr)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
state, err = n.ReadResourceInstanceState(ctx, addr)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diffDestroy := &EvalDiffDestroy{
|
||||
Addr: addr.Resource,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
State: &state,
|
||||
Output: &change,
|
||||
}
|
||||
diags = diags.Append(diffDestroy.Eval(ctx))
|
||||
change, destroyPlanDiags := n.planDestroy(ctx, state, "")
|
||||
diags = diags.Append(destroyPlanDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -70,11 +59,6 @@ func (n *NodePlanDestroyableResourceInstance) Execute(ctx EvalContext, op walkOp
|
|||
return diags
|
||||
}
|
||||
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr.Resource,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: &change,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
diags = diags.Append(n.writeChange(ctx, change, ""))
|
||||
return diags
|
||||
}
|
||||
|
|
|
@ -53,42 +53,25 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
|
|||
var change *plans.ResourceInstanceChange
|
||||
var state *states.ResourceInstanceObject
|
||||
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
state, err = n.ReadResourceInstanceState(ctx, addr)
|
||||
state, err = n.readResourceInstanceState(ctx, addr)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
validateSelfRef := &EvalValidateSelfRef{
|
||||
Addr: addr.Resource,
|
||||
Config: config.Config,
|
||||
ProviderSchema: &providerSchema,
|
||||
}
|
||||
diags = diags.Append(validateSelfRef.Eval(ctx))
|
||||
diags = diags.Append(validateSelfRef(addr.Resource, config.Config, providerSchema))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
readDataPlan := &evalReadDataPlan{
|
||||
evalReadData: evalReadData{
|
||||
Addr: addr.Resource,
|
||||
Config: n.Config,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
OutputChange: &change,
|
||||
State: &state,
|
||||
dependsOn: n.dependsOn,
|
||||
},
|
||||
}
|
||||
diags = diags.Append(readDataPlan.Eval(ctx))
|
||||
change, state, planDiags := n.planDataSource(ctx, state)
|
||||
diags = diags.Append(planDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -104,12 +87,7 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
|
|||
return diags
|
||||
}
|
||||
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr.Resource,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: &change,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
diags = diags.Append(n.writeChange(ctx, change, ""))
|
||||
return diags
|
||||
}
|
||||
|
||||
|
@ -121,23 +99,18 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
var instanceRefreshState *states.ResourceInstanceObject
|
||||
var instancePlanState *states.ResourceInstanceObject
|
||||
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
validateSelfRef := &EvalValidateSelfRef{
|
||||
Addr: addr.Resource,
|
||||
Config: config.Config,
|
||||
ProviderSchema: &providerSchema,
|
||||
}
|
||||
diags = diags.Append(validateSelfRef.Eval(ctx))
|
||||
diags = diags.Append(validateSelfRef(addr.Resource, config.Config, providerSchema))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
instanceRefreshState, err = n.ReadResourceInstanceState(ctx, addr)
|
||||
instanceRefreshState, err = n.readResourceInstanceState(ctx, addr)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -155,16 +128,8 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
|
||||
// Refresh, maybe
|
||||
if !n.skipRefresh {
|
||||
refresh := &EvalRefresh{
|
||||
Addr: addr.Resource,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
Provider: &provider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &instanceRefreshState,
|
||||
Output: &instanceRefreshState,
|
||||
}
|
||||
diags := diags.Append(refresh.Eval(ctx))
|
||||
instanceRefreshState, refreshDiags := n.refresh(ctx, instanceRefreshState)
|
||||
diags = diags.Append(refreshDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -176,19 +141,8 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
}
|
||||
|
||||
// Plan the instance
|
||||
diff := &EvalDiff{
|
||||
Addr: addr.Resource,
|
||||
Config: n.Config,
|
||||
CreateBeforeDestroy: n.ForceCreateBeforeDestroy,
|
||||
Provider: &provider,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &instanceRefreshState,
|
||||
OutputChange: &change,
|
||||
OutputState: &instancePlanState,
|
||||
}
|
||||
diags = diags.Append(diff.Eval(ctx))
|
||||
change, instancePlanState, planDiags := n.plan(ctx, change, instanceRefreshState, n.ForceCreateBeforeDestroy)
|
||||
diags = diags.Append(planDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -203,11 +157,6 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
|
|||
return diags
|
||||
}
|
||||
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr.Resource,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: &change,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
diags = diags.Append(n.writeChange(ctx, change, ""))
|
||||
return diags
|
||||
}
|
||||
|
|
|
@ -63,14 +63,9 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
|
|||
// evaluation. These are written to by-address below.
|
||||
var change *plans.ResourceInstanceChange
|
||||
var state *states.ResourceInstanceObject
|
||||
var err error
|
||||
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
state, err = n.ReadResourceInstanceState(ctx, addr)
|
||||
state, err = n.readResourceInstanceState(ctx, addr)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -83,16 +78,8 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
|
|||
// plan before apply, and may not handle a missing resource during
|
||||
// Delete correctly. If this is a simple refresh, Terraform is
|
||||
// expected to remove the missing resource from the state entirely
|
||||
refresh := &EvalRefresh{
|
||||
Addr: addr.Resource,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
Provider: &provider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &state,
|
||||
Output: &state,
|
||||
}
|
||||
diags = diags.Append(refresh.Eval(ctx))
|
||||
state, refreshDiags := n.refresh(ctx, state)
|
||||
diags = diags.Append(refreshDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -103,14 +90,8 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
|
|||
}
|
||||
}
|
||||
|
||||
diffDestroy := &EvalDiffDestroy{
|
||||
Addr: addr.Resource,
|
||||
State: &state,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
Output: &change,
|
||||
OutputState: &state, // Will point to a nil state after this complete, signalling destroyed
|
||||
}
|
||||
diags = diags.Append(diffDestroy.Eval(ctx))
|
||||
change, destroyPlanDiags := n.planDestroy(ctx, state, "")
|
||||
diags = diags.Append(destroyPlanDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -120,16 +101,11 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
|
|||
return diags
|
||||
}
|
||||
|
||||
writeDiff := &EvalWriteDiff{
|
||||
Addr: addr.Resource,
|
||||
ProviderSchema: &providerSchema,
|
||||
Change: &change,
|
||||
}
|
||||
diags = diags.Append(writeDiff.Eval(ctx))
|
||||
diags = diags.Append(n.writeChange(ctx, change, ""))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
diags = diags.Append(n.writeResourceInstanceState(ctx, state, n.Dependencies, workingState))
|
||||
diags = diags.Append(n.writeResourceInstanceState(ctx, nil, n.Dependencies, workingState))
|
||||
return diags
|
||||
}
|
||||
|
|
|
@ -3,10 +3,16 @@ package terraform
|
|||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/configs/configschema"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/provisioners"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
"github.com/zclconf/go-cty/cty/convert"
|
||||
"github.com/zclconf/go-cty/cty/gocty"
|
||||
)
|
||||
|
||||
// NodeValidatableResource represents a resource that is used for validation
|
||||
|
@ -33,31 +39,7 @@ func (n *NodeValidatableResource) Path() addrs.ModuleInstance {
|
|||
|
||||
// GraphNodeEvalable
|
||||
func (n *NodeValidatableResource) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
|
||||
addr := n.ResourceAddr()
|
||||
config := n.Config
|
||||
|
||||
// Declare the variables will be used are used to pass values along
|
||||
// the evaluation sequence below. These are written to via pointers
|
||||
// passed to the EvalNodes.
|
||||
var configVal cty.Value
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
evalValidateResource := &EvalValidateResource{
|
||||
Addr: addr.Resource,
|
||||
Provider: &provider,
|
||||
ProviderMetas: n.ProviderMetas,
|
||||
ProviderSchema: &providerSchema,
|
||||
Config: config,
|
||||
ConfigVal: &configVal,
|
||||
}
|
||||
diags = diags.Append(evalValidateResource.Validate(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
diags = diags.Append(n.validateResource(ctx))
|
||||
|
||||
if managed := n.Config.Managed; managed != nil {
|
||||
hasCount := n.Config.Count != nil
|
||||
|
@ -66,32 +48,13 @@ func (n *NodeValidatableResource) Execute(ctx EvalContext, op walkOperation) (di
|
|||
// Validate all the provisioners
|
||||
for _, p := range managed.Provisioners {
|
||||
if p.Connection == nil {
|
||||
p.Connection = config.Managed.Connection
|
||||
} else if config.Managed.Connection != nil {
|
||||
p.Connection.Config = configs.MergeBodies(config.Managed.Connection.Config, p.Connection.Config)
|
||||
}
|
||||
|
||||
provisioner := ctx.Provisioner(p.Type)
|
||||
if provisioner == nil {
|
||||
diags = diags.Append(fmt.Errorf("provisioner %s not initialized", p.Type))
|
||||
return diags
|
||||
}
|
||||
provisionerSchema := ctx.ProvisionerSchema(p.Type)
|
||||
if provisionerSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("provisioner %s not initialized", p.Type))
|
||||
return diags
|
||||
p.Connection = n.Config.Managed.Connection
|
||||
} else if n.Config.Managed.Connection != nil {
|
||||
p.Connection.Config = configs.MergeBodies(n.Config.Managed.Connection.Config, p.Connection.Config)
|
||||
}
|
||||
|
||||
// Validate Provisioner Config
|
||||
validateProvisioner := &EvalValidateProvisioner{
|
||||
ResourceAddr: addr.Resource,
|
||||
Provisioner: &provisioner,
|
||||
Schema: &provisionerSchema,
|
||||
Config: p,
|
||||
ResourceHasCount: hasCount,
|
||||
ResourceHasForEach: hasForEach,
|
||||
}
|
||||
diags = diags.Append(validateProvisioner.Validate(ctx))
|
||||
diags = diags.Append(n.validateProvisioner(ctx, p, hasCount, hasForEach))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
@ -99,3 +62,460 @@ func (n *NodeValidatableResource) Execute(ctx EvalContext, op walkOperation) (di
|
|||
}
|
||||
return diags
|
||||
}
|
||||
|
||||
// validateProvisioner validates the configuration of a provisioner belonging to
|
||||
// a resource. The provisioner config is expected to contain the merged
|
||||
// connection configurations.
|
||||
func (n *NodeValidatableResource) validateProvisioner(ctx EvalContext, p *configs.Provisioner, hasCount, hasForEach bool) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
provisioner := ctx.Provisioner(p.Type)
|
||||
if provisioner == nil {
|
||||
return diags.Append(fmt.Errorf("provisioner %s not initialized", p.Type))
|
||||
}
|
||||
provisionerSchema := ctx.ProvisionerSchema(p.Type)
|
||||
if provisionerSchema == nil {
|
||||
return diags.Append(fmt.Errorf("provisioner %s not initialized", p.Type))
|
||||
}
|
||||
|
||||
// Validate the provisioner's own config first
|
||||
configVal, _, configDiags := n.evaluateBlock(ctx, p.Config, provisionerSchema, hasCount, hasForEach)
|
||||
diags = diags.Append(configDiags)
|
||||
|
||||
if configVal == cty.NilVal {
|
||||
// Should never happen for a well-behaved EvaluateBlock implementation
|
||||
return diags.Append(fmt.Errorf("EvaluateBlock returned nil value"))
|
||||
}
|
||||
|
||||
req := provisioners.ValidateProvisionerConfigRequest{
|
||||
Config: configVal,
|
||||
}
|
||||
|
||||
resp := provisioner.ValidateProvisionerConfig(req)
|
||||
diags = diags.Append(resp.Diagnostics)
|
||||
|
||||
if p.Connection != nil {
|
||||
// We can't comprehensively validate the connection config since its
|
||||
// final structure is decided by the communicator and we can't instantiate
|
||||
// that until we have a complete instance state. However, we *can* catch
|
||||
// configuration keys that are not valid for *any* communicator, catching
|
||||
// typos early rather than waiting until we actually try to run one of
|
||||
// the resource's provisioners.
|
||||
_, _, connDiags := n.evaluateBlock(ctx, p.Connection.Config, connectionBlockSupersetSchema, hasCount, hasForEach)
|
||||
diags = diags.Append(connDiags)
|
||||
}
|
||||
return diags
|
||||
}
|
||||
|
||||
func (n *NodeValidatableResource) evaluateBlock(ctx EvalContext, body hcl.Body, schema *configschema.Block, hasCount, hasForEach bool) (cty.Value, hcl.Body, tfdiags.Diagnostics) {
|
||||
keyData := EvalDataForNoInstanceKey
|
||||
selfAddr := n.ResourceAddr().Resource.Instance(addrs.NoKey)
|
||||
|
||||
if hasCount {
|
||||
// For a resource that has count, we allow count.index but don't
|
||||
// know at this stage what it will return.
|
||||
keyData = InstanceKeyEvalData{
|
||||
CountIndex: cty.UnknownVal(cty.Number),
|
||||
}
|
||||
|
||||
// "self" can't point to an unknown key, but we'll force it to be
|
||||
// key 0 here, which should return an unknown value of the
|
||||
// expected type since none of these elements are known at this
|
||||
// point anyway.
|
||||
selfAddr = n.ResourceAddr().Resource.Instance(addrs.IntKey(0))
|
||||
} else if hasForEach {
|
||||
// For a resource that has for_each, we allow each.value and each.key
|
||||
// but don't know at this stage what it will return.
|
||||
keyData = InstanceKeyEvalData{
|
||||
EachKey: cty.UnknownVal(cty.String),
|
||||
EachValue: cty.DynamicVal,
|
||||
}
|
||||
|
||||
// "self" can't point to an unknown key, but we'll force it to be
|
||||
// key "" here, which should return an unknown value of the
|
||||
// expected type since none of these elements are known at
|
||||
// this point anyway.
|
||||
selfAddr = n.ResourceAddr().Resource.Instance(addrs.StringKey(""))
|
||||
}
|
||||
|
||||
return ctx.EvaluateBlock(body, schema, selfAddr, keyData)
|
||||
}
|
||||
|
||||
// connectionBlockSupersetSchema is a schema representing the superset of all
|
||||
// possible arguments for "connection" blocks across all supported connection
|
||||
// types.
|
||||
//
|
||||
// This currently lives here because we've not yet updated our communicator
|
||||
// subsystem to be aware of schema itself. Once that is done, we can remove
|
||||
// this and use a type-specific schema from the communicator to validate
|
||||
// exactly what is expected for a given connection type.
|
||||
var connectionBlockSupersetSchema = &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
// NOTE: "type" is not included here because it's treated special
|
||||
// by the config loader and stored away in a separate field.
|
||||
|
||||
// Common attributes for both connection types
|
||||
"host": {
|
||||
Type: cty.String,
|
||||
Required: true,
|
||||
},
|
||||
"type": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"user": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"password": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"port": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"timeout": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"script_path": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
// For type=ssh only (enforced in ssh communicator)
|
||||
"target_platform": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"private_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"certificate": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"host_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"agent": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
"agent_identity": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_host": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_host_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_port": {
|
||||
Type: cty.Number,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_user": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_password": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_private_key": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"bastion_certificate": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
|
||||
// For type=winrm only (enforced in winrm communicator)
|
||||
"https": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
"insecure": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
"cacert": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
"use_ntlm": {
|
||||
Type: cty.Bool,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
func (n *NodeValidatableResource) validateResource(ctx EvalContext) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
provider, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
if providerSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("validateResource has nil schema for %s", n.Addr))
|
||||
return diags
|
||||
}
|
||||
|
||||
keyData := EvalDataForNoInstanceKey
|
||||
|
||||
switch {
|
||||
case n.Config.Count != nil:
|
||||
// If the config block has count, we'll evaluate with an unknown
|
||||
// number as count.index so we can still type check even though
|
||||
// we won't expand count until the plan phase.
|
||||
keyData = InstanceKeyEvalData{
|
||||
CountIndex: cty.UnknownVal(cty.Number),
|
||||
}
|
||||
|
||||
// Basic type-checking of the count argument. More complete validation
|
||||
// of this will happen when we DynamicExpand during the plan walk.
|
||||
countDiags := validateCount(ctx, n.Config.Count)
|
||||
diags = diags.Append(countDiags)
|
||||
|
||||
case n.Config.ForEach != nil:
|
||||
keyData = InstanceKeyEvalData{
|
||||
EachKey: cty.UnknownVal(cty.String),
|
||||
EachValue: cty.UnknownVal(cty.DynamicPseudoType),
|
||||
}
|
||||
|
||||
// Evaluate the for_each expression here so we can expose the diagnostics
|
||||
forEachDiags := validateForEach(ctx, n.Config.ForEach)
|
||||
diags = diags.Append(forEachDiags)
|
||||
}
|
||||
|
||||
diags = diags.Append(validateDependsOn(ctx, n.Config.DependsOn))
|
||||
|
||||
// Validate the provider_meta block for the provider this resource
|
||||
// belongs to, if there is one.
|
||||
//
|
||||
// Note: this will return an error for every resource a provider
|
||||
// uses in a module, if the provider_meta for that module is
|
||||
// incorrect. The only way to solve this that we've found is to
|
||||
// insert a new ProviderMeta graph node in the graph, and make all
|
||||
// that provider's resources in the module depend on the node. That's
|
||||
// an awful heavy hammer to swing for this feature, which should be
|
||||
// used only in limited cases with heavy coordination with the
|
||||
// Terraform team, so we're going to defer that solution for a future
|
||||
// enhancement to this functionality.
|
||||
/*
|
||||
if n.ProviderMetas != nil {
|
||||
if m, ok := n.ProviderMetas[n.ProviderAddr.ProviderConfig.Type]; ok && m != nil {
|
||||
// if the provider doesn't support this feature, throw an error
|
||||
if (*n.ProviderSchema).ProviderMeta == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", cfg.ProviderConfigAddr()),
|
||||
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
|
||||
Subject: &m.ProviderRange,
|
||||
})
|
||||
} else {
|
||||
_, _, metaDiags := ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
|
||||
diags = diags.Append(metaDiags)
|
||||
}
|
||||
}
|
||||
}
|
||||
*/
|
||||
// BUG(paddy): we're not validating provider_meta blocks on EvalValidate right now
|
||||
// because the ProviderAddr for the resource isn't available on the EvalValidate
|
||||
// struct.
|
||||
|
||||
// Provider entry point varies depending on resource mode, because
|
||||
// managed resources and data resources are two distinct concepts
|
||||
// in the provider abstraction.
|
||||
switch n.Config.Mode {
|
||||
case addrs.ManagedResourceMode:
|
||||
schema, _ := providerSchema.SchemaForResourceType(n.Config.Mode, n.Config.Type)
|
||||
if schema == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid resource type",
|
||||
Detail: fmt.Sprintf("The provider %s does not support resource type %q.", n.Config.ProviderConfigAddr(), n.Config.Type),
|
||||
Subject: &n.Config.TypeRange,
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
configVal, _, valDiags := ctx.EvaluateBlock(n.Config.Config, schema, nil, keyData)
|
||||
diags = diags.Append(valDiags)
|
||||
if valDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if n.Config.Managed != nil { // can be nil only in tests with poorly-configured mocks
|
||||
for _, traversal := range n.Config.Managed.IgnoreChanges {
|
||||
// validate the ignore_changes traversals apply.
|
||||
moreDiags := schema.StaticValidateTraversal(traversal)
|
||||
diags = diags.Append(moreDiags)
|
||||
|
||||
// TODO: we want to notify users that they can't use
|
||||
// ignore_changes for computed attributes, but we don't have an
|
||||
// easy way to correlate the config value, schema and
|
||||
// traversal together.
|
||||
}
|
||||
}
|
||||
|
||||
// Use unmarked value for validate request
|
||||
unmarkedConfigVal, _ := configVal.UnmarkDeep()
|
||||
req := providers.ValidateResourceTypeConfigRequest{
|
||||
TypeName: n.Config.Type,
|
||||
Config: unmarkedConfigVal,
|
||||
}
|
||||
|
||||
resp := provider.ValidateResourceTypeConfig(req)
|
||||
diags = diags.Append(resp.Diagnostics.InConfigBody(n.Config.Config))
|
||||
|
||||
case addrs.DataResourceMode:
|
||||
schema, _ := providerSchema.SchemaForResourceType(n.Config.Mode, n.Config.Type)
|
||||
if schema == nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid data source",
|
||||
Detail: fmt.Sprintf("The provider %s does not support data source %q.", n.Config.ProviderConfigAddr(), n.Config.Type),
|
||||
Subject: &n.Config.TypeRange,
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
configVal, _, valDiags := ctx.EvaluateBlock(n.Config.Config, schema, nil, keyData)
|
||||
diags = diags.Append(valDiags)
|
||||
if valDiags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Use unmarked value for validate request
|
||||
unmarkedConfigVal, _ := configVal.UnmarkDeep()
|
||||
req := providers.ValidateDataSourceConfigRequest{
|
||||
TypeName: n.Config.Type,
|
||||
Config: unmarkedConfigVal,
|
||||
}
|
||||
|
||||
resp := provider.ValidateDataSourceConfig(req)
|
||||
diags = diags.Append(resp.Diagnostics.InConfigBody(n.Config.Config))
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
func validateCount(ctx EvalContext, expr hcl.Expression) tfdiags.Diagnostics {
|
||||
if expr == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
|
||||
countVal, countDiags := ctx.EvaluateExpr(expr, cty.Number, nil)
|
||||
diags = diags.Append(countDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if countVal.IsNull() {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid count argument",
|
||||
Detail: `The given "count" argument value is null. An integer is required.`,
|
||||
Subject: expr.Range().Ptr(),
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
var err error
|
||||
countVal, err = convert.Convert(countVal, cty.Number)
|
||||
if err != nil {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid count argument",
|
||||
Detail: fmt.Sprintf(`The given "count" argument value is unsuitable: %s.`, err),
|
||||
Subject: expr.Range().Ptr(),
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
// If the value isn't known then that's the best we can do for now, but
|
||||
// we'll check more thoroughly during the plan walk.
|
||||
if !countVal.IsKnown() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// If we _do_ know the value, then we can do a few more checks here.
|
||||
var count int
|
||||
err = gocty.FromCtyValue(countVal, &count)
|
||||
if err != nil {
|
||||
// Isn't a whole number, etc.
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid count argument",
|
||||
Detail: fmt.Sprintf(`The given "count" argument value is unsuitable: %s.`, err),
|
||||
Subject: expr.Range().Ptr(),
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
if count < 0 {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid count argument",
|
||||
Detail: `The given "count" argument value is unsuitable: count cannot be negative.`,
|
||||
Subject: expr.Range().Ptr(),
|
||||
})
|
||||
return diags
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
func validateForEach(ctx EvalContext, expr hcl.Expression) (diags tfdiags.Diagnostics) {
|
||||
val, forEachDiags := evaluateForEachExpressionValue(expr, ctx, true)
|
||||
// If the value isn't known then that's the best we can do for now, but
|
||||
// we'll check more thoroughly during the plan walk
|
||||
if !val.IsKnown() {
|
||||
return diags
|
||||
}
|
||||
|
||||
if forEachDiags.HasErrors() {
|
||||
diags = diags.Append(forEachDiags)
|
||||
}
|
||||
|
||||
return diags
|
||||
}
|
||||
|
||||
func validateDependsOn(ctx EvalContext, dependsOn []hcl.Traversal) (diags tfdiags.Diagnostics) {
|
||||
for _, traversal := range dependsOn {
|
||||
ref, refDiags := addrs.ParseRef(traversal)
|
||||
diags = diags.Append(refDiags)
|
||||
if !refDiags.HasErrors() && len(ref.Remaining) != 0 {
|
||||
diags = diags.Append(&hcl.Diagnostic{
|
||||
Severity: hcl.DiagError,
|
||||
Summary: "Invalid depends_on reference",
|
||||
Detail: "References in depends_on must be to a whole object (resource, etc), not to an attribute of an object.",
|
||||
Subject: ref.Remaining.SourceRange().Ptr(),
|
||||
})
|
||||
}
|
||||
|
||||
// The ref must also refer to something that exists. To test that,
|
||||
// we'll just eval it and count on the fact that our evaluator will
|
||||
// detect references to non-existent objects.
|
||||
if !diags.HasErrors() {
|
||||
scope := ctx.EvaluationScope(nil, EvalDataForNoInstanceKey)
|
||||
if scope != nil { // sometimes nil in tests, due to incomplete mocks
|
||||
_, refDiags = scope.EvalReference(ref, cty.DynamicPseudoType)
|
||||
diags = diags.Append(refDiags)
|
||||
}
|
||||
}
|
||||
}
|
||||
return diags
|
||||
}
|
||||
|
|
|
@ -7,17 +7,153 @@ import (
|
|||
|
||||
"github.com/hashicorp/hcl/v2"
|
||||
"github.com/hashicorp/hcl/v2/hcltest"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/configs"
|
||||
"github.com/hashicorp/terraform/configs/configschema"
|
||||
"github.com/hashicorp/terraform/providers"
|
||||
"github.com/hashicorp/terraform/provisioners"
|
||||
"github.com/hashicorp/terraform/tfdiags"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
func TestEvalValidateResource_managedResource(t *testing.T) {
|
||||
func TestNodeValidatableResource_ValidateProvisioner_valid(t *testing.T) {
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
mp := &MockProvisioner{}
|
||||
ps := &configschema.Block{}
|
||||
ctx.ProvisionerSchemaSchema = ps
|
||||
ctx.ProvisionerProvisioner = mp
|
||||
|
||||
pc := &configs.Provisioner{
|
||||
Type: "baz",
|
||||
Config: hcl.EmptyBody(),
|
||||
Connection: &configs.Connection{
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"host": cty.StringVal("localhost"),
|
||||
"type": cty.StringVal("ssh"),
|
||||
}),
|
||||
},
|
||||
}
|
||||
|
||||
rc := &configs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_foo",
|
||||
Name: "bar",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{}),
|
||||
}
|
||||
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.validateProvisioner(ctx, pc, false, false)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("node.Eval failed: %s", diags.Err())
|
||||
}
|
||||
if !mp.ValidateProvisionerConfigCalled {
|
||||
t.Fatalf("p.ValidateProvisionerConfig not called")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeValidatableResource_ValidateProvisioner__warning(t *testing.T) {
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
mp := &MockProvisioner{}
|
||||
ps := &configschema.Block{}
|
||||
ctx.ProvisionerSchemaSchema = ps
|
||||
ctx.ProvisionerProvisioner = mp
|
||||
|
||||
pc := &configs.Provisioner{
|
||||
Type: "baz",
|
||||
Config: hcl.EmptyBody(),
|
||||
}
|
||||
|
||||
rc := &configs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_foo",
|
||||
Name: "bar",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{}),
|
||||
Managed: &configs.ManagedResource{},
|
||||
}
|
||||
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
},
|
||||
}
|
||||
|
||||
{
|
||||
var diags tfdiags.Diagnostics
|
||||
diags = diags.Append(tfdiags.SimpleWarning("foo is deprecated"))
|
||||
mp.ValidateProvisionerConfigResponse = provisioners.ValidateProvisionerConfigResponse{
|
||||
Diagnostics: diags,
|
||||
}
|
||||
}
|
||||
|
||||
diags := node.validateProvisioner(ctx, pc, false, false)
|
||||
if len(diags) != 1 {
|
||||
t.Fatalf("wrong number of diagnostics in %s; want one warning", diags.ErrWithWarnings())
|
||||
}
|
||||
|
||||
if got, want := diags[0].Description().Summary, mp.ValidateProvisionerConfigResponse.Diagnostics[0].Description().Summary; got != want {
|
||||
t.Fatalf("wrong warning %q; want %q", got, want)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeValidatableResource_ValidateProvisioner__conntectionInvalid(t *testing.T) {
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
mp := &MockProvisioner{}
|
||||
ps := &configschema.Block{}
|
||||
ctx.ProvisionerSchemaSchema = ps
|
||||
ctx.ProvisionerProvisioner = mp
|
||||
|
||||
pc := &configs.Provisioner{
|
||||
Type: "baz",
|
||||
Config: hcl.EmptyBody(),
|
||||
Connection: &configs.Connection{
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"type": cty.StringVal("ssh"),
|
||||
"bananananananana": cty.StringVal("foo"),
|
||||
"bazaz": cty.StringVal("bar"),
|
||||
}),
|
||||
},
|
||||
}
|
||||
|
||||
rc := &configs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_foo",
|
||||
Name: "bar",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{}),
|
||||
Managed: &configs.ManagedResource{},
|
||||
}
|
||||
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
},
|
||||
}
|
||||
|
||||
diags := node.validateProvisioner(ctx, pc, false, false)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatalf("node.Eval succeeded; want error")
|
||||
}
|
||||
if len(diags) != 3 {
|
||||
t.Fatalf("wrong number of diagnostics; want two errors\n\n%s", diags.Err())
|
||||
}
|
||||
|
||||
errStr := diags.Err().Error()
|
||||
if !(strings.Contains(errStr, "bananananananana") && strings.Contains(errStr, "bazaz")) {
|
||||
t.Fatalf("wrong errors %q; want something about each of our invalid connInfo keys", errStr)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNodeValidatableResource_ValidateResource_managedResource(t *testing.T) {
|
||||
mp := simpleMockProvider()
|
||||
mp.ValidateResourceTypeConfigFn = func(req providers.ValidateResourceTypeConfigRequest) providers.ValidateResourceTypeConfigResponse {
|
||||
if got, want := req.TypeName, "test_object"; got != want {
|
||||
|
@ -42,21 +178,20 @@ func TestEvalValidateResource_managedResource(t *testing.T) {
|
|||
"test_number": cty.NumberIntVal(2).Mark("sensitive"),
|
||||
}),
|
||||
}
|
||||
node := &EvalValidateResource{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "aws_instance",
|
||||
Name: "foo",
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Provider: &p,
|
||||
Config: rc,
|
||||
ProviderSchema: &mp.GetSchemaReturn,
|
||||
}
|
||||
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
ctx.ProviderSchemaSchema = mp.GetSchemaReturn
|
||||
ctx.ProviderProvider = p
|
||||
|
||||
err := node.Validate(ctx)
|
||||
err := node.validateResource(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
}
|
||||
|
@ -66,7 +201,7 @@ func TestEvalValidateResource_managedResource(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateResource_managedResourceCount(t *testing.T) {
|
||||
func TestNodeValidatableResource_ValidateResource_managedResourceCount(t *testing.T) {
|
||||
mp := simpleMockProvider()
|
||||
mp.ValidateResourceTypeConfigFn = func(req providers.ValidateResourceTypeConfigRequest) providers.ValidateResourceTypeConfigResponse {
|
||||
if got, want := req.TypeName, "test_object"; got != want {
|
||||
|
@ -88,23 +223,22 @@ func TestEvalValidateResource_managedResourceCount(t *testing.T) {
|
|||
"test_string": cty.StringVal("bar"),
|
||||
}),
|
||||
}
|
||||
node := &EvalValidateResource{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "aws_instance",
|
||||
Name: "foo",
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Provider: &p,
|
||||
Config: rc,
|
||||
ProviderSchema: &mp.GetSchemaReturn,
|
||||
}
|
||||
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
ctx.ProviderSchemaSchema = mp.GetSchemaReturn
|
||||
ctx.ProviderProvider = p
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
diags := node.validateResource(ctx)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("err: %s", diags.Err())
|
||||
}
|
||||
|
||||
if !mp.ValidateResourceTypeConfigCalled {
|
||||
|
@ -112,7 +246,7 @@ func TestEvalValidateResource_managedResourceCount(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateResource_dataSource(t *testing.T) {
|
||||
func TestNodeValidatableResource_ValidateResource_dataSource(t *testing.T) {
|
||||
mp := simpleMockProvider()
|
||||
mp.ValidateDataSourceConfigFn = func(req providers.ValidateDataSourceConfigRequest) providers.ValidateDataSourceConfigResponse {
|
||||
if got, want := req.TypeName, "test_object"; got != want {
|
||||
|
@ -138,23 +272,22 @@ func TestEvalValidateResource_dataSource(t *testing.T) {
|
|||
}),
|
||||
}
|
||||
|
||||
node := &EvalValidateResource{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.DataResourceMode,
|
||||
Type: "aws_ami",
|
||||
Name: "foo",
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Provider: &p,
|
||||
Config: rc,
|
||||
ProviderSchema: &mp.GetSchemaReturn,
|
||||
}
|
||||
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
ctx.ProviderSchemaSchema = mp.GetSchemaReturn
|
||||
ctx.ProviderProvider = p
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
diags := node.validateResource(ctx)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("err: %s", diags.Err())
|
||||
}
|
||||
|
||||
if !mp.ValidateDataSourceConfigCalled {
|
||||
|
@ -162,7 +295,7 @@ func TestEvalValidateResource_dataSource(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateResource_validReturnsNilError(t *testing.T) {
|
||||
func TestNodeValidatableResource_ValidateResource_valid(t *testing.T) {
|
||||
mp := simpleMockProvider()
|
||||
mp.ValidateResourceTypeConfigFn = func(req providers.ValidateResourceTypeConfigRequest) providers.ValidateResourceTypeConfigResponse {
|
||||
return providers.ValidateResourceTypeConfigResponse{}
|
||||
|
@ -175,27 +308,26 @@ func TestEvalValidateResource_validReturnsNilError(t *testing.T) {
|
|||
Name: "foo",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{}),
|
||||
}
|
||||
node := &EvalValidateResource{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_object",
|
||||
Name: "foo",
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_object.foo"),
|
||||
Config: rc,
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Provider: &p,
|
||||
Config: rc,
|
||||
ProviderSchema: &mp.GetSchemaReturn,
|
||||
}
|
||||
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
ctx.ProviderSchemaSchema = mp.GetSchemaReturn
|
||||
ctx.ProviderProvider = p
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("Expected nil error, got: %s", err)
|
||||
diags := node.validateResource(ctx)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("err: %s", diags.Err())
|
||||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateResource_warningsAndErrorsPassedThrough(t *testing.T) {
|
||||
func TestNodeValidatableResource_ValidateResource_warningsAndErrorsPassedThrough(t *testing.T) {
|
||||
mp := simpleMockProvider()
|
||||
mp.ValidateResourceTypeConfigFn = func(req providers.ValidateResourceTypeConfigRequest) providers.ValidateResourceTypeConfigResponse {
|
||||
var diags tfdiags.Diagnostics
|
||||
|
@ -213,27 +345,24 @@ func TestEvalValidateResource_warningsAndErrorsPassedThrough(t *testing.T) {
|
|||
Name: "foo",
|
||||
Config: configs.SynthBody("", map[string]cty.Value{}),
|
||||
}
|
||||
node := &EvalValidateResource{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "test_object",
|
||||
Name: "foo",
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Provider: &p,
|
||||
Config: rc,
|
||||
ProviderSchema: &mp.GetSchemaReturn,
|
||||
}
|
||||
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
ctx.ProviderSchemaSchema = mp.GetSchemaReturn
|
||||
ctx.ProviderProvider = p
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err == nil {
|
||||
diags := node.validateResource(ctx)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("unexpected success; want error")
|
||||
}
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
diags = diags.Append(err)
|
||||
bySeverity := map[tfdiags.Severity]tfdiags.Diagnostics{}
|
||||
for _, diag := range diags {
|
||||
bySeverity[diag.Severity()] = append(bySeverity[diag.Severity()], diag)
|
||||
|
@ -246,7 +375,7 @@ func TestEvalValidateResource_warningsAndErrorsPassedThrough(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateResource_invalidDependsOn(t *testing.T) {
|
||||
func TestNodeValidatableResource_ValidateResource_invalidDependsOn(t *testing.T) {
|
||||
mp := simpleMockProvider()
|
||||
mp.ValidateResourceTypeConfigFn = func(req providers.ValidateResourceTypeConfigRequest) providers.ValidateResourceTypeConfigResponse {
|
||||
return providers.ValidateResourceTypeConfigResponse{}
|
||||
|
@ -278,21 +407,20 @@ func TestEvalValidateResource_invalidDependsOn(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}
|
||||
node := &EvalValidateResource{
|
||||
Addr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "aws_instance",
|
||||
Name: "foo",
|
||||
node := NodeValidatableResource{
|
||||
NodeAbstractResource: &NodeAbstractResource{
|
||||
Addr: mustConfigResourceAddr("test_foo.bar"),
|
||||
Config: rc,
|
||||
ResolvedProvider: mustProviderConfig(`provider["registry.terraform.io/hashicorp/aws"]`),
|
||||
},
|
||||
Provider: &p,
|
||||
Config: rc,
|
||||
ProviderSchema: &mp.GetSchemaReturn,
|
||||
}
|
||||
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
ctx.ProviderSchemaSchema = mp.GetSchemaReturn
|
||||
ctx.ProviderProvider = p
|
||||
|
||||
diags := node.Validate(ctx)
|
||||
diags := node.validateResource(ctx)
|
||||
if diags.HasErrors() {
|
||||
t.Fatalf("error for supposedly-valid config: %s", diags.ErrWithWarnings())
|
||||
}
|
||||
|
@ -313,7 +441,7 @@ func TestEvalValidateResource_invalidDependsOn(t *testing.T) {
|
|||
},
|
||||
})
|
||||
|
||||
diags = node.Validate(ctx)
|
||||
diags = node.validateResource(ctx)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("no error for invalid depends_on")
|
||||
}
|
||||
|
@ -329,7 +457,7 @@ func TestEvalValidateResource_invalidDependsOn(t *testing.T) {
|
|||
},
|
||||
})
|
||||
|
||||
diags = node.Validate(ctx)
|
||||
diags = node.validateResource(ctx)
|
||||
if !diags.HasErrors() {
|
||||
t.Fatal("no error for invalid depends_on")
|
||||
}
|
||||
|
@ -337,151 +465,3 @@ func TestEvalValidateResource_invalidDependsOn(t *testing.T) {
|
|||
t.Fatalf("wrong error\ngot: %s\nwant: Message containing %q", got, want)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateProvisioner_valid(t *testing.T) {
|
||||
mp := &MockProvisioner{}
|
||||
var p provisioners.Interface = mp
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
|
||||
schema := &configschema.Block{}
|
||||
|
||||
node := &EvalValidateProvisioner{
|
||||
ResourceAddr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "foo",
|
||||
Name: "bar",
|
||||
},
|
||||
Provisioner: &p,
|
||||
Schema: &schema,
|
||||
Config: &configs.Provisioner{
|
||||
Type: "baz",
|
||||
Config: hcl.EmptyBody(),
|
||||
Connection: &configs.Connection{
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"host": cty.StringVal("localhost"),
|
||||
"type": cty.StringVal("ssh"),
|
||||
}),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err != nil {
|
||||
t.Fatalf("node.Eval failed: %s", err)
|
||||
}
|
||||
if !mp.ValidateProvisionerConfigCalled {
|
||||
t.Fatalf("p.ValidateProvisionerConfig not called")
|
||||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateProvisioner_warning(t *testing.T) {
|
||||
mp := &MockProvisioner{}
|
||||
var p provisioners.Interface = mp
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
|
||||
schema := &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"type": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
node := &EvalValidateProvisioner{
|
||||
ResourceAddr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "foo",
|
||||
Name: "bar",
|
||||
},
|
||||
Provisioner: &p,
|
||||
Schema: &schema,
|
||||
Config: &configs.Provisioner{
|
||||
Type: "baz",
|
||||
Config: hcl.EmptyBody(),
|
||||
Connection: &configs.Connection{
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"host": cty.StringVal("localhost"),
|
||||
"type": cty.StringVal("ssh"),
|
||||
}),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
{
|
||||
var diags tfdiags.Diagnostics
|
||||
diags = diags.Append(tfdiags.SimpleWarning("foo is deprecated"))
|
||||
mp.ValidateProvisionerConfigResponse = provisioners.ValidateProvisionerConfigResponse{
|
||||
Diagnostics: diags,
|
||||
}
|
||||
}
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err == nil {
|
||||
t.Fatalf("node.Eval succeeded; want error")
|
||||
}
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
diags = diags.Append(err)
|
||||
if len(diags) != 1 {
|
||||
t.Fatalf("wrong number of diagnostics in %s; want one warning", diags.ErrWithWarnings())
|
||||
}
|
||||
|
||||
if got, want := diags[0].Description().Summary, mp.ValidateProvisionerConfigResponse.Diagnostics[0].Description().Summary; got != want {
|
||||
t.Fatalf("wrong warning %q; want %q", got, want)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEvalValidateProvisioner_connectionInvalid(t *testing.T) {
|
||||
var p provisioners.Interface = &MockProvisioner{}
|
||||
ctx := &MockEvalContext{}
|
||||
ctx.installSimpleEval()
|
||||
|
||||
schema := &configschema.Block{
|
||||
Attributes: map[string]*configschema.Attribute{
|
||||
"type": {
|
||||
Type: cty.String,
|
||||
Optional: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
node := &EvalValidateProvisioner{
|
||||
ResourceAddr: addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "foo",
|
||||
Name: "bar",
|
||||
},
|
||||
Provisioner: &p,
|
||||
Schema: &schema,
|
||||
Config: &configs.Provisioner{
|
||||
Type: "baz",
|
||||
Config: hcl.EmptyBody(),
|
||||
Connection: &configs.Connection{
|
||||
Config: configs.SynthBody("", map[string]cty.Value{
|
||||
"type": cty.StringVal("ssh"),
|
||||
"bananananananana": cty.StringVal("foo"),
|
||||
"bazaz": cty.StringVal("bar"),
|
||||
}),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
err := node.Validate(ctx)
|
||||
if err == nil {
|
||||
t.Fatalf("node.Eval succeeded; want error")
|
||||
}
|
||||
|
||||
var diags tfdiags.Diagnostics
|
||||
diags = diags.Append(err)
|
||||
if len(diags) != 3 {
|
||||
t.Fatalf("wrong number of diagnostics; want two errors\n\n%s", diags.Err())
|
||||
}
|
||||
|
||||
errStr := diags.Err().Error()
|
||||
if !(strings.Contains(errStr, "bananananananana") && strings.Contains(errStr, "bazaz")) {
|
||||
t.Fatalf("wrong errors %q; want something about each of our invalid connInfo keys", errStr)
|
||||
}
|
||||
}
|
|
@ -0,0 +1,32 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"log"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/plans"
|
||||
)
|
||||
|
||||
// reducePlan takes a planned resource instance change as might be produced by
|
||||
// Plan or PlanDestroy and "simplifies" it to a single atomic action to be
|
||||
// performed by a specific graph node.
|
||||
//
|
||||
// Callers must specify whether they are a destroy node or a regular apply node.
|
||||
// If the result is NoOp then the given change requires no action for the
|
||||
// specific graph node calling this and so evaluation of the that graph node
|
||||
// should exit early and take no action.
|
||||
//
|
||||
// The returned object may either be identical to the input change or a new
|
||||
// change object derived from the input. Because of the former case, the caller
|
||||
// must not mutate the object returned in OutChange.
|
||||
func reducePlan(addr addrs.ResourceInstance, in *plans.ResourceInstanceChange, destroy bool) *plans.ResourceInstanceChange {
|
||||
out := in.Simplify(destroy)
|
||||
if out.Action != in.Action {
|
||||
if destroy {
|
||||
log.Printf("[TRACE] reducePlan: %s change simplified from %s to %s for destroy node", addr, in.Action, out.Action)
|
||||
} else {
|
||||
log.Printf("[TRACE] reducePlan: %s change simplified from %s to %s for apply node", addr, in.Action, out.Action)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
|
@ -92,6 +92,30 @@ func TestProcessIgnoreChangesIndividual(t *testing.T) {
|
|||
"b": cty.StringVal("new b value"),
|
||||
}),
|
||||
},
|
||||
"map": {
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
"a0": cty.StringVal("a0 value"),
|
||||
"a1": cty.StringVal("a1 value"),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
"a0": cty.StringVal("new a0 value"),
|
||||
"a1": cty.UnknownVal(cty.String),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
[]string{`a`},
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
"a0": cty.StringVal("a0 value"),
|
||||
"a1": cty.StringVal("a1 value"),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
},
|
||||
"map_index": {
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
|
@ -136,6 +160,30 @@ func TestProcessIgnoreChangesIndividual(t *testing.T) {
|
|||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
},
|
||||
"map_index_unknown_value": {
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
"a0": cty.StringVal("a0 value"),
|
||||
"a1": cty.StringVal("a1 value"),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
"a0": cty.StringVal("a0 value"),
|
||||
"a1": cty.UnknownVal(cty.String),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
[]string{`a["a1"]`},
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
"a0": cty.StringVal("a0 value"),
|
||||
"a1": cty.StringVal("a1 value"),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
},
|
||||
"map_index_multiple_keys": {
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.MapVal(map[string]cty.Value{
|
||||
|
@ -297,6 +345,30 @@ func TestProcessIgnoreChangesIndividual(t *testing.T) {
|
|||
"b": cty.StringVal("new b value"),
|
||||
}),
|
||||
},
|
||||
"unknown_object_attribute": {
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"foo": cty.StringVal("a.foo value"),
|
||||
"bar": cty.StringVal("a.bar value"),
|
||||
}),
|
||||
"b": cty.StringVal("b value"),
|
||||
}),
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"foo": cty.StringVal("new a.foo value"),
|
||||
"bar": cty.UnknownVal(cty.String),
|
||||
}),
|
||||
"b": cty.StringVal("new b value"),
|
||||
}),
|
||||
[]string{"a.bar"},
|
||||
cty.ObjectVal(map[string]cty.Value{
|
||||
"a": cty.ObjectVal(map[string]cty.Value{
|
||||
"foo": cty.StringVal("new a.foo value"),
|
||||
"bar": cty.StringVal("a.bar value"),
|
||||
}),
|
||||
"b": cty.StringVal("new b value"),
|
||||
}),
|
||||
},
|
||||
}
|
||||
|
||||
for name, test := range tests {
|
|
@ -120,7 +120,7 @@ func (n *graphNodeImportState) Execute(ctx EvalContext, op walkOperation) (diags
|
|||
// Reset our states
|
||||
n.states = nil
|
||||
|
||||
provider, _, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
provider, _, err := getProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
|
@ -267,53 +267,20 @@ func (n *graphNodeImportStateSub) Execute(ctx EvalContext, op walkOperation) (di
|
|||
}
|
||||
|
||||
state := n.State.AsInstanceObject()
|
||||
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
|
||||
diags = diags.Append(err)
|
||||
|
||||
// Refresh
|
||||
riNode := &NodeAbstractResourceInstance{
|
||||
Addr: n.TargetAddr,
|
||||
NodeAbstractResource: NodeAbstractResource{
|
||||
ResolvedProvider: n.ResolvedProvider,
|
||||
},
|
||||
}
|
||||
state, refreshDiags := riNode.refresh(ctx, state)
|
||||
diags = diags.Append(refreshDiags)
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// EvalRefresh
|
||||
evalRefresh := &EvalRefresh{
|
||||
Addr: n.TargetAddr.Resource,
|
||||
ProviderAddr: n.ResolvedProvider,
|
||||
Provider: &provider,
|
||||
ProviderSchema: &providerSchema,
|
||||
State: &state,
|
||||
Output: &state,
|
||||
}
|
||||
diags = diags.Append(evalRefresh.Eval(ctx))
|
||||
if diags.HasErrors() {
|
||||
return diags
|
||||
}
|
||||
|
||||
// Verify the existance of the imported resource
|
||||
if state.Value.IsNull() {
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
"Cannot import non-existent remote object",
|
||||
fmt.Sprintf(
|
||||
"While attempting to import an existing object to %s, the provider detected that no object exists with the given id. Only pre-existing objects can be imported; check that the id is correct and that it is associated with the provider's configured region or endpoint, or use \"terraform apply\" to create a new remote object for this resource.",
|
||||
n.TargetAddr.Resource.String(),
|
||||
),
|
||||
))
|
||||
return diags
|
||||
}
|
||||
|
||||
schema, currentVersion := providerSchema.SchemaForResourceAddr(n.TargetAddr.ContainingResource().Resource)
|
||||
if schema == nil {
|
||||
// It shouldn't be possible to get this far in any real scenario
|
||||
// without a schema, but we might end up here in contrived tests that
|
||||
// fail to set up their world properly.
|
||||
diags = diags.Append(fmt.Errorf("failed to encode %s in state: no resource type schema available", n.TargetAddr.Resource))
|
||||
return diags
|
||||
}
|
||||
src, err := state.Encode(schema.ImpliedType(), currentVersion)
|
||||
if err != nil {
|
||||
diags = diags.Append(fmt.Errorf("failed to encode %s in state: %s", n.TargetAddr.Resource, err))
|
||||
return diags
|
||||
}
|
||||
ctx.State().SetResourceInstanceCurrent(n.TargetAddr, src, n.ResolvedProvider)
|
||||
|
||||
diags = diags.Append(riNode.writeResourceInstanceState(ctx, state, nil, workingState))
|
||||
return diags
|
||||
}
|
||||
|
|
|
@ -0,0 +1,19 @@
|
|||
package terraform
|
||||
|
||||
// updateStateHook calls the PostStateUpdate hook with the current state.
|
||||
func updateStateHook(ctx EvalContext) error {
|
||||
// In principle we could grab the lock here just long enough to take a
|
||||
// deep copy and then pass that to our hooks below, but we'll instead
|
||||
// hold the hook for the duration to avoid the potential confusing
|
||||
// situation of us racing to call PostStateUpdate concurrently with
|
||||
// different state snapshots.
|
||||
stateSync := ctx.State()
|
||||
state := stateSync.Lock().DeepCopy()
|
||||
defer stateSync.Unlock()
|
||||
|
||||
// Call the hook
|
||||
err := ctx.Hook(func(h Hook) (HookAction, error) {
|
||||
return h.PostStateUpdate(state)
|
||||
})
|
||||
return err
|
||||
}
|
|
@ -0,0 +1,33 @@
|
|||
package terraform
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
|
||||
"github.com/hashicorp/terraform/addrs"
|
||||
"github.com/hashicorp/terraform/states"
|
||||
)
|
||||
|
||||
func TestUpdateStateHook(t *testing.T) {
|
||||
mockHook := new(MockHook)
|
||||
|
||||
state := states.NewState()
|
||||
state.Module(addrs.RootModuleInstance).SetLocalValue("foo", cty.StringVal("hello"))
|
||||
|
||||
ctx := new(MockEvalContext)
|
||||
ctx.HookHook = mockHook
|
||||
ctx.StateState = state.SyncWrapper()
|
||||
|
||||
if err := updateStateHook(ctx); err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
}
|
||||
|
||||
if !mockHook.PostStateUpdateCalled {
|
||||
t.Fatal("should call PostStateUpdate")
|
||||
}
|
||||
if mockHook.PostStateUpdateState.LocalValue(addrs.LocalValue{Name: "foo"}.Absolute(addrs.RootModuleInstance)) != cty.StringVal("hello") {
|
||||
t.Fatalf("wrong state passed to hook: %s", spew.Sdump(mockHook.PostStateUpdateState))
|
||||
}
|
||||
}
|
|
@ -13,14 +13,14 @@ import (
|
|||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
// UpgradeResourceState will, if necessary, run the provider-defined upgrade
|
||||
// upgradeResourceState will, if necessary, run the provider-defined upgrade
|
||||
// logic against the given state object to make it compliant with the
|
||||
// current schema version. This is a no-op if the given state object is
|
||||
// already at the latest version.
|
||||
//
|
||||
// If any errors occur during upgrade, error diagnostics are returned. In that
|
||||
// case it is not safe to proceed with using the original state object.
|
||||
func UpgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Interface, src *states.ResourceInstanceObjectSrc, currentSchema *configschema.Block, currentVersion uint64) (*states.ResourceInstanceObjectSrc, tfdiags.Diagnostics) {
|
||||
func upgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Interface, src *states.ResourceInstanceObjectSrc, currentSchema *configschema.Block, currentVersion uint64) (*states.ResourceInstanceObjectSrc, tfdiags.Diagnostics) {
|
||||
// Remove any attributes from state that are not present in the schema.
|
||||
// This was previously taken care of by the provider, but data sources do
|
||||
// not go through the UpgradeResourceState process.
|
||||
|
@ -42,7 +42,7 @@ func UpgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Int
|
|||
// TODO: This should eventually use a proper FQN.
|
||||
providerType := addr.Resource.Resource.ImpliedProvider()
|
||||
if src.SchemaVersion > currentVersion {
|
||||
log.Printf("[TRACE] UpgradeResourceState: can't downgrade state for %s from version %d to %d", addr, src.SchemaVersion, currentVersion)
|
||||
log.Printf("[TRACE] upgradeResourceState: can't downgrade state for %s from version %d to %d", addr, src.SchemaVersion, currentVersion)
|
||||
var diags tfdiags.Diagnostics
|
||||
diags = diags.Append(tfdiags.Sourceless(
|
||||
tfdiags.Error,
|
||||
|
@ -62,9 +62,9 @@ func UpgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Int
|
|||
// representation, since only the provider has enough information to
|
||||
// understand a flatmap built against an older schema.
|
||||
if src.SchemaVersion != currentVersion {
|
||||
log.Printf("[TRACE] UpgradeResourceState: upgrading state for %s from version %d to %d using provider %q", addr, src.SchemaVersion, currentVersion, providerType)
|
||||
log.Printf("[TRACE] upgradeResourceState: upgrading state for %s from version %d to %d using provider %q", addr, src.SchemaVersion, currentVersion, providerType)
|
||||
} else {
|
||||
log.Printf("[TRACE] UpgradeResourceState: schema version of %s is still %d; calling provider %q for any other minor fixups", addr, currentVersion, providerType)
|
||||
log.Printf("[TRACE] upgradeResourceState: schema version of %s is still %d; calling provider %q for any other minor fixups", addr, currentVersion, providerType)
|
||||
}
|
||||
|
||||
req := providers.UpgradeResourceStateRequest{
|
|
@ -11,18 +11,10 @@ import (
|
|||
"github.com/hashicorp/terraform/tfdiags"
|
||||
)
|
||||
|
||||
// EvalValidateSelfRef is an EvalNode implementation that checks to ensure that
|
||||
// expressions within a particular referencable block do not reference that
|
||||
// same block.
|
||||
type EvalValidateSelfRef struct {
|
||||
Addr addrs.Referenceable
|
||||
Config hcl.Body
|
||||
ProviderSchema **ProviderSchema
|
||||
}
|
||||
|
||||
func (n *EvalValidateSelfRef) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
||||
// validateSelfRef checks to ensure that expressions within a particular
|
||||
// referencable block do not reference that same block.
|
||||
func validateSelfRef(addr addrs.Referenceable, config hcl.Body, providerSchema *ProviderSchema) tfdiags.Diagnostics {
|
||||
var diags tfdiags.Diagnostics
|
||||
addr := n.Addr
|
||||
|
||||
addrStrs := make([]string, 0, 1)
|
||||
addrStrs = append(addrStrs, addr.String())
|
||||
|
@ -32,12 +24,11 @@ func (n *EvalValidateSelfRef) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
|||
addrStrs = append(addrStrs, tAddr.ContainingResource().String())
|
||||
}
|
||||
|
||||
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
|
||||
if providerSchema == nil {
|
||||
diags = diags.Append(fmt.Errorf("provider schema unavailable while validating %s for self-references; this is a bug in Terraform and should be reported", addr))
|
||||
return diags
|
||||
}
|
||||
|
||||
providerSchema := *n.ProviderSchema
|
||||
var schema *configschema.Block
|
||||
switch tAddr := addr.(type) {
|
||||
case addrs.Resource:
|
||||
|
@ -51,7 +42,7 @@ func (n *EvalValidateSelfRef) Eval(ctx EvalContext) tfdiags.Diagnostics {
|
|||
return diags
|
||||
}
|
||||
|
||||
refs, _ := lang.ReferencesInBlock(n.Config, schema)
|
||||
refs, _ := lang.ReferencesInBlock(config, schema)
|
||||
for _, ref := range refs {
|
||||
for _, addrStr := range addrStrs {
|
||||
if ref.Subject.String() == addrStr {
|
|
@ -12,7 +12,7 @@ import (
|
|||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
func TestEvalValidateSelfRef(t *testing.T) {
|
||||
func TestValidateSelfRef(t *testing.T) {
|
||||
rAddr := addrs.Resource{
|
||||
Mode: addrs.ManagedResourceMode,
|
||||
Type: "aws_instance",
|
||||
|
@ -92,12 +92,7 @@ func TestEvalValidateSelfRef(t *testing.T) {
|
|||
},
|
||||
}
|
||||
|
||||
n := &EvalValidateSelfRef{
|
||||
Addr: test.Addr,
|
||||
Config: body,
|
||||
ProviderSchema: &ps,
|
||||
}
|
||||
diags := n.Eval(nil)
|
||||
diags := validateSelfRef(test.Addr, body, ps)
|
||||
if diags.HasErrors() != test.Err {
|
||||
if test.Err {
|
||||
t.Errorf("unexpected success; want error")
|
|
@ -171,9 +171,7 @@ Therefore if automatic installation is not desired, it is important to ensure
|
|||
that version constraints within Terraform configurations do not exclude all
|
||||
of the versions available from the bundle. If a suitable version cannot be
|
||||
found in the bundle, Terraform _will_ attempt to satisfy that dependency by
|
||||
automatic installation from the official repository. If you want
|
||||
`terraform init` to explicitly fail instead of contacting the repository, pass
|
||||
the `-get-plugins=false` option.
|
||||
automatic installation from the official repository.
|
||||
|
||||
For full details about provider resolution, see
|
||||
[How Terraform Works: Plugin Discovery](https://www.terraform.io/docs/extend/how-terraform-works.html#discovery).
|
||||
|
|
|
@ -142,11 +142,10 @@ You can modify `terraform init`'s plugin behavior with the following options:
|
|||
cause Terraform to ignore any selections recorded in the dependency lock
|
||||
file, and to take the newest available version matching the configured
|
||||
version constraints.
|
||||
- `-get-plugins=false` — Skip plugin installation. If you previously ran
|
||||
`terraform init` without this option, the previously-installed plugins will
|
||||
remain available in your current working directory. If you have not
|
||||
previously run without this option, subsequent Terraform commands will
|
||||
fail due to the needed provider plugins being unavailable.
|
||||
- `-get-plugins=false` — Skip plugin installation. _Note: Since Terraform 0.13, this
|
||||
option has been superseded by [`provider_installation`](./cli-config.html#provider-installation)
|
||||
blocks or the [`plugin_cache_dir`](./cli-config.html#plugin_cache_dir) setting.
|
||||
It should not be used in Terraform versions 0.13+._
|
||||
- `-plugin-dir=PATH` — Force plugin installation to read plugins _only_ from
|
||||
the specified directory, as if it had been configured as a `filesystem_mirror`
|
||||
in the CLI configuration. If you intend to routinely use a particular
|
||||
|
|
|
@ -24,6 +24,11 @@ The command-line flags are all optional. The list of available flags are:
|
|||
* `-json` - If specified, the outputs are formatted as a JSON object, with
|
||||
a key per output. If `NAME` is specified, only the output specified will be
|
||||
returned. This can be piped into tools such as `jq` for further processing.
|
||||
* `-raw` - If specified, Terraform will convert the specified output value to a
|
||||
string and print that string directly to the output, without any special
|
||||
formatting. This can be convenient when working with shell scripts, but
|
||||
it only supports string, number, and boolean values. Use `-json` instead
|
||||
for processing complex data types.
|
||||
* `-no-color` - If specified, output won't contain any color.
|
||||
* `-state=path` - Path to the state file. Defaults to "terraform.tfstate".
|
||||
Ignored when [remote state](/docs/state/remote.html) is used.
|
||||
|
@ -88,21 +93,31 @@ instance_ips = [
|
|||
## Use in automation
|
||||
|
||||
The `terraform output` command by default displays in a human-readable format,
|
||||
which can change over time to improve clarity. For use in automation, use
|
||||
`-json` to output the stable JSON format. You can parse the output using a JSON
|
||||
command-line parser such as [jq](https://stedolan.github.io/jq/).
|
||||
which can change over time to improve clarity.
|
||||
|
||||
For string outputs, you can remove quotes using `jq -r`:
|
||||
For scripting and automation, use `-json` to produce the stable JSON format.
|
||||
You can parse the output using a JSON command-line parser such as
|
||||
[jq](https://stedolan.github.io/jq/):
|
||||
|
||||
```shellsession
|
||||
$ terraform output -json lb_address | jq -r .
|
||||
$ terraform output -json instance_ips | jq -r '.[0]'
|
||||
54.43.114.12
|
||||
```
|
||||
|
||||
For the common case of directly using a string value in a shell script, you
|
||||
can use `-raw` instead, which will print the string directly with no extra
|
||||
escaping or whitespace.
|
||||
|
||||
```shellsession
|
||||
$ terraform output -raw lb_address
|
||||
my-app-alb-1657023003.us-east-1.elb.amazonaws.com
|
||||
```
|
||||
|
||||
To query for a particular value in a list, use `jq` with an index filter. For
|
||||
example, to query for the first instance's IP address:
|
||||
The `-raw` option works only with values that Terraform can automatically
|
||||
convert to strings. Use `-json` instead, possibly combined with `jq`, to
|
||||
work with complex-typed values such as objects.
|
||||
|
||||
```shellsession
|
||||
$ terraform output -json instance_ips | jq '.[0]'
|
||||
"54.43.114.12"
|
||||
```
|
||||
Terraform strings are sequences of Unicode characters rather than raw bytes,
|
||||
so the `-raw` output will be UTF-8 encoded when it contains non-ASCII
|
||||
characters. If you need a different character encoding, use a separate command
|
||||
such as `iconv` to transcode Terraform's raw output.
|
||||
|
|
|
@ -57,11 +57,12 @@ aws_instance.bar[1]
|
|||
|
||||
## Example: Filtering by Module
|
||||
|
||||
This example will only list resources in the given module:
|
||||
This example will list resources in the given module and any submodules:
|
||||
|
||||
```
|
||||
$ terraform state list module.elb
|
||||
module.elb.aws_elb.main
|
||||
module.elb.module.secgroups.aws_security_group.sg
|
||||
```
|
||||
|
||||
## Example: Filtering by ID
|
||||
|
|
|
@ -118,6 +118,12 @@ Data resources have the same dependency resolution behavior
|
|||
Setting the `depends_on` meta-argument within `data` blocks defers reading of
|
||||
the data source until after all changes to the dependencies have been applied.
|
||||
|
||||
In order to ensure that data sources are accessing the most up-to-date
|
||||
information possible in a wide variety of use cases, arguments directly
|
||||
referencing managed resources are treated the same as if the resource was
|
||||
listed in `depends_on`. This behavior can be avoided when desired by indirectly
|
||||
referencing the managed resource values through a `local` value.
|
||||
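As a minimal sketch of that workaround (the resource type, data source type, and argument names here are invented for illustration), routing the reference through a `local` value means the data source no longer refers to the managed resource directly:

```hcl
resource "aws_instance" "example" {
  # ...
}

locals {
  # Expressions that read this local value are not treated as direct
  # references to the managed resource for the purposes described above.
  example_instance_id = aws_instance.example.id
}

# "example_data" is a placeholder data source type, not a real provider type.
data "example_data" "related" {
  instance_id = local.example_instance_id
}
```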
|
||||
~> **NOTE:** **In Terraform 0.12 and earlier**, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using `depends_on` with `data` resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses `depends_on` with a `data` resource can never converge. Due to this behavior, we do not recommend using `depends_on` with data resources.
|
||||
|
||||
|
||||
|
|
|
@ -41,3 +41,28 @@ The two result values may be of any type, but they must both
|
|||
be of the _same_ type so that Terraform can determine what type the whole
|
||||
conditional expression will return without knowing the condition value.
|
||||
|
||||
If the two result expressions don't produce the same type then Terraform will
|
||||
attempt to find a type that they can both convert to, and make those
|
||||
conversions automatically if possible.
|
||||
|
||||
For example, the following expression is valid and will always return a string,
|
||||
because in Terraform all numbers can convert automatically to a string using
|
||||
decimal digits:
|
||||
|
||||
```hcl
|
||||
var.example ? 12 : "hello"
|
||||
```
|
||||
|
||||
Relying on this automatic conversion behavior can be confusing for those who
|
||||
are not familiar with Terraform's conversion rules though, so we recommend
|
||||
being explicit by using type conversion functions in any situation where there may
|
||||
be some uncertainty about the expected result type.
|
||||
|
||||
The following example is contrived because it would be easier to write the
|
||||
constant `"12"` instead of the type conversion in this case, but shows how to
|
||||
use [`tostring`](../functions/tostring.html) to explicitly convert a number to
|
||||
a string.
|
||||
|
||||
```hcl
|
||||
var.example ? tostring(12) : "hello"
|
||||
```
|
||||
|
|
|
@ -9,7 +9,8 @@ page_title: "Dynamic Blocks - Configuration Language"
|
|||
Within top-level block constructs like resources, expressions can usually be
|
||||
used only when assigning a value to an argument using the `name = expression`
|
||||
form. This covers many uses, but some resource types include repeatable _nested
|
||||
blocks_ in their arguments, which do not accept expressions:
|
||||
blocks_ in their arguments, which typically represent separate objects that
|
||||
are related to (or embedded within) the containing object:
|
||||
|
||||
```hcl
|
||||
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
|
||||
|
@ -42,9 +43,10 @@ resource "aws_elastic_beanstalk_environment" "tfenvtest" {
|
|||
}
|
||||
```
|
||||
|
||||
A `dynamic` block acts much like a `for` expression, but produces nested blocks
|
||||
instead of a complex typed value. It iterates over a given complex value, and
|
||||
generates a nested block for each element of that complex value.
|
||||
A `dynamic` block acts much like a [`for` expression](for.html), but produces
|
||||
nested blocks instead of a complex typed value. It iterates over a given
|
||||
complex value, and generates a nested block for each element of that complex
|
||||
value.
|
||||
|
||||
- The label of the dynamic block (`"setting"` in the example above) specifies
|
||||
what kind of nested block to generate.
|
||||
|
@ -86,6 +88,56 @@ and
|
|||
[`setproduct`](/docs/configuration/functions/setproduct.html)
|
||||
functions.
|
||||
|
||||
## Multi-level Nested Block Structures
|
||||
|
||||
Some providers define resource types that include multiple levels of blocks
|
||||
nested inside one another. You can generate these nested structures dynamically
|
||||
when necessary by nesting `dynamic` blocks in the `content` portion of other
|
||||
`dynamic` blocks.
|
||||
|
||||
For example, a module might accept a complex data structure like the following:
|
||||
|
||||
```hcl
|
||||
variable "load_balancer_origin_groups" {
|
||||
type = map(object({
|
||||
origins = set(object({
|
||||
hostname = string
|
||||
}))
|
||||
}))
|
||||
}
|
||||
```
|
||||
|
||||
If you were defining a resource whose type expects a block for each origin
|
||||
group and then nested blocks for each origin within a group, you could ask
|
||||
Terraform to generate that dynamically using the following nested `dynamic`
|
||||
blocks:
|
||||
|
||||
```hcl
|
||||
dynamic "origin_group" {
|
||||
for_each = var.load_balancer_origin_groups
|
||||
content {
|
||||
name = origin_group.key
|
||||
|
||||
dynamic "origin" {
|
||||
for_each = origin_group.value.origins
|
||||
content {
|
||||
hostname = origin.value.hostname
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
When using nested `dynamic` blocks it's particularly important to pay attention
|
||||
to the iterator symbol for each block. In the above example,
|
||||
`origin_group.value` refers to the current element of the outer block, while
|
||||
`origin.value` refers to the current element of the inner block.
|
||||
|
||||
If a particular resource type defines nested blocks that have the same type
|
||||
name as one of their parents, you can use the `iterator` argument in each of
|
||||
`dynamic` blocks to choose a different iterator symbol that makes the two
|
||||
easier to distinguish.
|
||||
|
||||
## Best Practices for `dynamic` Blocks
|
||||
|
||||
Overuse of `dynamic` blocks can make configuration hard to read and maintain, so
|
||||
|
@ -93,3 +145,10 @@ we recommend using them only when you need to hide details in order to build a
|
|||
clean user interface for a re-usable module. Always write nested blocks out
|
||||
literally where possible.
|
||||
|
||||
If you find yourself defining most or all of a `resource` block's arguments and
|
||||
nested blocks using directly-corresponding attributes from an input variable
|
||||
then that might suggest that your module is not creating a useful abstraction.
|
||||
It may be better for the calling module to define the resource itself and then
|
||||
pass information about it into your module. For more information on this design
|
||||
tradeoff, see [When to Write a Module](/docs/modules/#when-to-write-a-module)
|
||||
and [Module Composition](/docs/modules/composition.html).
|
||||
|
|
|
@ -10,8 +10,8 @@ another complex type value. Each element in the input value
|
|||
can correspond to either one or zero values in the result, and an arbitrary
|
||||
expression can be used to transform each input element into an output element.
|
||||
|
||||
For example, if `var.list` is a list of strings, then the following expression
|
||||
produces a list of strings with all-uppercase letters:
|
||||
For example, if `var.list` were a list of strings, then the following expression
|
||||
would produce a tuple of strings with all-uppercase letters:
|
||||
|
||||
```hcl
|
||||
[for s in var.list : upper(s)]
|
||||
|
@ -22,10 +22,41 @@ evaluates the expression `upper(s)` with `s` set to each respective element.
|
|||
It then builds a new tuple value with all of the results of executing that
|
||||
expression in the same order.
|
||||
|
||||
## Input Types
|
||||
|
||||
A `for` expression's input (given after the `in` keyword) can be a list,
|
||||
a set, a tuple, a map, or an object.
|
||||
|
||||
The above example showed a `for` expression with only a single temporary
|
||||
symbol `s`, but a `for` expression can optionally declare a pair of temporary
|
||||
symbols in order to use the key or index of each item too:
|
||||
|
||||
```hcl
|
||||
[for k, v in var.map : length(k) + length(v)]
|
||||
```
|
||||
|
||||
For a map or object type, like above, the `k` symbol refers to the key or
|
||||
attribute name of the current element. You can also use the two-symbol form
|
||||
with lists and tuples, in which case the additional symbol is the index
|
||||
of each element starting from zero, which conventionally has the symbol name
|
||||
`i` or `idx` unless it's helpful to choose a more specific name:
|
||||
|
||||
```hcl
|
||||
[for i, v in var.list : "${i} is ${v}"]
|
||||
```
|
||||
|
||||
The index or key symbol is always optional. If you specify only a single
|
||||
symbol after the `for` keyword then that symbol will always represent the
|
||||
_value_ of each element of the input collection.
|
||||
|
||||
## Result Types
|
||||
|
||||
The type of brackets around the `for` expression decides what type of result
|
||||
it produces. The above example uses `[` and `]`, which produces a tuple. If
|
||||
`{` and `}` are used instead, the result is an object, and two result
|
||||
expressions must be provided separated by the `=>` symbol:
|
||||
it produces.
|
||||
|
||||
The above example uses `[` and `]`, which produces a tuple. If you use `{` and
|
||||
`}` instead, the result is an object and you must provide two result
|
||||
expressions that are separated by the `=>` symbol:
|
||||
|
||||
```hcl
|
||||
{for s in var.list : s => upper(s)}
|
||||
|
@ -33,39 +64,146 @@ expressions must be provided separated by the `=>` symbol:
|
|||
|
||||
This expression produces an object whose attributes are the original elements
|
||||
from `var.list` and their corresponding values are the uppercase versions.
|
||||
For example, the resulting value might be as follows:
|
||||
|
||||
```hcl
|
||||
{
|
||||
foo = "FOO"
|
||||
bar = "BAR"
|
||||
baz = "BAZ"
|
||||
}
|
||||
```
|
||||
|
||||
A `for` expression alone can only produce either an object value or a tuple
|
||||
value, but Terraform's automatic type conversion rules mean that you can
|
||||
typically use the results in locations where lists, maps, and sets are expected.
|
||||
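For example (a small sketch, where the child module and its `labels` variable are assumed for illustration), the tuple produced by a `for` expression converts automatically when assigned to an argument that expects a set of strings:

```hcl
variable "names" {
  type = list(string)
}

module "example" {
  source = "./child"

  # Assume the child module declares: variable "labels" { type = set(string) }
  # The tuple result of this for expression is converted to that set.
  labels = [for n in var.names : upper(n)]
}
```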
|
||||
## Filtering Elements
|
||||
|
||||
A `for` expression can also include an optional `if` clause to filter elements
|
||||
from the source collection, which can produce a value with fewer elements than
|
||||
the source:
|
||||
from the source collection, producing a value with fewer elements than
|
||||
the source value:
|
||||
|
||||
```
|
||||
[for s in var.list : upper(s) if s != ""]
|
||||
```
|
||||
|
||||
The source value can also be an object or map value, in which case two
|
||||
temporary variable names can be provided to access the keys and values
|
||||
respectively:
|
||||
One common reason for filtering collections in `for` expressions is to split
|
||||
a single source collection into two separate collections based on some
|
||||
criteria. For example, if the input `var.users` is a map of objects where the
|
||||
objects each have an attribute `is_admin` then you may wish to produce separate
|
||||
maps with admin vs non-admin objects:
|
||||
|
||||
```
|
||||
[for k, v in var.map : length(k) + length(v)]
|
||||
```hcl
|
||||
variable "users" {
|
||||
type = map(object({
|
||||
is_admin = bool
|
||||
}))
|
||||
}
|
||||
|
||||
locals {
|
||||
admin_users = {
|
||||
for name, user in var.users : name => user
|
||||
if user.is_admin
|
||||
}
|
||||
regular_users = {
|
||||
for name, user in var.users : name => user
|
||||
if !user.is_admin
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Finally, if the result type is an object (using `{` and `}` delimiters) then
|
||||
the value result expression can be followed by the `...` symbol to group
|
||||
together results that have a common key:
|
||||
## Element Ordering
|
||||
|
||||
```
|
||||
{for s in var.list : substr(s, 0, 1) => s... if s != ""}
|
||||
Because `for` expressions can convert from unordered types (maps, objects, sets)
|
||||
to ordered types (lists, tuples), Terraform must choose an implied ordering
|
||||
for the elements of an unordered collection.
|
||||
|
||||
For maps and objects, Terraform sorts the elements by key or attribute name,
|
||||
using lexical sorting.
|
||||
|
||||
For sets of strings, Terraform sorts the elements by their value, using
|
||||
lexical sorting.
|
||||
|
||||
For sets of other types, Terraform uses an arbitrary ordering that may change
|
||||
in future versions of Terraform. For that reason, we recommend converting the
|
||||
result of such an expression to itself be a set so that it's clear elsewhere
|
||||
in the configuration that the result is unordered. You can use
|
||||
[the `toset` function](../functions/toset.html)
|
||||
to concisely convert a `for` expression result to be of a set type.
|
||||
|
||||
```hcl
|
||||
toset([for e in var.set : e.example])
|
||||
```
|
||||
|
||||
For expressions are particularly useful when combined with other language
|
||||
features to combine collections together in various ways. For example,
|
||||
the following two patterns are commonly used when constructing map values
|
||||
to use with
|
||||
[the `for_each` meta-argument](/docs/configuration/meta-arguments/for_each.html):
|
||||
## Grouping Results
|
||||
|
||||
* Transform a multi-level nested structure into a flat list by
|
||||
[using nested `for` expressions with the `flatten` function](/docs/configuration/functions/flatten.html#flattening-nested-structures-for-for_each).
|
||||
* Produce an exhaustive list of combinations of elements from two or more
|
||||
collections by
|
||||
[using the `setproduct` function inside a `for` expression](/docs/configuration/functions/setproduct.html#finding-combinations-for-for_each).
|
||||
If the result type is an object (using `{` and `}` delimiters) then normally
|
||||
the given key expression must be unique across all elements in the result,
|
||||
or Terraform will return an error.
|
||||
|
||||
Sometimes the resulting keys are _not_ unique, and so to support that situation
|
||||
Terraform supports a special _grouping mode_ which changes the result to support
|
||||
multiple elements per key.
|
||||
|
||||
To activate grouping mode, add the symbol `...` after the value expression.
|
||||
For example:
|
||||
|
||||
```hcl
|
||||
variable "users" {
|
||||
type = map(object({
|
||||
role = string
|
||||
}))
|
||||
}
|
||||
|
||||
locals {
|
||||
users_by_role = {
|
||||
for name, user in var.users : user.role => name...
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The above represents a situation where a module expects a map describing
|
||||
various users who each have a single "role", where the map keys are usernames.
|
||||
The usernames are guaranteed unique because they are map keys in the input,
|
||||
but many users may all share a single role name.
|
||||
|
||||
The `local.users_by_role` expression inverts the input map so that the keys
|
||||
are the role names and the values are usernames, but the expression is in
|
||||
grouping mode (due to the `...` after `name`) and so the result will be a
|
||||
map of lists of strings, such as the following:
|
||||
|
||||
```hcl
|
||||
{
|
||||
"admin": [
|
||||
"ps",
|
||||
],
|
||||
"maintainer": [
|
||||
"am",
|
||||
"jb",
|
||||
"kl",
|
||||
"ma",
|
||||
],
|
||||
"viewer": [
|
||||
"st",
|
||||
"zq",
|
||||
],
|
||||
}
|
||||
```
|
||||
|
||||
Due to [the element ordering rules](#element-ordering), Terraform will sort
|
||||
the users lexically by username as part of evaluating the `for` expression,
|
||||
and so the usernames associated with each role will be lexically sorted
|
||||
after grouping.
|
||||
|
||||
## Repeated Configuration Blocks
|
||||
|
||||
The `for` expressions mechanism is for constructing collection values from
|
||||
other collection values within expressions, which you can then assign to
|
||||
individual resource arguments that expect complex values.
|
||||
|
||||
Some resource types also define _nested block types_, which typically represent
|
||||
separate objects that belong to the containing resource in some way. You can't
|
||||
dynamically generate nested blocks using `for` expressions, but you _can_
|
||||
generate nested blocks for a resource dynamically using
|
||||
[`dynamic` blocks](dynamic-blocks.html).
|
||||
|
|
|
@ -30,6 +30,11 @@ min(55, 3453, 2)
|
|||
|
||||
A function call expression evaluates to the function's return value.
|
||||
|
||||
## Available Functions
|
||||
|
||||
For a full list of available functions, see
|
||||
[the function reference](/docs/configuration/functions.html).
|
||||
|
||||
## Expanding Function Arguments
|
||||
|
||||
If the arguments to pass to a function are available in a list or tuple value,
|
||||
|
@ -43,8 +48,45 @@ min([55, 2453, 2]...)
|
|||
The expansion symbol is three periods (`...`), not a Unicode ellipsis character
|
||||
(`…`). Expansion is a special syntax that is only available in function calls.
|
||||
|
||||
## Available Functions
|
||||
## When Terraform Calls Functions
|
||||
|
||||
For a full list of available functions, see
|
||||
[the function reference](/docs/configuration/functions.html).
|
||||
Most of Terraform's built-in functions are, in programming language terms,
|
||||
[pure functions](https://en.wikipedia.org/wiki/Pure_function). This means that
|
||||
their result is based only on their arguments and so it doesn't make any
|
||||
practical difference when Terraform would call them.
|
||||
|
||||
However, a small subset of functions interact with outside state and so for
|
||||
those it can be helpful to know when Terraform will call them in relation to
|
||||
other events that occur in a Terraform run.
|
||||
|
||||
The small set of special functions includes
|
||||
[`file`](../functions/file.html),
|
||||
[`templatefile`](../functions/templatefile.html),
|
||||
[`timestamp`](../functions/timestamp.html),
|
||||
and [`uuid`](../functions/uuid.html).
|
||||
If you are not working with these functions then you don't need
|
||||
to read this section, although the information here may still be interesting
|
||||
background information.
|
||||
|
||||
The `file` and `templatefile` functions are intended for reading files that
|
||||
are included as a static part of the configuration and so Terraform will
|
||||
execute these functions as part of initial configuration validation, before
|
||||
taking any other actions with the configuration. That means you cannot use
|
||||
either function to read files that your configuration might generate
|
||||
dynamically on disk as part of the plan or apply steps.
|
||||
|
||||
The `timestamp` function returns a representation of the current system time
|
||||
at the point when Terraform calls it, and the `uuid` function returns a random
|
||||
result which differs on each call. Without any special behavior these would
|
||||
both cause the final configuration during the apply step not to match the
|
||||
actions shown in the plan, which violates the Terraform execution model.
|
||||
|
||||
For that reason, Terraform arranges for both of those functions to produce
|
||||
[unknown value](references.html#values-not-yet-known) results during the
|
||||
plan step, with the real result being decided only during the apply step.
|
||||
For `timestamp` in particular, this means that the recorded time will be
|
||||
the instant when Terraform began applying the change, rather than when
|
||||
Terraform _planned_ the change.
|
||||
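For example (a minimal sketch, not taken from this page), a value built from `timestamp()` appears as `(known after apply)` in the plan and is resolved only while applying:

```hcl
locals {
  # Unknown during planning; Terraform records the actual time only at the
  # moment it applies the change.
  deployed_at = timestamp()
}

output "deployed_at" {
  value = local.deployed_at
}
```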
|
||||
For more details on the behavior of these functions, refer to their own
|
||||
documentation pages.
|
||||
|
|
|
@ -30,15 +30,15 @@ in the following order of operations:
|
|||
1. `&&`
|
||||
1. `||`
|
||||
|
||||
Parentheses can be used to override the default order of operations. Without
|
||||
parentheses, higher levels are evaluated first, so `1 + 2 * 3` is interpreted
|
||||
as `1 + (2 * 3)` and _not_ as `(1 + 2) * 3`.
|
||||
Use parentheses to override the default order of operations. Without
|
||||
parentheses, higher levels will be evaluated first, so Terraform will interpret
|
||||
`1 + 2 * 3` as `1 + (2 * 3)` and _not_ as `(1 + 2) * 3`.
|
||||
|
||||
The different operators can be gathered into a few different groups with
|
||||
similar behavior, as described below. Each group of operators expects its
|
||||
given values to be of a particular type. Terraform will attempt to convert
|
||||
values to the required type automatically, or will produce an error message
|
||||
if this automatic conversion is not possible.
|
||||
if automatic conversion is impossible.
|
||||
|
||||
## Arithmetic Operators
|
||||
|
||||
|
@ -53,6 +53,11 @@ as results:
|
|||
generally useful only when used with whole numbers.
|
||||
* `-a` returns the result of multiplying `a` by `-1`.
|
||||
|
||||
Terraform supports some other less-common numeric operations as
|
||||
[functions](function-calls.html). For example, you can calculate exponents
|
||||
using
|
||||
[the `pow` function](../functions/pow.html).
|
||||
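For instance, a small sketch (the output name is arbitrary):

```hcl
output "kibibyte" {
  # pow(base, exponent): 2 raised to the power of 10 is 1024.
  value = pow(2, 10)
}
```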
|
||||
## Equality Operators
|
||||
|
||||
The equality operators both take two values of any type and produce boolean
|
||||
|
@ -62,6 +67,18 @@ values as results.
|
|||
value, or `false` otherwise.
|
||||
* `a != b` is the opposite of `a == b`.
|
||||
|
||||
Because the equality operators require both arguments to be of exactly the
|
||||
same type in order to decide equality, we recommend using these operators only
|
||||
with values of primitive types or using explicit type conversion functions
|
||||
to indicate which type you are intending to use for comparison.
|
||||
|
||||
Comparisons between structural types may produce surprising results if you
|
||||
are not sure about the types of each of the arguments. For example,
|
||||
`var.list == []` may seem like it would return `true` if `var.list` were an
|
||||
empty list, but `[]` actually builds a value of type `tuple([])` and so the
|
||||
two values can never match. In this situation it's often clearer to write
|
||||
`length(var.list) == 0` instead.
|
||||
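A brief sketch of that recommendation, assuming a `var.list` input variable:

```hcl
locals {
  # Clearer than comparing against a literal empty collection such as [].
  list_is_empty = length(var.list) == 0
}
```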
|
||||
## Comparison Operators
|
||||
|
||||
The comparison operators all expect number values and produce boolean values
|
||||
|
@ -80,3 +97,7 @@ The logical operators all expect bool values and produce bool values as results.
|
|||
* `a || b` returns `true` if either `a` or `b` is `true`, or `false` if both are `false`.
|
||||
* `a && b` returns `true` if both `a` and `b` are `true`, or `false` if either one is `false`.
|
||||
* `!a` returns `true` if `a` is `false`, and `false` if `a` is `true`.
|
||||
|
||||
Terraform does not have an operator for the "exclusive OR" operation. If you
|
||||
know that both operands are boolean values then exclusive OR is equivalent
|
||||
to the `!=` ("not equal") operator.
|
||||
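For example (the two variable names are hypothetical boolean inputs):

```hcl
locals {
  # Exclusive OR: true when exactly one of the two flags is true.
  exactly_one_enabled = var.enable_a != var.enable_b
}
```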
|
|
|
@ -60,24 +60,58 @@ For more information about how to use resource references, see
|
|||
|
||||
`var.<NAME>` is the value of the [input variable](/docs/configuration/variables.html) of the given name.
|
||||
|
||||
If the variable has a type constraint (`type` argument) as part of its
|
||||
declaration, Terraform will automatically convert the caller's given value
|
||||
to conform to the type constraint.
|
||||
|
||||
For that reason, you can safely assume that a reference using `var.` will
|
||||
always produce a value that conforms to the type constraint, even if the caller
|
||||
provided a value of a different type that was automatically converted.
|
||||
|
||||
In particular, note that if you define a variable as being of an object type
|
||||
with particular attributes then only _those specific attributes_ will be
|
||||
available in expressions elsewhere in the module, even if the caller actually
|
||||
passed in a value with additional attributes. You must define in the type
|
||||
constraint all of the attributes you intend to use elsewhere in your module.
|
||||
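As a brief sketch of that rule (the variable and attribute names are invented for illustration), only the attributes declared in the type constraint are usable inside the module, regardless of what else the caller passes:

```hcl
variable "network" {
  type = object({
    name       = string
    cidr_block = string
  })
}

locals {
  # Only name and cidr_block exist on var.network here; any extra attributes
  # in the caller's value were dropped by the automatic conversion.
  network_description = "${var.network.name} (${var.network.cidr_block})"
}
```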
|
||||
### Local Values
|
||||
|
||||
`local.<NAME>` is the value of the [local value](/docs/configuration/locals.html) of the given name.
|
||||
|
||||
Local values can refer to other local values, even within the same `locals`
|
||||
block, as long as you don't introduce circular dependencies.
|
||||
|
||||
### Child Module Outputs
|
||||
|
||||
* `module.<MODULE NAME>.<OUTPUT NAME>` is the value of the specified
|
||||
[output value](/docs/configuration/outputs.html) from a
|
||||
[child module](/docs/configuration/blocks/modules/index.html) called by the
|
||||
current module.
|
||||
`module.<MODULE NAME>` is a value representing the results of
|
||||
[a `module` block](../blocks/modules/).
|
||||
|
||||
If the corresponding `module` block has neither `count` nor `for_each`
|
||||
set then the value will be an object with one attribute for each output value
|
||||
defined in the child module. To access one of the module's
|
||||
[output values](../outputs.html), use `module.<MODULE NAME>.<OUTPUT NAME>`.
|
||||
|
||||
If the corresponding `module` uses `for_each` then the value will be a map
|
||||
of objects whose keys correspond with the keys in the `for_each` expression,
|
||||
and whose values are each objects with one attribute for each output value
|
||||
defined in the child module, each representing one module instance.
|
||||
|
||||
If the corresponding module uses `count` then the result is similar to that for
|
||||
`for_each` except that the value is a _list_ with the requested number of
|
||||
elements, each one representing one module instance.
|
||||
|
||||
### Data Sources
|
||||
|
||||
* `data.<DATA TYPE>.<NAME>` is an object representing a
|
||||
[data resource](/docs/configuration/data-sources.html) of the given data
|
||||
source type and name. If the resource has the `count` argument set, the value
|
||||
is a list of objects representing its instances. If the resource has the `for_each`
|
||||
argument set, the value is a map of objects representing its instances.
|
||||
`data.<DATA TYPE>.<NAME>` is an object representing a
|
||||
[data resource](/docs/configuration/data-sources.html) of the given data
|
||||
source type and name. If the resource has the `count` argument set, the value
|
||||
is a list of objects representing its instances. If the resource has the `for_each`
|
||||
argument set, the value is a map of objects representing its instances.
|
||||
|
||||
For more information, see
|
||||
[References to Resource Attributes](#references-to-resource-attributes), which
|
||||
also applies to data resources aside from the addition of the `data.` prefix
|
||||
to mark the reference as for a data resource.
|
||||
|
||||
### Filesystem and Workspace Info
|
||||
|
||||
|
@ -91,6 +125,36 @@ For more information about how to use resource references, see
|
|||
* `terraform.workspace` is the name of the currently selected
|
||||
[workspace](/docs/state/workspaces.html).
|
||||
|
||||
Use the values in this section carefully, because they include information
|
||||
about the context in which a configuration is being applied and so may
|
||||
inadvertently hurt the portability or composability of a module.
|
||||
|
||||
For example, if you use `path.cwd` directly to populate a path into a resource
|
||||
argument then later applying the same configuration from a different directory
|
||||
or on a different computer with a different directory structure will cause
|
||||
the provider to consider the change of path to be a change to be applied, even
|
||||
if the path still refers to the same file.
|
||||
|
||||
Similarly, if you use any of these values as a form of namespacing in a shared
|
||||
module, such as using `terraform.workspace` as a prefix for globally-unique
|
||||
object names, it may not be possible to call your module more than once in
|
||||
the same configuration.
|
||||
|
||||
Aside from `path.module`, we recommend using the values in this section only
|
||||
in the root module of your configuration. If you are writing a shared module
|
||||
which needs a prefix to help create unique names, define an input variable
|
||||
for your module and allow the calling module to define the prefix. The
|
||||
calling module can then use `terraform.workspace` to define it if appropriate,
|
||||
or some other value if not:
|
||||
|
||||
```hcl
|
||||
module "example" {
|
||||
# ...
|
||||
|
||||
name_prefix = "app-${terraform.workspace}"
|
||||
}
|
||||
```
|
||||
|
||||
### Block-Local Values
|
||||
|
||||
Within the bodies of certain blocks, or in some other specific contexts,
|
||||
|
@ -110,6 +174,12 @@ _temporary variables_ in their documentation. These are not [input
|
|||
variables](/docs/configuration/variables.html); they are just arbitrary names
|
||||
that temporarily represent a value.
|
||||
|
||||
The names in this section relate to top-level configuration blocks only.
|
||||
If you use [`dynamic` blocks](dynamic-blocks.html) to dynamically generate
|
||||
resource-type-specific _nested_ blocks within `resource` and `data` blocks then
|
||||
you'll refer to the key and value of each element differently. See the
|
||||
`dynamic` blocks documentation for details.
|
||||
|
||||
## Named Values and Dependencies
|
||||
|
||||
Constructs like resources and module calls often use references to named values
|
||||
|
|
|
@ -39,34 +39,75 @@ The above expression is equivalent to the following `for` expression:
|
|||
[for o in var.list : o.interfaces[0].name]
|
||||
```
|
||||
|
||||
Splat expressions are for lists only (and thus cannot be used [to reference resources
|
||||
created with `for_each`](/docs/configuration/meta-arguments/for_each.html#referring-to-instances),
|
||||
which are represented as maps in Terraform). However, if a splat expression is applied
|
||||
to a value that is _not_ a list or tuple then the value is automatically wrapped in
|
||||
a single-element list before processing.
|
||||
## Splat Expressions with Maps
|
||||
|
||||
For example, `var.single_object[*].id` is equivalent to `[var.single_object][*].id`,
|
||||
or effectively `[var.single_object.id]`. This behavior is not interesting in most cases,
|
||||
but it is particularly useful when referring to resources that may or may
|
||||
not have `count` set, and thus may or may not produce a tuple value:
|
||||
The splat expression patterns shown above apply only to lists, sets, and
|
||||
tuples. To get a similar result with a map or object value you must use
|
||||
[`for` expressions](for.html).
|
||||
|
||||
```hcl
|
||||
aws_instance.example[*].id
|
||||
Resources that use the `for_each` argument will appear in expressions as a map
|
||||
of objects, so you can't use splat expressions with those resources.
|
||||
For more information, see
|
||||
[Referring to Resource Instances](/docs/configuration/meta-arguments/for_each.html#referring-to-instances).
|
||||
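As a sketch of the alternative (assuming a resource named `aws_instance.example` that uses `for_each`), a `for` expression plays the role that a splat expression would play for a list:

```hcl
# aws_instance.example is a map of objects when for_each is used, so collect
# the ids with a for expression rather than a splat expression.
[for instance in aws_instance.example : instance.id]
```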
|
||||
## Single Values as Lists
|
||||
|
||||
Splat expressions have a special behavior when you apply them to a value that
|
||||
isn't a list, set, or tuple.
|
||||
|
||||
If the value is anything other than a null value then the splat expression will
|
||||
transform it into a single-element list, or more accurately a single-element
|
||||
tuple value. If the value is _null_ then the splat expression will return an
|
||||
empty tuple.
|
||||
|
||||
This special behavior can be useful for modules that accept optional input
|
||||
variables whose default value is `null` to represent the absence of any value,
|
||||
to adapt the variable value to work with other Terraform language features that
|
||||
are designed to work with collections. For example:
|
||||
|
||||
```
|
||||
variable "website" {
|
||||
type = object({
|
||||
index_document = string
|
||||
error_document = string
|
||||
})
|
||||
default = null
|
||||
}
|
||||
|
||||
resource "aws_s3_bucket" "example" {
|
||||
# ...
|
||||
|
||||
dynamic "website" {
|
||||
for_each = var.website[*]
|
||||
content {
|
||||
index_document = website.value.index_document
|
||||
error_document = website.value.error_document
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The above will produce a list of ids whether `aws_instance.example` has
|
||||
`count` set or not, avoiding the need to revise various other expressions
|
||||
in the configuration when a particular resource switches to and from
|
||||
having `count` set.
|
||||
The above example uses a [`dynamic` block](dynamic-blocks.html), which
|
||||
generates zero or more nested blocks based on a collection value. The input
|
||||
variable `var.website` is defined as a single object that might be null,
|
||||
so the `dynamic` block's `for_each` expression uses `[*]` to ensure that
|
||||
there will be one block if the module caller sets the website argument, or
|
||||
zero blocks if the caller leaves it set to null.
|
||||
|
||||
This special behavior of splat expressions is not obvious to an unfamiliar
|
||||
reader, so we recommend using it only in `for_each` arguments and similar
|
||||
situations where the context implies working with a collection. Otherwise,
|
||||
the meaning of the expression may be unclear to future readers.
|
||||
|
||||
## Legacy (Attribute-only) Splat Expressions
|
||||
|
||||
An older variant of the splat expression is available for compatibility with
|
||||
code written in older versions of the Terraform language. This is a less useful
|
||||
version of the splat expression, and should be avoided in new configurations.
|
||||
Earlier versions of the Terraform language had a slightly different version
|
||||
of splat expressions, which Terraform continues to support for backward
|
||||
compatibility. This older variant is less useful than the modern form described
|
||||
above, and so we recommend against using it in new configurations.
|
||||
|
||||
An "attribute-only" splat expression is indicated by the sequence `.*` (instead
|
||||
of `[*]`):
|
||||
The legacy "attribute-only" splat expressions use the sequence `.*`, instead of
|
||||
`[*]`:
|
||||
|
||||
```
|
||||
var.list.*.interfaces[0].name
|
||||
|
@ -81,4 +122,7 @@ This form has a subtly different behavior, equivalent to the following
|
|||
|
||||
Notice that with the attribute-only splat expression the index operation
|
||||
`[0]` is applied to the result of the iteration, rather than as part of
|
||||
the iteration itself.
|
||||
the iteration itself. Only the attribute lookups apply to each element of
|
||||
the input. This limitation confused some users of older versions of
|
||||
Terraform and so we recommend always using the new-style splat expressions,
|
||||
with `[*]`, to get the more consistent behavior.
|
||||
|
|
|
@ -72,6 +72,20 @@ In the above example, `EOT` is the identifier selected. Any identifier is
|
|||
allowed, but conventionally this identifier is in all-uppercase and begins with
|
||||
`EO`, meaning "end of". `EOT` in this case stands for "end of text".
|
||||
|
||||
### Generating JSON or YAML
|
||||
|
||||
Don't use "heredoc" strings to generate JSON or YAML. Instead, use
|
||||
[the `jsonencode` function](../functions/jsonencode.html) or
|
||||
[the `yamlencode` function](../functions/yamlencode.html) so that Terraform
|
||||
can be responsible for guaranteeing valid JSON or YAML syntax.
|
||||
|
||||
```hcl
|
||||
example = jsonencode({
|
||||
a = 1
|
||||
b = "hello"
|
||||
})
|
||||
```
|
||||
|
||||
### Indented Heredocs
|
||||
|
||||
The standard heredoc form (shown above) treats all space characters as literal
|
||||
|
|
|
@ -208,7 +208,7 @@ resource "aws_subnet" "example" {
|
|||
|
||||
vpc_id = each.value.network_id
|
||||
availability_zone = each.value.subnet_key
|
||||
cidr_block = each.value_cidr_block
|
||||
cidr_block = each.value.cidr_block
|
||||
}
|
||||
```
|
||||
|
||||
|
|
|
@ -12,7 +12,7 @@ The `chef` provisioner installs, configures and runs the Chef Client on a remote
|
|||
resource. The `chef` provisioner supports both `ssh` and `winrm` type
|
||||
[connections](/docs/provisioners/connection.html).
|
||||
|
||||
!> **Note:** This provisioner was removed in the 0.14.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
!> **Note:** This provisioner was removed in the 0.15.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
|
||||
## Requirements
|
||||
|
||||
|
|
|
@ -10,7 +10,7 @@ description: |-
|
|||
|
||||
The `habitat` provisioner installs the [Habitat](https://habitat.sh) supervisor and loads configured services. This provisioner only supports Linux targets using the `ssh` connection type at this time.
|
||||
|
||||
!> **Note:** This provisioner was removed in the 0.14.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
!> **Note:** This provisioner was removed in the 0.15.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
|
||||
## Requirements
|
||||
|
||||
|
|
|
@ -12,7 +12,7 @@ The `puppet` provisioner installs, configures and runs the Puppet agent on a
|
|||
remote resource. The `puppet` provisioner supports both `ssh` and `winrm` type
|
||||
[connections](/docs/provisioners/connection.html).
|
||||
|
||||
!> **Note:** This provisioner was removed in the 0.14.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
!> **Note:** This provisioner was removed in the 0.15.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
|
||||
## Requirements
|
||||
|
||||
|
|
|
@ -13,7 +13,7 @@ Type: `salt-masterless`
|
|||
The `salt-masterless` Terraform provisioner provisions machines built by Terraform
|
||||
using [Salt](http://saltstack.com/) states, without connecting to a Salt master. The `salt-masterless` provisioner supports `ssh` [connections](/docs/provisioners/connection.html).
|
||||
|
||||
!> **Note:** This provisioner was removed in the 0.14.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
!> **Note:** This provisioner was removed in the 0.15.0 version of Terraform after being deprecated as of Terraform 0.13.4. For most common situations there are better alternatives to using provisioners. For more information, see [the main Provisioners page](./).
|
||||
|
||||
## Requirements
|
||||
|
||||
|
|
|
@ -26,7 +26,8 @@ Remote state has been overhauled to be easier and safer to configure and use.
|
|||
you'll be prompted to migrate to the new remote backend system.
|
||||
|
||||
An in-depth guide for migrating to the new backend system
|
||||
[is available here](/docs/backends/legacy-0-8.html). This includes
|
||||
[is available here](https://github.com/hashicorp/terraform/blob/v0.9.11/website/source/docs/backends/legacy-0-8.html.md).
|
||||
This includes
|
||||
backing up your existing remote state and also rolling back if necessary.
|
||||
|
||||
The only non-backwards compatible change is in the CLI: the existing
|
||||
|
|