Eval() Refactor: Plan Edition (#27177)

* terraform: refactor EvalRefresh

EvalRefresh.Eval(ctx) is now Refresh(evalRefreshRequest, ctx). While none
of the inner logic of the function has changed, it now returns a
states.ResourceInstanceObject instead of updating a pointer. This is a
human-centric change, meant to make the logic flow (in the calling
functions) easier to follow.
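
To illustrate the shape of that change, here is a generic, self-contained sketch (not the actual Terraform types): an eval node that reports its result through an output pointer field versus a function that simply returns the new object.

```go
package main

import "fmt"

// Old shape (illustrative only): the result travels through a pointer field
// that the caller wires up before calling Eval, so you have to read the
// struct definition to see where the output goes.
type evalRefreshOldStyle struct {
	Input  string
	Output *string
}

func (n *evalRefreshOldStyle) Eval() error {
	*n.Output = "refreshed: " + n.Input
	return nil
}

// New shape (illustrative only): the refreshed object is the return value,
// so the data flow is visible at the call site.
func refresh(input string) (string, error) {
	return "refreshed: " + input, nil
}

func main() {
	var out string
	_ = (&evalRefreshOldStyle{Input: "aws_instance.foo", Output: &out}).Eval()
	fmt.Println(out) // refreshed: aws_instance.foo

	newObj, _ := refresh("aws_instance.foo")
	fmt.Println(newObj) // refreshed: aws_instance.foo
}
```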

* terraform: refactor EvalReadDataPlan and Apply

This is a very minor refactor that removes the (currently) redundant
types EvalReadDataPlan and EvalReadDataApply in favor of using
EvalReadData with Plan and Apply functions.

This is in effect an aesthetic change; since there is no longer an
Eval() abstraction we can rename functions to make their functionality
as obvious as possible.
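
A rough sketch of the resulting shape, assuming the shared evalReadData struct shown later in this diff (the bodies are elided; only the naming is the point):

```go
// Sketch only: rather than wrapper types whose only purpose is to supply an
// Eval() method, the shared evalReadData value gains phase-named entry points.
func (n *evalReadData) Plan(ctx EvalContext) tfdiags.Diagnostics {
	// ... body corresponding to the old evalReadDataPlan.Eval ...
	return nil
}

func (n *evalReadData) Apply(ctx EvalContext) tfdiags.Diagnostics {
	// ... body corresponding to the old evalReadDataApply.Eval ...
	return nil
}
```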

* terraform: refactor EvalCheckPlannedChange

EvalCheckPlannedChange was only used by NodeApplyableResourceInstance
and has been refactored into a method on that type called
checkPlannedChange.
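
The new method itself is not included in this excerpt, so the following is only a sketch of the likely signature; the node already carries its own address and resolved provider, which leaves the two changes and the schema as parameters (parameter names are assumptions):

```go
// Sketch only: checkPlannedChange keeps the comparison logic of the old
// EvalCheckPlannedChange.Eval, but gets Addr and ProviderAddr from the node.
func (n *NodeApplyableResourceInstance) checkPlannedChange(ctx EvalContext,
	plannedChange, actualChange *plans.ResourceInstanceChange,
	providerSchema *ProviderSchema) tfdiags.Diagnostics {
	// ... same action and value consistency checks as before ...
	return nil
}
```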

* terraform: refactor EvalDiff.Eval

EvalDiff.Eval is now a method on NodeAbstractResource called Plan
which takes as a parameter an EvalPlanRequest. Instead of updating
pointers it returns a new plan and state.

I removed as many redundant fields from the original EvalDiff struct as
possible.
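
The new plan method is also not part of this excerpt, so this is only a sketch of the return-value shape the message describes (EvalPlanRequest comes from the commit message; the parameter name is an assumption). Note that a later commit in this PR moves the method onto NodeAbstractResourceInstance and unexports it.

```go
// Sketch only: instead of writing through OutputChange/OutputState pointers,
// the plan step hands both the proposed change and the planned state back to
// its caller.
func (n *NodeAbstractResourceInstance) plan(ctx EvalContext, req EvalPlanRequest) (
	*plans.ResourceInstanceChange, *states.ResourceInstanceObject, tfdiags.Diagnostics) {
	// ... body equivalent to the old EvalDiff.Eval, minus the redundant fields ...
	return nil, nil, nil
}
```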

* terraform: refactor EvalReduceDiff

EvalReduceDiff is now reducePlan, a regular function (rather than a method)
that returns a value.
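
Based on the old EvalReduceDiff.Eval body shown later in this diff, the standalone version looks roughly like this (the parameter order is an assumption):

```go
// reducePlan sketch: the old Eval body with the pointer indirection removed.
func reducePlan(addr addrs.ResourceInstance, in *plans.ResourceInstanceChange, destroy bool) *plans.ResourceInstanceChange {
	out := in.Simplify(destroy)
	if out.Action != in.Action {
		if destroy {
			log.Printf("[TRACE] reducePlan: %s change simplified from %s to %s for destroy node", addr, in.Action, out.Action)
		} else {
			log.Printf("[TRACE] reducePlan: %s change simplified from %s to %s for apply node", addr, in.Action, out.Action)
		}
	}
	return out
}
```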

* terraform: refactor EvalDiffDestroy

EvalDiffDestroy.Eval is now NodeAbstractResourceInstance.PlanDestroy,
which takes ctx, state, and an optional DeposedKey and returns a change.
I've removed the state return value since it was only ever returning a
nil state.
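
For reference, the replacement's signature as it appears in the added code further down in this diff (the method ends up unexported as planDestroy):

```go
// planDestroy returns a plain destroy diff; the full body appears later in this diff.
func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext,
	currentState *states.ResourceInstanceObject,
	deposedKey states.DeposedKey) (*plans.ResourceInstanceChange, tfdiags.Diagnostics) {
	// ... see the added code below ...
	return nil, nil
}
```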

* terraform: refactor EvalWriteDiff

EvalWriteDiff.Eval is now NodeAbstractResourceInstance.WriteChange.
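
Its replacement, as it appears in the added code later in this diff (unexported to writeChange by a later commit in this PR):

```go
// writeChange saves a planned change for an instance object into the set of
// global planned changes; the full body appears later in this diff.
func (n *NodeAbstractResourceInstance) writeChange(ctx EvalContext,
	change *plans.ResourceInstanceChange, deposedKey states.DeposedKey) error {
	// ... see the added code below ...
	return nil
}
```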

* rename files to something more logical

* terraform: refresh refactor, continued!

I had originally made Refresh a stand-alone function since it was
(obnoxiously) called from a graphNodeImportStateSub, but after some
(greatly appreciated) prompting in the PR I instead made it a method on
the NodeAbstractResourceInstance, in keeping with the other refactored
eval nodes, and then built a NodeAbstractResourceInstance inside import.

Since I did that I could also remove my duplicated 'writeState' code
inside graphNodeImportStateSub and use n.writeResourceInstanceState, so
double thanks!
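
The resulting method signature, as shown in the added code at the end of this excerpt:

```go
// refresh does a refresh for a resource, returning the refreshed object
// instead of writing through an output pointer; full body later in this diff.
func (n *NodeAbstractResourceInstance) refresh(ctx EvalContext,
	state *states.ResourceInstanceObject) (*states.ResourceInstanceObject, tfdiags.Diagnostics) {
	// ... see the added code below ...
	return state, nil
}
```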

* unexport eval methods

* re-refactor Plan, it made more sense on NodeAbstractResourceInstance. Sorry

* Remove uninformative `Eval`s from EvalReadData, consolidate to a single
file, and rename file to match function names.

* manual rebase
Kristin Laemmert 2020-12-08 08:50:30 -05:00 committed by GitHub
parent 776b33db32
commit e7aaf9e39f
18 changed files with 1468 additions and 1819 deletions


@@ -1,940 +0,0 @@
package terraform
import (
"fmt"
"log"
"reflect"
"strings"
"github.com/hashicorp/hcl/v2"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/plans/objchange"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
// EvalCheckPlannedChange is an EvalNode implementation that produces errors
// if the _actual_ expected value is not compatible with what was recorded
// in the plan.
//
// Errors here are most often indicative of a bug in the provider, so our
// error messages will report with that in mind. It's also possible that
// there's a bug in Terraform Core's own "proposed new value" code in
// EvalDiff.
type EvalCheckPlannedChange struct {
Addr addrs.ResourceInstance
ProviderAddr addrs.AbsProviderConfig
ProviderSchema **ProviderSchema
// We take ResourceInstanceChange objects here just because that's what's
// convenient to pass in from the evaltree implementation, but we really
// only look at the "After" value of each change.
Planned, Actual **plans.ResourceInstanceChange
}
func (n *EvalCheckPlannedChange) Eval(ctx EvalContext) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
providerSchema := *n.ProviderSchema
plannedChange := *n.Planned
actualChange := *n.Actual
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support %q", n.Addr.Resource.Type))
return diags
}
absAddr := n.Addr.Absolute(ctx.Path())
log.Printf("[TRACE] EvalCheckPlannedChange: Verifying that actual change (action %s) matches planned change (action %s)", actualChange.Action, plannedChange.Action)
if plannedChange.Action != actualChange.Action {
switch {
case plannedChange.Action == plans.Update && actualChange.Action == plans.NoOp:
// It's okay for an update to become a NoOp once we've filled in
// all of the unknown values, since the final values might actually
// match what was there before after all.
log.Printf("[DEBUG] After incorporating new values learned so far during apply, %s change has become NoOp", absAddr)
case (plannedChange.Action == plans.CreateThenDelete && actualChange.Action == plans.DeleteThenCreate) ||
(plannedChange.Action == plans.DeleteThenCreate && actualChange.Action == plans.CreateThenDelete):
// If the order of replacement changed, then that is a bug in terraform
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Terraform produced inconsistent final plan",
fmt.Sprintf(
"When expanding the plan for %s to include new values learned so far during apply, the planned action changed from %s to %s.\n\nThis is a bug in Terraform and should be reported.",
absAddr, plannedChange.Action, actualChange.Action,
),
))
default:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced inconsistent final plan",
fmt.Sprintf(
"When expanding the plan for %s to include new values learned so far during apply, provider %q changed the planned action from %s to %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
absAddr, n.ProviderAddr.Provider.String(),
plannedChange.Action, actualChange.Action,
),
))
}
}
errs := objchange.AssertObjectCompatible(schema, plannedChange.After, actualChange.After)
for _, err := range errs {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced inconsistent final plan",
fmt.Sprintf(
"When expanding the plan for %s to include new values learned so far during apply, provider %q produced an invalid new value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
absAddr, n.ProviderAddr.Provider.String(), tfdiags.FormatError(err),
),
))
}
return diags
}
// EvalDiff is an EvalNode implementation that detects changes for a given
// resource instance.
type EvalDiff struct {
Addr addrs.ResourceInstance
Config *configs.Resource
Provider *providers.Interface
ProviderAddr addrs.AbsProviderConfig
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
ProviderSchema **ProviderSchema
State **states.ResourceInstanceObject
PreviousDiff **plans.ResourceInstanceChange
// CreateBeforeDestroy is set if either the resource's own config sets
// create_before_destroy explicitly or if dependencies have forced the
// resource to be handled as create_before_destroy in order to avoid
// a dependency cycle.
CreateBeforeDestroy bool
OutputChange **plans.ResourceInstanceChange
OutputState **states.ResourceInstanceObject
Stub bool
}
// TODO: test
func (n *EvalDiff) Eval(ctx EvalContext) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
state := *n.State
config := *n.Config
provider := *n.Provider
providerSchema := *n.ProviderSchema
createBeforeDestroy := n.CreateBeforeDestroy
if n.PreviousDiff != nil {
// If we already planned the action, we stick to that plan
createBeforeDestroy = (*n.PreviousDiff).Action == plans.CreateThenDelete
}
if providerSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema is unavailable for %s", n.Addr))
return diags
}
if n.ProviderAddr.Provider.Type == "" {
panic(fmt.Sprintf("EvalDiff for %s does not have ProviderAddr set", n.Addr.Absolute(ctx.Path())))
}
// Evaluate the configuration
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
return diags
}
forEach, _ := evaluateForEachExpression(n.Config.ForEach, ctx)
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
origConfigVal, _, configDiags := ctx.EvaluateBlock(config.Config, schema, nil, keyData)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
if n.ProviderMetas != nil {
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
// if the provider doesn't support this feature, throw an error
if (*n.ProviderSchema).ProviderMeta == nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
Subject: &m.ProviderRange,
})
} else {
var configDiags tfdiags.Diagnostics
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
}
}
}
absAddr := n.Addr.Absolute(ctx.Path())
var priorVal cty.Value
var priorValTainted cty.Value
var priorPrivate []byte
if state != nil {
if state.Status != states.ObjectTainted {
priorVal = state.Value
priorPrivate = state.Private
} else {
// If the prior state is tainted then we'll proceed below like
// we're creating an entirely new object, but then turn it into
// a synthetic "Replace" change at the end, creating the same
// result as if the provider had marked at least one argument
// change as "requires replacement".
priorValTainted = state.Value
priorVal = cty.NullVal(schema.ImpliedType())
}
} else {
priorVal = cty.NullVal(schema.ImpliedType())
}
// Create an unmarked version of our config val and our prior val.
// Store the paths for the config val to re-mark after
// we've sent things over the wire.
unmarkedConfigVal, unmarkedPaths := origConfigVal.UnmarkDeepWithPaths()
unmarkedPriorVal, priorPaths := priorVal.UnmarkDeepWithPaths()
log.Printf("[TRACE] Re-validating config for %q", n.Addr.Absolute(ctx.Path()))
// Allow the provider to validate the final set of values.
// The config was statically validated early on, but there may have been
// unknown values which the provider could not validate at the time.
// TODO: It would be more correct to validate the config after
// ignore_changes has been applied, but the current implementation cannot
// exclude computed-only attributes when given the `all` option.
validateResp := provider.ValidateResourceTypeConfig(
providers.ValidateResourceTypeConfigRequest{
TypeName: n.Addr.Resource.Type,
Config: unmarkedConfigVal,
},
)
if validateResp.Diagnostics.HasErrors() {
diags = diags.Append(validateResp.Diagnostics.InConfigBody(config.Config))
return diags
}
// ignore_changes is meant to only apply to the configuration, so it must
// be applied before we generate a plan. This ensures the config used for
// the proposed value, the proposed value itself, and the config presented
// to the provider in the PlanResourceChange request all agree on the
// starting values.
configValIgnored, ignoreChangeDiags := n.processIgnoreChanges(unmarkedPriorVal, unmarkedConfigVal)
diags = diags.Append(ignoreChangeDiags)
if ignoreChangeDiags.HasErrors() {
return diags
}
proposedNewVal := objchange.ProposedNewObject(schema, unmarkedPriorVal, configValIgnored)
// Call pre-diff hook
if !n.Stub {
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreDiff(absAddr, states.CurrentGen, priorVal, proposedNewVal)
}))
if diags.HasErrors() {
return diags
}
}
resp := provider.PlanResourceChange(providers.PlanResourceChangeRequest{
TypeName: n.Addr.Resource.Type,
Config: configValIgnored,
PriorState: unmarkedPriorVal,
ProposedNewState: proposedNewVal,
PriorPrivate: priorPrivate,
ProviderMeta: metaConfigVal,
})
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
if diags.HasErrors() {
return diags
}
plannedNewVal := resp.PlannedState
plannedPrivate := resp.PlannedPrivate
if plannedNewVal == cty.NilVal {
// Should never happen. Since real-world providers return via RPC a nil
// is always a bug in the client-side stub. This is more likely caused
// by an incompletely-configured mock provider in tests, though.
panic(fmt.Sprintf("PlanResourceChange of %s produced nil value", absAddr.String()))
}
// We allow the planned new value to disagree with configuration _values_
// here, since that allows the provider to do special logic like a
// DiffSuppressFunc, but we still require that the provider produces
// a value whose type conforms to the schema.
for _, err := range plannedNewVal.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
),
))
}
if diags.HasErrors() {
return diags
}
if errs := objchange.AssertPlanValid(schema, unmarkedPriorVal, configValIgnored, plannedNewVal); len(errs) > 0 {
if resp.LegacyTypeSystem {
// The shimming of the old type system in the legacy SDK is not precise
// enough to pass this consistency check, so we'll give it a pass here,
// but we will generate a warning about it so that we are more likely
// to notice in the logs if an inconsistency beyond the type system
// leads to a downstream provider failure.
var buf strings.Builder
fmt.Fprintf(&buf,
"[WARN] Provider %q produced an invalid plan for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:",
n.ProviderAddr.Provider.String(), absAddr,
)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
} else {
for _, err := range errs {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
),
))
}
return diags
}
}
if resp.LegacyTypeSystem {
// Because we allow legacy providers to depart from the contract and
// return changes to non-computed values, the plan response may have
// altered values that were already suppressed with ignore_changes.
// A prime example of this is where providers attempt to obfuscate
// config data by turning the config value into a hash and storing the
// hash value in the state. There are enough cases of this in existing
// providers that we must accommodate the behavior for now, so for
// ignore_changes to work at all on these values, we will revert the
// ignored values once more.
plannedNewVal, ignoreChangeDiags = n.processIgnoreChanges(unmarkedPriorVal, plannedNewVal)
diags = diags.Append(ignoreChangeDiags)
if ignoreChangeDiags.HasErrors() {
return diags
}
}
// Add the marks back to the planned new value -- this must happen after ignore changes
// have been processed
unmarkedPlannedNewVal := plannedNewVal
if len(unmarkedPaths) > 0 {
plannedNewVal = plannedNewVal.MarkWithPaths(unmarkedPaths)
}
// The provider produces a list of paths to attributes whose changes mean
// that we must replace rather than update an existing remote object.
// However, we only need to do that if the identified attributes _have_
// actually changed -- particularly after we may have undone some of the
// changes in processIgnoreChanges -- so now we'll filter that list to
// include only where changes are detected.
reqRep := cty.NewPathSet()
if len(resp.RequiresReplace) > 0 {
for _, path := range resp.RequiresReplace {
if priorVal.IsNull() {
// If prior is null then we don't expect any RequiresReplace at all,
// because this is a Create action.
continue
}
priorChangedVal, priorPathDiags := hcl.ApplyPath(unmarkedPriorVal, path, nil)
plannedChangedVal, plannedPathDiags := hcl.ApplyPath(plannedNewVal, path, nil)
if plannedPathDiags.HasErrors() && priorPathDiags.HasErrors() {
// This means the path was invalid in both the prior and new
// values, which is an error with the provider itself.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q has indicated \"requires replacement\" on %s for a non-existent attribute path %#v.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr, path,
),
))
continue
}
// Make sure we have valid Values for both values.
// Note: if the opposing value was of the type
// cty.DynamicPseudoType, the type assigned here may not exactly
// match the schema. This is fine here, since we're only going to
// check for equality, but if the NullVal is to be used, we need to
// check the schema for the true type.
switch {
case priorChangedVal == cty.NilVal && plannedChangedVal == cty.NilVal:
// this should never happen without ApplyPath errors above
panic("requires replace path returned 2 nil values")
case priorChangedVal == cty.NilVal:
priorChangedVal = cty.NullVal(plannedChangedVal.Type())
case plannedChangedVal == cty.NilVal:
plannedChangedVal = cty.NullVal(priorChangedVal.Type())
}
// Unmark for this value for the equality test. If only sensitivity has changed,
// this does not require an Update or Replace
unmarkedPlannedChangedVal, _ := plannedChangedVal.UnmarkDeep()
eqV := unmarkedPlannedChangedVal.Equals(priorChangedVal)
if !eqV.IsKnown() || eqV.False() {
reqRep.Add(path)
}
}
if diags.HasErrors() {
return diags
}
}
// Unmark for this test for value equality.
eqV := unmarkedPlannedNewVal.Equals(unmarkedPriorVal)
eq := eqV.IsKnown() && eqV.True()
var action plans.Action
switch {
case priorVal.IsNull():
action = plans.Create
case eq:
action = plans.NoOp
case !reqRep.Empty():
// If there are any "requires replace" paths left _after our filtering
// above_ then this is a replace action.
if createBeforeDestroy {
action = plans.CreateThenDelete
} else {
action = plans.DeleteThenCreate
}
default:
action = plans.Update
// "Delete" is never chosen here, because deletion plans are always
// created more directly elsewhere, such as in "orphan" handling.
}
if action.IsReplace() {
// In this strange situation we want to produce a change object that
// shows our real prior object but has a _new_ object that is built
// from a null prior object, since we're going to delete the one
// that has all the computed values on it.
//
// Therefore we'll ask the provider to plan again here, giving it
// a null object for the prior, and then we'll meld that with the
// _actual_ prior state to produce a correctly-shaped replace change.
// The resulting change should show any computed attributes changing
// from known prior values to unknown values, unless the provider is
// able to predict new values for any of these computed attributes.
nullPriorVal := cty.NullVal(schema.ImpliedType())
// Since there is no prior state to compare after replacement, we need
// a new unmarked config from our original with no ignored values.
unmarkedConfigVal := origConfigVal
if origConfigVal.ContainsMarked() {
unmarkedConfigVal, _ = origConfigVal.UnmarkDeep()
}
// create a new proposed value from the null state and the config
proposedNewVal = objchange.ProposedNewObject(schema, nullPriorVal, unmarkedConfigVal)
resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{
TypeName: n.Addr.Resource.Type,
Config: unmarkedConfigVal,
PriorState: nullPriorVal,
ProposedNewState: proposedNewVal,
PriorPrivate: plannedPrivate,
ProviderMeta: metaConfigVal,
})
// We need to tread carefully here, since if there are any warnings
// in here they probably also came out of our previous call to
// PlanResourceChange above, and so we don't want to repeat them.
// Consequently, we break from the usual pattern here and only
// append these new diagnostics if there's at least one error inside.
if resp.Diagnostics.HasErrors() {
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
return diags
}
plannedNewVal = resp.PlannedState
plannedPrivate = resp.PlannedPrivate
if len(unmarkedPaths) > 0 {
plannedNewVal = plannedNewVal.MarkWithPaths(unmarkedPaths)
}
for _, err := range plannedNewVal.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q planned an invalid value for %s%s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr, tfdiags.FormatError(err),
),
))
}
if diags.HasErrors() {
return diags
}
}
// If our prior value was tainted then we actually want this to appear
// as a replace change, even though so far we've been treating it as a
// create.
if action == plans.Create && priorValTainted != cty.NilVal {
if createBeforeDestroy {
action = plans.CreateThenDelete
} else {
action = plans.DeleteThenCreate
}
priorVal = priorValTainted
}
// If we plan to write or delete sensitive paths from state,
// this is an Update action
if action == plans.NoOp && !reflect.DeepEqual(priorPaths, unmarkedPaths) {
action = plans.Update
}
// As a special case, if we have a previous diff (presumably from the plan
// phases, whereas we're now in the apply phase) and it was for a replace,
// we've already deleted the original object from state by the time we
// get here and so we would've ended up with a _create_ action this time,
// which we now need to paper over to get a result consistent with what
// we originally intended.
if n.PreviousDiff != nil {
prevChange := *n.PreviousDiff
if prevChange.Action.IsReplace() && action == plans.Create {
log.Printf("[TRACE] EvalDiff: %s treating Create change as %s change to match with earlier plan", absAddr, prevChange.Action)
action = prevChange.Action
priorVal = prevChange.Before
}
}
// Call post-refresh hook
if !n.Stub {
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(absAddr, states.CurrentGen, action, priorVal, plannedNewVal)
}))
if diags.HasErrors() {
return diags
}
}
// Update our output if we care
if n.OutputChange != nil {
*n.OutputChange = &plans.ResourceInstanceChange{
Addr: absAddr,
Private: plannedPrivate,
ProviderAddr: n.ProviderAddr,
Change: plans.Change{
Action: action,
Before: priorVal,
// Pass the marked planned value through in our change
// to propagate through evaluation.
// Marks will be removed when encoding.
After: plannedNewVal,
},
RequiredReplace: reqRep,
}
}
// Update the state if we care
if n.OutputState != nil {
*n.OutputState = &states.ResourceInstanceObject{
// We use the special "planned" status here to note that this
// object's value is not yet complete. Objects with this status
// cannot be used during expression evaluation, so the caller
// must _also_ record the returned change in the active plan,
// which the expression evaluator will use in preference to this
// incomplete value recorded in the state.
Status: states.ObjectPlanned,
Value: plannedNewVal,
Private: plannedPrivate,
}
}
return diags
}
func (n *EvalDiff) processIgnoreChanges(prior, config cty.Value) (cty.Value, tfdiags.Diagnostics) {
// ignore_changes only applies when an object already exists, since we
// can't ignore changes to a thing we've not created yet.
if prior.IsNull() {
return config, nil
}
ignoreChanges := n.Config.Managed.IgnoreChanges
ignoreAll := n.Config.Managed.IgnoreAllChanges
if len(ignoreChanges) == 0 && !ignoreAll {
return config, nil
}
if ignoreAll {
return prior, nil
}
if prior.IsNull() || config.IsNull() {
// Ignore changes doesn't apply when we're creating for the first time.
// Proposed should never be null here, but if it is then we'll just let it be.
return config, nil
}
return processIgnoreChangesIndividual(prior, config, ignoreChanges)
}
func processIgnoreChangesIndividual(prior, config cty.Value, ignoreChanges []hcl.Traversal) (cty.Value, tfdiags.Diagnostics) {
// When we walk below we will be using cty.Path values for comparison, so
// we'll convert our traversals here so we can compare more easily.
ignoreChangesPath := make([]cty.Path, len(ignoreChanges))
for i, traversal := range ignoreChanges {
path := make(cty.Path, len(traversal))
for si, step := range traversal {
switch ts := step.(type) {
case hcl.TraverseRoot:
path[si] = cty.GetAttrStep{
Name: ts.Name,
}
case hcl.TraverseAttr:
path[si] = cty.GetAttrStep{
Name: ts.Name,
}
case hcl.TraverseIndex:
path[si] = cty.IndexStep{
Key: ts.Key,
}
default:
panic(fmt.Sprintf("unsupported traversal step %#v", step))
}
}
ignoreChangesPath[i] = path
}
type ignoreChange struct {
// Path is the full path, minus any trailing map index
path cty.Path
// Value is the value we are to retain at the above path. If there is a
// key value, this must be a map and the desired value will be at the
// key index.
value cty.Value
// Key is the index key if the ignored path ends in a map index.
key cty.Value
}
var ignoredValues []ignoreChange
// Find the actual changes first and store them in the ignoreChange struct.
// If the change was to a map value, and the key doesn't exist in the
// config, it would never be visited in the transform walk.
for _, icPath := range ignoreChangesPath {
key := cty.NullVal(cty.String)
// check for a map index, since maps are the only structure where we
// could have invalid path steps.
last, ok := icPath[len(icPath)-1].(cty.IndexStep)
if ok {
if last.Key.Type() == cty.String {
icPath = icPath[:len(icPath)-1]
key = last.Key
}
}
// The structure should have been validated already, and we already
// trimmed the trailing map index. Any other intermediate index error
// means we wouldn't be able to apply the value below, so no need to
// record this.
p, err := icPath.Apply(prior)
if err != nil {
continue
}
c, err := icPath.Apply(config)
if err != nil {
continue
}
// If this is a map, it is checking the entire map value for equality
// rather than the individual key. This means that the change is stored
// here even if our ignored key doesn't change. That is OK since it
// won't cause any changes in the transformation, but allows us to skip
// breaking up the maps and checking for key existence here too.
eq := p.Equals(c)
if !eq.IsKnown() || eq.False() {
// there a change to ignore at this path, store the prior value
ignoredValues = append(ignoredValues, ignoreChange{icPath, p, key})
}
}
if len(ignoredValues) == 0 {
return config, nil
}
ret, _ := cty.Transform(config, func(path cty.Path, v cty.Value) (cty.Value, error) {
// Easy path for when we are only matching the entire value. The only
// values we break up for inspection are maps.
if !v.Type().IsMapType() {
for _, ignored := range ignoredValues {
if path.Equals(ignored.path) {
return ignored.value, nil
}
}
return v, nil
}
// We now know this must be a map, so we need to accumulate the values
// key-by-key.
if !v.IsNull() && !v.IsKnown() {
// since v is not known, we cannot ignore individual keys
return v, nil
}
// The configMap is the current configuration value, which we will
// mutate based on the ignored paths and the prior map value.
var configMap map[string]cty.Value
switch {
case v.IsNull() || v.LengthInt() == 0:
configMap = map[string]cty.Value{}
default:
configMap = v.AsValueMap()
}
for _, ignored := range ignoredValues {
if !path.Equals(ignored.path) {
continue
}
if ignored.key.IsNull() {
// The map address is confirmed to match at this point,
// so if there is no key, we want the entire map and can
// stop accumulating values.
return ignored.value, nil
}
// Now we know we are ignoring a specific index of this map, so get
// the config map and modify, add, or remove the desired key.
// We also need to create a prior map, so we can check for
// existence while getting the value, because Value.Index will
// return null for a key with a null value and for a non-existent
// key.
var priorMap map[string]cty.Value
switch {
case ignored.value.IsNull() || ignored.value.LengthInt() == 0:
priorMap = map[string]cty.Value{}
default:
priorMap = ignored.value.AsValueMap()
}
key := ignored.key.AsString()
priorElem, keep := priorMap[key]
switch {
case !keep:
// this didn't exist in the old map value, so we're keeping the
// "absence" of the key by removing it from the config
delete(configMap, key)
default:
configMap[key] = priorElem
}
}
if len(configMap) == 0 {
return cty.MapValEmpty(v.Type().ElementType()), nil
}
return cty.MapVal(configMap), nil
})
return ret, nil
}
// EvalDiffDestroy is an EvalNode implementation that returns a plain
// destroy diff.
type EvalDiffDestroy struct {
Addr addrs.ResourceInstance
DeposedKey states.DeposedKey
State **states.ResourceInstanceObject
ProviderAddr addrs.AbsProviderConfig
Output **plans.ResourceInstanceChange
OutputState **states.ResourceInstanceObject
}
// TODO: test
func (n *EvalDiffDestroy) Eval(ctx EvalContext) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
absAddr := n.Addr.Absolute(ctx.Path())
state := *n.State
if n.ProviderAddr.Provider.Type == "" {
if n.DeposedKey == "" {
panic(fmt.Sprintf("EvalDiffDestroy for %s does not have ProviderAddr set", absAddr))
} else {
panic(fmt.Sprintf("EvalDiffDestroy for %s (deposed %s) does not have ProviderAddr set", absAddr, n.DeposedKey))
}
}
// If there is no state or our attributes object is null then we're already
// destroyed.
if state == nil || state.Value.IsNull() {
return nil
}
// Call pre-diff hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreDiff(
absAddr, n.DeposedKey.Generation(),
state.Value,
cty.NullVal(cty.DynamicPseudoType),
)
}))
if diags.HasErrors() {
return diags
}
// Change is always the same for a destroy. We don't need the provider's
// help for this one.
// TODO: Should we give the provider an opportunity to veto this?
change := &plans.ResourceInstanceChange{
Addr: absAddr,
DeposedKey: n.DeposedKey,
Change: plans.Change{
Action: plans.Delete,
Before: state.Value,
After: cty.NullVal(cty.DynamicPseudoType),
},
Private: state.Private,
ProviderAddr: n.ProviderAddr,
}
// Call post-diff hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(
absAddr,
n.DeposedKey.Generation(),
change.Action,
change.Before,
change.After,
)
}))
if diags.HasErrors() {
return diags
}
// Update our output
*n.Output = change
if n.OutputState != nil {
// Record our proposed new state, which is nil because we're destroying.
*n.OutputState = nil
}
return diags
}
// EvalReduceDiff is an EvalNode implementation that takes a planned resource
// instance change as might be produced by EvalDiff or EvalDiffDestroy and
// "simplifies" it to a single atomic action to be performed by a specific
// graph node.
//
// Callers must specify whether they are a destroy node or a regular apply
// node. If the result is NoOp then the given change requires no action for
// the specific graph node calling this and so evaluation of that graph
// node should exit early and take no action.
//
// The object written to OutChange may either be identical to InChange or
// a new change object derived from InChange. Because of the former case, the
// caller must not mutate the object returned in OutChange.
type EvalReduceDiff struct {
Addr addrs.ResourceInstance
InChange **plans.ResourceInstanceChange
Destroy bool
OutChange **plans.ResourceInstanceChange
}
// TODO: test
func (n *EvalReduceDiff) Eval(ctx EvalContext) tfdiags.Diagnostics {
in := *n.InChange
out := in.Simplify(n.Destroy)
if n.OutChange != nil {
*n.OutChange = out
}
if out.Action != in.Action {
if n.Destroy {
log.Printf("[TRACE] EvalReduceDiff: %s change simplified from %s to %s for destroy node", n.Addr, in.Action, out.Action)
} else {
log.Printf("[TRACE] EvalReduceDiff: %s change simplified from %s to %s for apply node", n.Addr, in.Action, out.Action)
}
}
return nil
}
// EvalWriteDiff is an EvalNode implementation that saves a planned change
// for an instance object into the set of global planned changes.
type EvalWriteDiff struct {
Addr addrs.ResourceInstance
DeposedKey states.DeposedKey
ProviderSchema **ProviderSchema
Change **plans.ResourceInstanceChange
}
// TODO: test
func (n *EvalWriteDiff) Eval(ctx EvalContext) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
changes := ctx.Changes()
addr := n.Addr.Absolute(ctx.Path())
if n.Change == nil || *n.Change == nil {
// Caller sets nil to indicate that we need to remove a change from
// the set of changes.
gen := states.CurrentGen
if n.DeposedKey != states.NotDeposed {
gen = n.DeposedKey
}
changes.RemoveResourceInstanceChange(addr, gen)
return nil
}
providerSchema := *n.ProviderSchema
change := *n.Change
if change.Addr.String() != addr.String() || change.DeposedKey != n.DeposedKey {
// Should never happen, and indicates a bug in the caller.
panic("inconsistent address and/or deposed key in EvalWriteDiff")
}
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
return diags
}
csrc, err := change.Encode(schema.ImpliedType())
if err != nil {
diags = diags.Append(fmt.Errorf("failed to encode planned changes for %s: %s", addr, err))
return diags
}
changes.AppendResourceInstanceChange(csrc)
if n.DeposedKey == states.NotDeposed {
log.Printf("[TRACE] EvalWriteDiff: recorded %s change for %s", change.Action, addr)
} else {
log.Printf("[TRACE] EvalWriteDiff: recorded %s change for %s deposed object %s", change.Action, addr, n.DeposedKey)
}
return diags
}


@@ -1,192 +0,0 @@
package terraform
import (
"fmt"
"log"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
// evalReadData implements shared methods and data for the individual data
// source eval nodes.
type evalReadData struct {
Addr addrs.ResourceInstance
Config *configs.Resource
Provider *providers.Interface
ProviderAddr addrs.AbsProviderConfig
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
ProviderSchema **ProviderSchema
// Planned is set when dealing with data resources that were deferred to
// the apply walk, to let us see what was planned. If this is set, the
// evaluation of the config is required to produce a wholly-known
// configuration which is consistent with the partial object included
// in this planned change.
Planned **plans.ResourceInstanceChange
// State is the current state for the data source, and is updated once the
// new state has been read.
// While data sources are read-only, we need to start with the prior state
// to determine if we have a change or not. If we needed to read a new
// value, but it still matches the previous state, then we can record a
// NoOp change. If the states don't match then we record a Read change so
// that the new value is applied to the state.
State **states.ResourceInstanceObject
// Output change records any change for this data source, which is
// interpreted differently than changes for managed resources.
// - During Refresh, this change is only used to correctly evaluate
// references to the data source, but it is not saved.
// - If a planned change has the action of plans.Read, it indicates that the
// data source could not be evaluated yet, and reading is being deferred to
// apply.
// - If planned action is plans.Update, it indicates that the data source
// was read, and the result needs to be stored in state during apply.
OutputChange **plans.ResourceInstanceChange
// dependsOn stores the list of transitive resource addresses that any
// configuration depends_on references may resolve to. This is used to
// determine if there are any changes that will force this data source to
// be deferred to apply.
dependsOn []addrs.ConfigResource
}
// readDataSource handles everything needed to call ReadDataSource on the provider.
// A previously evaluated configVal can be passed in, or a new one is generated
// from the resource configuration.
func (n *evalReadData) readDataSource(ctx EvalContext, configVal cty.Value) (cty.Value, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
var newVal cty.Value
config := *n.Config
absAddr := n.Addr.Absolute(ctx.Path())
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
return newVal, diags
}
provider := *n.Provider
providerSchema := *n.ProviderSchema
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
return newVal, diags
}
metaConfigVal, metaDiags := n.providerMetas(ctx)
diags = diags.Append(metaDiags)
if diags.HasErrors() {
return newVal, diags
}
log.Printf("[TRACE] EvalReadData: Re-validating config for %s", absAddr)
validateResp := provider.ValidateDataSourceConfig(
providers.ValidateDataSourceConfigRequest{
TypeName: n.Addr.Resource.Type,
Config: configVal,
},
)
if validateResp.Diagnostics.HasErrors() {
return newVal, validateResp.Diagnostics.InConfigBody(config.Config)
}
// If we get down here then our configuration is complete and we're ready
// to actually call the provider to read the data.
log.Printf("[TRACE] EvalReadData: %s configuration is complete, so reading from provider", absAddr)
resp := provider.ReadDataSource(providers.ReadDataSourceRequest{
TypeName: n.Addr.Resource.Type,
Config: configVal,
ProviderMeta: metaConfigVal,
})
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
if diags.HasErrors() {
return newVal, diags
}
newVal = resp.State
if newVal == cty.NilVal {
// This can happen with incompletely-configured mocks. We'll allow it
// and treat it as an alias for a properly-typed null value.
newVal = cty.NullVal(schema.ImpliedType())
}
for _, err := range newVal.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid object",
fmt.Sprintf(
"Provider %q produced an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
),
))
}
if diags.HasErrors() {
return newVal, diags
}
if newVal.IsNull() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced null object",
fmt.Sprintf(
"Provider %q produced a null value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr,
),
))
}
if !newVal.IsNull() && !newVal.IsWhollyKnown() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid object",
fmt.Sprintf(
"Provider %q produced a value for %s that is not wholly known.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr,
),
))
// We'll still save the object, but we need to eliminate any unknown
// values first because we can't serialize them in the state file.
// Note that this may cause set elements to be coalesced if they
// differed only by having unknown values, but we don't worry about
// that here because we're saving the value only for inspection
// purposes; the error we added above will halt the graph walk.
newVal = cty.UnknownAsNull(newVal)
}
return newVal, diags
}
func (n *evalReadData) providerMetas(ctx EvalContext) (cty.Value, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
if n.ProviderMetas != nil {
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
// if the provider doesn't support this feature, throw an error
if (*n.ProviderSchema).ProviderMeta == nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
Subject: &m.ProviderRange,
})
} else {
var configDiags tfdiags.Diagnostics
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
diags = diags.Append(configDiags)
}
}
}
return metaConfigVal, diags
}


@@ -1,84 +0,0 @@
package terraform
import (
"fmt"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
// evalReadDataApply is an EvalNode implementation that deals with the main part
// of the data resource lifecycle: either actually reading from the data source
// or generating a plan to do so.
type evalReadDataApply struct {
evalReadData
}
func (n *evalReadDataApply) Eval(ctx EvalContext) tfdiags.Diagnostics {
absAddr := n.Addr.Absolute(ctx.Path())
var diags tfdiags.Diagnostics
var planned *plans.ResourceInstanceChange
if n.Planned != nil {
planned = *n.Planned
}
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
return diags
}
if planned != nil && planned.Action != plans.Read {
// If any other action gets in here then that's always a bug; this
// EvalNode only deals with reading.
diags = diags.Append(fmt.Errorf(
"invalid action %s for %s: only Read is supported (this is a bug in Terraform; please report it!)",
planned.Action, absAddr,
))
return diags
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreApply(absAddr, states.CurrentGen, planned.Action, planned.Before, planned.After)
}))
if diags.HasErrors() {
return diags
}
config := *n.Config
providerSchema := *n.ProviderSchema
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
return diags
}
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
configVal, _, configDiags := ctx.EvaluateBlock(config.Config, schema, nil, keyData)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
newVal, readDiags := n.readDataSource(ctx, configVal)
diags = diags.Append(readDiags)
if diags.HasErrors() {
return diags
}
*n.State = &states.ResourceInstanceObject{
Value: newVal,
Status: states.ObjectReady,
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostApply(absAddr, states.CurrentGen, newVal, diags.Err())
}))
return diags
}


@@ -1,170 +0,0 @@
package terraform
import (
"fmt"
"log"
"strings"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/plans/objchange"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
// evalReadDataPlan is an EvalNode implementation that deals with the main part
// of the data resource lifecycle: either actually reading from the data source
// or generating a plan to do so.
type evalReadDataPlan struct {
evalReadData
}
func (n *evalReadDataPlan) Eval(ctx EvalContext) tfdiags.Diagnostics {
absAddr := n.Addr.Absolute(ctx.Path())
var diags tfdiags.Diagnostics
var configVal cty.Value
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
return diags
}
config := *n.Config
providerSchema := *n.ProviderSchema
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
return diags
}
objTy := schema.ImpliedType()
priorVal := cty.NullVal(objTy)
if n.State != nil && *n.State != nil {
priorVal = (*n.State).Value
}
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
var configDiags tfdiags.Diagnostics
configVal, _, configDiags = ctx.EvaluateBlock(config.Config, schema, nil, keyData)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
configKnown := configVal.IsWhollyKnown()
// If our configuration contains any unknown values, or we depend on any
// unknown values then we must defer the read to the apply phase by
// producing a "Read" change for this resource, and a placeholder value for
// it in the state.
if n.forcePlanRead(ctx) || !configKnown {
if configKnown {
log.Printf("[TRACE] evalReadDataPlan: %s configuration is fully known, but we're forcing a read plan to be created", absAddr)
} else {
log.Printf("[TRACE] evalReadDataPlan: %s configuration not fully known yet, so deferring to apply phase", absAddr)
}
proposedNewVal := objchange.PlannedDataResourceObject(schema, configVal)
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreDiff(absAddr, states.CurrentGen, priorVal, proposedNewVal)
}))
if diags.HasErrors() {
return diags
}
// Apply detects that the data source will need to be read by the After
// value containing unknowns from PlannedDataResourceObject.
*n.OutputChange = &plans.ResourceInstanceChange{
Addr: absAddr,
ProviderAddr: n.ProviderAddr,
Change: plans.Change{
Action: plans.Read,
Before: priorVal,
After: proposedNewVal,
},
}
*n.State = &states.ResourceInstanceObject{
Value: proposedNewVal,
Status: states.ObjectPlanned,
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(absAddr, states.CurrentGen, plans.Read, priorVal, proposedNewVal)
}))
return diags
}
// We have a complete configuration with no dependencies to wait on, so we
// can read the data source into the state.
newVal, readDiags := n.readDataSource(ctx, configVal)
diags = diags.Append(readDiags)
if diags.HasErrors() {
return diags
}
// if we have a prior value, we can check for any irregularities in the response
if !priorVal.IsNull() {
// While we don't propose planned changes for data sources, we can
// generate a proposed value for comparison to ensure the data source
// is returning a result following the rules of the provider contract.
proposedVal := objchange.ProposedNewObject(schema, priorVal, configVal)
if errs := objchange.AssertObjectCompatible(schema, proposedVal, newVal); len(errs) > 0 {
// Resources have the LegacyTypeSystem field to signal when they are
// using an SDK which may not produce precise values. While data
// sources are read-only, they can still return a value which is not
// compatible with the config+schema. Since we can't detect the legacy
// type system, we can only warn about this for now.
var buf strings.Builder
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s.",
n.ProviderAddr.Provider.String(), absAddr)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
}
}
*n.State = &states.ResourceInstanceObject{
Value: newVal,
Status: states.ObjectReady,
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(absAddr, states.CurrentGen, plans.Update, priorVal, newVal)
}))
return diags
}
// forcePlanRead determines if we need to override the usual behavior of
// immediately reading from the data source where possible, instead forcing us
// to generate a plan.
func (n *evalReadDataPlan) forcePlanRead(ctx EvalContext) bool {
// Check and see if any depends_on dependencies have
// changes, since they won't show up as changes in the
// configuration.
changes := ctx.Changes()
for _, d := range n.dependsOn {
if d.Resource.Mode == addrs.DataResourceMode {
// Data sources have no external side effects, so they pose a need
// to delay this read. If they do have a change planned, it must be
// because of a dependency on a managed resource, in which case
// we'll also encounter it in this list of dependencies.
continue
}
for _, change := range changes.GetChangesForConfigResource(d) {
if change != nil && change.Action != plans.NoOp {
return true
}
}
}
return false
}


@@ -1,165 +0,0 @@
package terraform
import (
"fmt"
"log"
"strings"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/plans/objchange"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
// EvalRefresh is an EvalNode implementation that does a refresh for
// a resource.
type EvalRefresh struct {
Addr addrs.ResourceInstance
ProviderAddr addrs.AbsProviderConfig
Provider *providers.Interface
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
ProviderSchema **ProviderSchema
State **states.ResourceInstanceObject
Output **states.ResourceInstanceObject
}
// TODO: test
func (n *EvalRefresh) Eval(ctx EvalContext) tfdiags.Diagnostics {
state := *n.State
absAddr := n.Addr.Absolute(ctx.Path())
var diags tfdiags.Diagnostics
// If we have no state, we don't do any refreshing
if state == nil {
log.Printf("[DEBUG] refresh: %s: no state, so not refreshing", n.Addr.Absolute(ctx.Path()))
return diags
}
schema, _ := (*n.ProviderSchema).SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Type))
return diags
}
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
if n.ProviderMetas != nil {
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
log.Printf("[DEBUG] EvalRefresh: ProviderMeta config value set")
// if the provider doesn't support this feature, throw an error
if (*n.ProviderSchema).ProviderMeta == nil {
log.Printf("[DEBUG] EvalRefresh: no ProviderMeta schema")
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
Subject: &m.ProviderRange,
})
} else {
log.Printf("[DEBUG] EvalRefresh: ProviderMeta schema found: %+v", (*n.ProviderSchema).ProviderMeta)
var configDiags tfdiags.Diagnostics
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
}
}
}
// Call pre-refresh hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreRefresh(absAddr, states.CurrentGen, state.Value)
}))
if diags.HasErrors() {
return diags
}
// Refresh!
priorVal := state.Value
// Unmarked before sending to provider
var priorPaths []cty.PathValueMarks
if priorVal.ContainsMarked() {
priorVal, priorPaths = priorVal.UnmarkDeepWithPaths()
}
req := providers.ReadResourceRequest{
TypeName: n.Addr.Resource.Type,
PriorState: priorVal,
Private: state.Private,
ProviderMeta: metaConfigVal,
}
provider := *n.Provider
resp := provider.ReadResource(req)
diags = diags.Append(resp.Diagnostics)
if diags.HasErrors() {
return diags
}
if resp.NewState == cty.NilVal {
// This ought not to happen in real cases since it's not possible to
// send NilVal over the plugin RPC channel, but it can come up in
// tests due to sloppy mocking.
panic("new state is cty.NilVal")
}
for _, err := range resp.NewState.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid object",
fmt.Sprintf(
"Provider %q planned an invalid value for %s during refresh: %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr, tfdiags.FormatError(err),
),
))
}
if diags.HasErrors() {
return diags
}
// We have no way to exempt providers using the legacy SDK from this check,
// so we can only log inconsistencies with the updated state values.
// In most cases these are not errors anyway, and represent "drift" from
// external changes which will be handled by the subsequent plan.
if errs := objchange.AssertObjectCompatible(schema, priorVal, resp.NewState); len(errs) > 0 {
var buf strings.Builder
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s during refresh.", n.ProviderAddr.Provider.String(), absAddr)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
}
newState := state.DeepCopy()
newState.Value = resp.NewState
newState.Private = resp.Private
newState.Dependencies = state.Dependencies
newState.CreateBeforeDestroy = state.CreateBeforeDestroy
// Call post-refresh hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostRefresh(absAddr, states.CurrentGen, priorVal, newState.Value)
}))
if diags.HasErrors() {
return diags
}
// Mark the value if necessary
if len(priorPaths) > 0 {
newState.Value = newState.Value.MarkWithPaths(priorPaths)
}
if n.Output != nil {
*n.Output = newState
}
return diags
}


@@ -345,9 +345,9 @@ func (n *NodeAbstractResource) writeResourceState(ctx EvalContext, addr addrs.Ab
return diags
}
// ReadResourceInstanceState reads the current object for a specific instance in
// readResourceInstanceState reads the current object for a specific instance in
// the state.
func (n *NodeAbstractResource) ReadResourceInstanceState(ctx EvalContext, addr addrs.AbsResourceInstance) (*states.ResourceInstanceObject, error) {
func (n *NodeAbstractResource) readResourceInstanceState(ctx EvalContext, addr addrs.AbsResourceInstance) (*states.ResourceInstanceObject, error) {
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
if err != nil {
return nil, err


@@ -3,10 +3,14 @@ package terraform
import (
"fmt"
"log"
"reflect"
"strings"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/plans/objchange"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
"github.com/zclconf/go-cty/cty"
@@ -224,8 +228,8 @@ func (n *NodeAbstractResourceInstance) PreApplyHook(ctx EvalContext, change *pla
return nil
}
// PostApplyHook calls the post-Apply hook
func (n *NodeAbstractResourceInstance) PostApplyHook(ctx EvalContext, state *states.ResourceInstanceObject, err *error) tfdiags.Diagnostics {
// postApplyHook calls the post-Apply hook
func (n *NodeAbstractResourceInstance) postApplyHook(ctx EvalContext, state *states.ResourceInstanceObject, err *error) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
if resourceHasUserVisibleApply(n.Addr.Resource) {
@@ -317,3 +321,874 @@ func (n *NodeAbstractResourceInstance) writeResourceInstanceState(ctx EvalContex
state.SetResourceInstanceCurrent(absAddr, src, n.ResolvedProvider)
return nil
}
// planDestroy returns a plain destroy diff.
func (n *NodeAbstractResourceInstance) planDestroy(ctx EvalContext, currentState *states.ResourceInstanceObject, deposedKey states.DeposedKey) (*plans.ResourceInstanceChange, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
absAddr := n.Addr
if n.ResolvedProvider.Provider.Type == "" {
if deposedKey == "" {
panic(fmt.Sprintf("DestroyPlan for %s does not have ProviderAddr set", absAddr))
} else {
panic(fmt.Sprintf("DestroyPlan for %s (deposed %s) does not have ProviderAddr set", absAddr, deposedKey))
}
}
// If there is no state or our attributes object is null then we're already
// destroyed.
if currentState == nil || currentState.Value.IsNull() {
return nil, nil
}
// Call pre-diff hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreDiff(
absAddr, deposedKey.Generation(),
currentState.Value,
cty.NullVal(cty.DynamicPseudoType),
)
}))
if diags.HasErrors() {
return nil, diags
}
// Plan is always the same for a destroy. We don't need the provider's
// help for this one.
plan := &plans.ResourceInstanceChange{
Addr: absAddr,
DeposedKey: deposedKey,
Change: plans.Change{
Action: plans.Delete,
Before: currentState.Value,
After: cty.NullVal(cty.DynamicPseudoType),
},
Private: currentState.Private,
ProviderAddr: n.ResolvedProvider,
}
// Call post-diff hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(
absAddr,
deposedKey.Generation(),
plan.Action,
plan.Before,
plan.After,
)
}))
return plan, diags
}
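// Illustration only (not part of this change): callers in the plan walk
// typically pair planDestroy with writeChange, as the node diffs below do:
//
//	change, destroyPlanDiags := n.planDestroy(ctx, state, "")
//	diags = diags.Append(destroyPlanDiags)
//	if diags.HasErrors() {
//		return diags
//	}
//	diags = diags.Append(n.writeChange(ctx, change, ""))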
// writeChange saves a planned change for an instance object into the set of
// global planned changes.
func (n *NodeAbstractResourceInstance) writeChange(ctx EvalContext, change *plans.ResourceInstanceChange, deposedKey states.DeposedKey) error {
changes := ctx.Changes()
if change == nil {
// Caller sets nil to indicate that we need to remove a change from
// the set of changes.
gen := states.CurrentGen
if deposedKey != states.NotDeposed {
gen = deposedKey
}
changes.RemoveResourceInstanceChange(n.Addr, gen)
return nil
}
_, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
if err != nil {
return err
}
if change.Addr.String() != n.Addr.String() || change.DeposedKey != deposedKey {
// Should never happen, and indicates a bug in the caller.
panic("inconsistent address and/or deposed key in WriteChange")
}
ri := n.Addr.Resource
schema, _ := providerSchema.SchemaForResourceAddr(ri.Resource)
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
return fmt.Errorf("provider does not support resource type %q", ri.Resource.Type)
}
csrc, err := change.Encode(schema.ImpliedType())
if err != nil {
return fmt.Errorf("failed to encode planned changes for %s: %s", n.Addr, err)
}
changes.AppendResourceInstanceChange(csrc)
if deposedKey == states.NotDeposed {
log.Printf("[TRACE] WriteChange: recorded %s change for %s", change.Action, n.Addr)
} else {
log.Printf("[TRACE] WriteChange: recorded %s change for %s deposed object %s", change.Action, n.Addr, deposedKey)
}
return nil
}
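// Illustration only (not part of this change): passing a nil change removes
// any previously planned change for this instance, which the apply-time nodes
// use once a change has been fully handled:
//
//	diags = diags.Append(n.writeChange(ctx, nil, ""))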
// refresh does a refresh for a resource
func (n *NodeAbstractResourceInstance) refresh(ctx EvalContext, state *states.ResourceInstanceObject) (*states.ResourceInstanceObject, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
absAddr := n.Addr
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
if err != nil {
return state, diags.Append(err)
}
// If we have no state, we don't do any refreshing
if state == nil {
log.Printf("[DEBUG] refresh: %s: no state, so not refreshing", absAddr)
return state, diags
}
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.Resource.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", n.Addr.Resource.Resource.Type))
return state, diags
}
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
if n.ProviderMetas != nil {
if m, ok := n.ProviderMetas[n.ResolvedProvider.Provider]; ok && m != nil {
log.Printf("[DEBUG] EvalRefresh: ProviderMeta config value set")
// if the provider doesn't support this feature, throw an error
if providerSchema.ProviderMeta == nil {
log.Printf("[DEBUG] EvalRefresh: no ProviderMeta schema")
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ResolvedProvider.Provider.String()),
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr.Resource),
Subject: &m.ProviderRange,
})
} else {
log.Printf("[DEBUG] EvalRefresh: ProviderMeta schema found: %+v", providerSchema.ProviderMeta)
var configDiags tfdiags.Diagnostics
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, providerSchema.ProviderMeta, nil, EvalDataForNoInstanceKey)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return state, diags
}
}
}
}
// Call pre-refresh hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreRefresh(absAddr, states.CurrentGen, state.Value)
}))
if diags.HasErrors() {
return state, diags
}
// Refresh!
priorVal := state.Value
// Unmarked before sending to provider
var priorPaths []cty.PathValueMarks
if priorVal.ContainsMarked() {
priorVal, priorPaths = priorVal.UnmarkDeepWithPaths()
}
providerReq := providers.ReadResourceRequest{
TypeName: n.Addr.Resource.Resource.Type,
PriorState: priorVal,
Private: state.Private,
ProviderMeta: metaConfigVal,
}
resp := provider.ReadResource(providerReq)
diags = diags.Append(resp.Diagnostics)
if diags.HasErrors() {
return state, diags
}
if resp.NewState == cty.NilVal {
// This ought not to happen in real cases since it's not possible to
// send NilVal over the plugin RPC channel, but it can come up in
// tests due to sloppy mocking.
panic("new state is cty.NilVal")
}
for _, err := range resp.NewState.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid object",
fmt.Sprintf(
"Provider %q planned an invalid value for %s during refresh: %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ResolvedProvider.Provider.String(), absAddr, tfdiags.FormatError(err),
),
))
}
if diags.HasErrors() {
return state, diags
}
// We have no way to exempt providers using the legacy SDK from this check,
// so we can only log inconsistencies with the updated state values.
// In most cases these are not errors anyway, and represent "drift" from
// external changes which will be handled by the subsequent plan.
if errs := objchange.AssertObjectCompatible(schema, priorVal, resp.NewState); len(errs) > 0 {
var buf strings.Builder
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s during refresh.", n.ResolvedProvider.Provider.String(), absAddr)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
}
ret := state.DeepCopy()
ret.Value = resp.NewState
ret.Private = resp.Private
ret.Dependencies = state.Dependencies
ret.CreateBeforeDestroy = state.CreateBeforeDestroy
// Call post-refresh hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostRefresh(absAddr, states.CurrentGen, priorVal, ret.Value)
}))
if diags.HasErrors() {
return ret, diags
}
// Mark the value if necessary
if len(priorPaths) > 0 {
ret.Value = ret.Value.MarkWithPaths(priorPaths)
}
return ret, diags
}
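// Illustration only (not part of this change): refresh now returns the updated
// object rather than writing through an Output pointer, so a caller reads:
//
//	state, refreshDiags := n.refresh(ctx, state)
//	diags = diags.Append(refreshDiags)
//	if diags.HasErrors() {
//		return diags
//	}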
func (n *NodeAbstractResourceInstance) plan(
ctx EvalContext,
plannedChange *plans.ResourceInstanceChange,
currentState *states.ResourceInstanceObject,
createBeforeDestroy bool) (*plans.ResourceInstanceChange, *states.ResourceInstanceObject, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
var state *states.ResourceInstanceObject
var plan *plans.ResourceInstanceChange
config := *n.Config
resource := n.Addr.Resource.Resource
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
if err != nil {
return plan, state, diags.Append(err)
}
if plannedChange != nil {
// If we already planned the action, we stick to that plan
createBeforeDestroy = plannedChange.Action == plans.CreateThenDelete
}
if providerSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema is unavailable for %s", n.Addr))
return plan, state, diags
}
// Evaluate the configuration
schema, _ := providerSchema.SchemaForResourceAddr(resource)
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support resource type %q", resource.Type))
return plan, state, diags
}
forEach, _ := evaluateForEachExpression(n.Config.ForEach, ctx)
keyData := EvalDataForInstanceKey(n.ResourceInstanceAddr().Resource.Key, forEach)
origConfigVal, _, configDiags := ctx.EvaluateBlock(config.Config, schema, nil, keyData)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return plan, state, diags
}
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
if n.ProviderMetas != nil {
if m, ok := n.ProviderMetas[n.ResolvedProvider.Provider]; ok && m != nil {
// if the provider doesn't support this feature, throw an error
if providerSchema.ProviderMeta == nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ResolvedProvider.Provider),
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr.Resource),
Subject: &m.ProviderRange,
})
} else {
var configDiags tfdiags.Diagnostics
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, providerSchema.ProviderMeta, nil, EvalDataForNoInstanceKey)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return plan, state, diags
}
}
}
}
var priorVal cty.Value
var priorValTainted cty.Value
var priorPrivate []byte
if currentState != nil {
if currentState.Status != states.ObjectTainted {
priorVal = currentState.Value
priorPrivate = currentState.Private
} else {
// If the prior state is tainted then we'll proceed below like
// we're creating an entirely new object, but then turn it into
// a synthetic "Replace" change at the end, creating the same
// result as if the provider had marked at least one argument
// change as "requires replacement".
priorValTainted = currentState.Value
priorVal = cty.NullVal(schema.ImpliedType())
}
} else {
priorVal = cty.NullVal(schema.ImpliedType())
}
// Create an unmarked version of our config val and our prior val.
// Store the paths for the config val to re-mark after
// we've sent things over the wire.
unmarkedConfigVal, unmarkedPaths := origConfigVal.UnmarkDeepWithPaths()
unmarkedPriorVal, priorPaths := priorVal.UnmarkDeepWithPaths()
log.Printf("[TRACE] Re-validating config for %q", n.Addr)
// Allow the provider to validate the final set of values.
// The config was statically validated early on, but there may have been
// unknown values which the provider could not validate at the time.
// TODO: It would be more correct to validate the config after
// ignore_changes has been applied, but the current implementation cannot
// exclude computed-only attributes when given the `all` option.
validateResp := provider.ValidateResourceTypeConfig(
providers.ValidateResourceTypeConfigRequest{
TypeName: n.Addr.Resource.Resource.Type,
Config: unmarkedConfigVal,
},
)
if validateResp.Diagnostics.HasErrors() {
diags = diags.Append(validateResp.Diagnostics.InConfigBody(config.Config))
return plan, state, diags
}
// ignore_changes is meant to only apply to the configuration, so it must
// be applied before we generate a plan. This ensures the config used for
// the proposed value, the proposed value itself, and the config presented
// to the provider in the PlanResourceChange request all agree on the
// starting values.
configValIgnored, ignoreChangeDiags := n.processIgnoreChanges(unmarkedPriorVal, unmarkedConfigVal)
diags = diags.Append(ignoreChangeDiags)
if ignoreChangeDiags.HasErrors() {
return plan, state, diags
}
proposedNewVal := objchange.ProposedNewObject(schema, unmarkedPriorVal, configValIgnored)
// Call pre-diff hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreDiff(n.Addr, states.CurrentGen, priorVal, proposedNewVal)
}))
if diags.HasErrors() {
return plan, state, diags
}
resp := provider.PlanResourceChange(providers.PlanResourceChangeRequest{
TypeName: n.Addr.Resource.Resource.Type,
Config: configValIgnored,
PriorState: unmarkedPriorVal,
ProposedNewState: proposedNewVal,
PriorPrivate: priorPrivate,
ProviderMeta: metaConfigVal,
})
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
if diags.HasErrors() {
return plan, state, diags
}
plannedNewVal := resp.PlannedState
plannedPrivate := resp.PlannedPrivate
if plannedNewVal == cty.NilVal {
// Should never happen. Since real-world providers return via RPC, a nil
// is always a bug in the client-side stub. This is more likely caused
// by an incompletely-configured mock provider in tests, though.
panic(fmt.Sprintf("PlanResourceChange of %s produced nil value", n.Addr))
}
// We allow the planned new value to disagree with configuration _values_
// here, since that allows the provider to do special logic like a
// DiffSuppressFunc, but we still require that the provider produces
// a value whose type conforms to the schema.
for _, err := range plannedNewVal.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ResolvedProvider.Provider, tfdiags.FormatErrorPrefixed(err, n.Addr.String()),
),
))
}
if diags.HasErrors() {
return plan, state, diags
}
if errs := objchange.AssertPlanValid(schema, unmarkedPriorVal, configValIgnored, plannedNewVal); len(errs) > 0 {
if resp.LegacyTypeSystem {
// The shimming of the old type system in the legacy SDK is not precise
// enough to pass this consistency check, so we'll give it a pass here,
// but we will generate a warning about it so that we are more likely
// to notice in the logs if an inconsistency beyond the type system
// leads to a downstream provider failure.
var buf strings.Builder
fmt.Fprintf(&buf,
"[WARN] Provider %q produced an invalid plan for %s, but we are tolerating it because it is using the legacy plugin SDK.\n The following problems may be the cause of any confusing errors from downstream operations:",
n.ResolvedProvider.Provider, n.Addr,
)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
} else {
for _, err := range errs {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q planned an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ResolvedProvider.Provider, tfdiags.FormatErrorPrefixed(err, n.Addr.String()),
),
))
}
return plan, state, diags
}
}
if resp.LegacyTypeSystem {
// Because we allow legacy providers to depart from the contract and
// return changes to non-computed values, the plan response may have
// altered values that were already suppressed with ignore_changes.
// A prime example of this is where providers attempt to obfuscate
// config data by turning the config value into a hash and storing the
// hash value in the state. There are enough cases of this in existing
// providers that we must accommodate the behavior for now, so for
// ignore_changes to work at all on these values, we will revert the
// ignored values once more.
plannedNewVal, ignoreChangeDiags = n.processIgnoreChanges(unmarkedPriorVal, plannedNewVal)
diags = diags.Append(ignoreChangeDiags)
if ignoreChangeDiags.HasErrors() {
return plan, state, diags
}
}
// Add the marks back to the planned new value -- this must happen after ignore changes
// have been processed
unmarkedPlannedNewVal := plannedNewVal
if len(unmarkedPaths) > 0 {
plannedNewVal = plannedNewVal.MarkWithPaths(unmarkedPaths)
}
// The provider produces a list of paths to attributes whose changes mean
// that we must replace rather than update an existing remote object.
// However, we only need to do that if the identified attributes _have_
// actually changed -- particularly after we may have undone some of the
// changes in processIgnoreChanges -- so now we'll filter that list to
// include only where changes are detected.
reqRep := cty.NewPathSet()
if len(resp.RequiresReplace) > 0 {
for _, path := range resp.RequiresReplace {
if priorVal.IsNull() {
// If prior is null then we don't expect any RequiresReplace at all,
// because this is a Create action.
continue
}
priorChangedVal, priorPathDiags := hcl.ApplyPath(unmarkedPriorVal, path, nil)
plannedChangedVal, plannedPathDiags := hcl.ApplyPath(plannedNewVal, path, nil)
if plannedPathDiags.HasErrors() && priorPathDiags.HasErrors() {
// This means the path was invalid in both the prior and new
// values, which is an error with the provider itself.
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q has indicated \"requires replacement\" on %s for a non-existent attribute path %#v.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ResolvedProvider.Provider, n.Addr, path,
),
))
continue
}
// Make sure we have valid Values for both values.
// Note: if the opposing value was of the type
// cty.DynamicPseudoType, the type assigned here may not exactly
// match the schema. This is fine here, since we're only going to
// check for equality, but if the NullVal is to be used, we need to
// check the schema for the true type.
switch {
case priorChangedVal == cty.NilVal && plannedChangedVal == cty.NilVal:
// this should never happen without ApplyPath errors above
panic("requires replace path returned 2 nil values")
case priorChangedVal == cty.NilVal:
priorChangedVal = cty.NullVal(plannedChangedVal.Type())
case plannedChangedVal == cty.NilVal:
plannedChangedVal = cty.NullVal(priorChangedVal.Type())
}
// Unmark for this value for the equality test. If only sensitivity has changed,
// this does not require an Update or Replace
unmarkedPlannedChangedVal, _ := plannedChangedVal.UnmarkDeep()
eqV := unmarkedPlannedChangedVal.Equals(priorChangedVal)
if !eqV.IsKnown() || eqV.False() {
reqRep.Add(path)
}
}
if diags.HasErrors() {
return plan, state, diags
}
}
// Unmark for this test for value equality.
eqV := unmarkedPlannedNewVal.Equals(unmarkedPriorVal)
eq := eqV.IsKnown() && eqV.True()
var action plans.Action
switch {
case priorVal.IsNull():
action = plans.Create
case eq:
action = plans.NoOp
case !reqRep.Empty():
// If there are any "requires replace" paths left _after our filtering
// above_ then this is a replace action.
if createBeforeDestroy {
action = plans.CreateThenDelete
} else {
action = plans.DeleteThenCreate
}
default:
action = plans.Update
// "Delete" is never chosen here, because deletion plans are always
// created more directly elsewhere, such as in "orphan" handling.
}
if action.IsReplace() {
// In this strange situation we want to produce a change object that
// shows our real prior object but has a _new_ object that is built
// from a null prior object, since we're going to delete the one
// that has all the computed values on it.
//
// Therefore we'll ask the provider to plan again here, giving it
// a null object for the prior, and then we'll meld that with the
// _actual_ prior state to produce a correctly-shaped replace change.
// The resulting change should show any computed attributes changing
// from known prior values to unknown values, unless the provider is
// able to predict new values for any of these computed attributes.
nullPriorVal := cty.NullVal(schema.ImpliedType())
// Since there is no prior state to compare after replacement, we need
// a new unmarked config from our original with no ignored values.
unmarkedConfigVal := origConfigVal
if origConfigVal.ContainsMarked() {
unmarkedConfigVal, _ = origConfigVal.UnmarkDeep()
}
// create a new proposed value from the null state and the config
proposedNewVal = objchange.ProposedNewObject(schema, nullPriorVal, unmarkedConfigVal)
resp = provider.PlanResourceChange(providers.PlanResourceChangeRequest{
TypeName: n.Addr.Resource.Resource.Type,
Config: unmarkedConfigVal,
PriorState: nullPriorVal,
ProposedNewState: proposedNewVal,
PriorPrivate: plannedPrivate,
ProviderMeta: metaConfigVal,
})
// We need to tread carefully here, since if there are any warnings
// in here they probably also came out of our previous call to
// PlanResourceChange above, and so we don't want to repeat them.
// Consequently, we break from the usual pattern here and only
// append these new diagnostics if there's at least one error inside.
if resp.Diagnostics.HasErrors() {
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
return plan, state, diags
}
plannedNewVal = resp.PlannedState
plannedPrivate = resp.PlannedPrivate
if len(unmarkedPaths) > 0 {
plannedNewVal = plannedNewVal.MarkWithPaths(unmarkedPaths)
}
for _, err := range plannedNewVal.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid plan",
fmt.Sprintf(
"Provider %q planned an invalid value for %s%s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ResolvedProvider.Provider, n.Addr, tfdiags.FormatError(err),
),
))
}
if diags.HasErrors() {
return plan, state, diags
}
}
// If our prior value was tainted then we actually want this to appear
// as a replace change, even though so far we've been treating it as a
// create.
if action == plans.Create && priorValTainted != cty.NilVal {
if createBeforeDestroy {
action = plans.CreateThenDelete
} else {
action = plans.DeleteThenCreate
}
priorVal = priorValTainted
}
// If we plan to write or delete sensitive paths from state,
// this is an Update action
if action == plans.NoOp && !reflect.DeepEqual(priorPaths, unmarkedPaths) {
action = plans.Update
}
// As a special case, if we have a previous diff (presumably from the plan
// phases, whereas we're now in the apply phase) and it was for a replace,
// we've already deleted the original object from state by the time we
// get here and so we would've ended up with a _create_ action this time,
// which we now need to paper over to get a result consistent with what
// we originally intended.
if plannedChange != nil {
prevChange := *plannedChange
if prevChange.Action.IsReplace() && action == plans.Create {
log.Printf("[TRACE] EvalDiff: %s treating Create change as %s change to match with earlier plan", n.Addr, prevChange.Action)
action = prevChange.Action
priorVal = prevChange.Before
}
}
// Call post-refresh hook
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(n.Addr, states.CurrentGen, action, priorVal, plannedNewVal)
}))
if diags.HasErrors() {
return plan, state, diags
}
// Update our return plan
plan = &plans.ResourceInstanceChange{
Addr: n.Addr,
Private: plannedPrivate,
ProviderAddr: n.ResolvedProvider,
Change: plans.Change{
Action: action,
Before: priorVal,
// Pass the marked planned value through in our change
// to propagate through evaluation.
// Marks will be removed when encoding.
After: plannedNewVal,
},
RequiredReplace: reqRep,
}
// Update our return state
state = &states.ResourceInstanceObject{
// We use the special "planned" status here to note that this
// object's value is not yet complete. Objects with this status
// cannot be used during expression evaluation, so the caller
// must _also_ record the returned change in the active plan,
// which the expression evaluator will use in preference to this
// incomplete value recorded in the state.
Status: states.ObjectPlanned,
Value: plannedNewVal,
Private: plannedPrivate,
}
return plan, state, diags
}
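// Illustration only (not part of this change): during the plan walk a node can
// now chain these methods as ordinary calls, roughly:
//
//	instanceRefreshState, refreshDiags := n.refresh(ctx, instanceRefreshState)
//	diags = diags.Append(refreshDiags)
//	change, instancePlanState, planDiags := n.plan(ctx, change, instanceRefreshState, n.ForceCreateBeforeDestroy)
//	diags = diags.Append(planDiags)
//	diags = diags.Append(n.writeChange(ctx, change, ""))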
func (n *NodeAbstractResource) processIgnoreChanges(prior, config cty.Value) (cty.Value, tfdiags.Diagnostics) {
// ignore_changes only applies when an object already exists, since we
// can't ignore changes to a thing we've not created yet.
if prior.IsNull() {
return config, nil
}
ignoreChanges := n.Config.Managed.IgnoreChanges
ignoreAll := n.Config.Managed.IgnoreAllChanges
if len(ignoreChanges) == 0 && !ignoreAll {
return config, nil
}
if ignoreAll {
return prior, nil
}
if prior.IsNull() || config.IsNull() {
// Ignore changes doesn't apply when we're creating for the first time.
// Proposed should never be null here, but if it is then we'll just let it be.
return config, nil
}
return processIgnoreChangesIndividual(prior, config, ignoreChanges)
}
func processIgnoreChangesIndividual(prior, config cty.Value, ignoreChanges []hcl.Traversal) (cty.Value, tfdiags.Diagnostics) {
// When we walk below we will be using cty.Path values for comparison, so
// we'll convert our traversals here so we can compare more easily.
ignoreChangesPath := make([]cty.Path, len(ignoreChanges))
for i, traversal := range ignoreChanges {
path := make(cty.Path, len(traversal))
for si, step := range traversal {
switch ts := step.(type) {
case hcl.TraverseRoot:
path[si] = cty.GetAttrStep{
Name: ts.Name,
}
case hcl.TraverseAttr:
path[si] = cty.GetAttrStep{
Name: ts.Name,
}
case hcl.TraverseIndex:
path[si] = cty.IndexStep{
Key: ts.Key,
}
default:
panic(fmt.Sprintf("unsupported traversal step %#v", step))
}
}
ignoreChangesPath[i] = path
}
type ignoreChange struct {
// Path is the full path, minus any trailing map index
path cty.Path
// Value is the value we are to retain at the above path. If there is a
// key value, this must be a map and the desired value will be at the
// key index.
value cty.Value
// Key is the index key if the ignored path ends in a map index.
key cty.Value
}
var ignoredValues []ignoreChange
// Find the actual changes first and store them in the ignoreChange struct.
// If the change was to a map value, and the key doesn't exist in the
// config, it would never be visited in the transform walk.
for _, icPath := range ignoreChangesPath {
key := cty.NullVal(cty.String)
// check for a map index, since maps are the only structure where we
// could have invalid path steps.
last, ok := icPath[len(icPath)-1].(cty.IndexStep)
if ok {
if last.Key.Type() == cty.String {
icPath = icPath[:len(icPath)-1]
key = last.Key
}
}
// The structure should have been validated already, and we already
// trimmed the trailing map index. Any other intermediate index error
// means we wouldn't be able to apply the value below, so no need to
// record this.
p, err := icPath.Apply(prior)
if err != nil {
continue
}
c, err := icPath.Apply(config)
if err != nil {
continue
}
// If this is a map, it is checking the entire map value for equality
// rather than the individual key. This means that the change is stored
// here even if our ignored key doesn't change. That is OK since it
// won't cause any changes in the transformation, but allows us to skip
// breaking up the maps and checking for key existence here too.
eq := p.Equals(c)
if !eq.IsKnown() || eq.False() {
// there is a change to ignore at this path, so store the prior value
ignoredValues = append(ignoredValues, ignoreChange{icPath, p, key})
}
}
if len(ignoredValues) == 0 {
return config, nil
}
ret, _ := cty.Transform(config, func(path cty.Path, v cty.Value) (cty.Value, error) {
// Easy path for when we are only matching the entire value. The only
// values we break up for inspection are maps.
if !v.Type().IsMapType() {
for _, ignored := range ignoredValues {
if path.Equals(ignored.path) {
return ignored.value, nil
}
}
return v, nil
}
// We now know this must be a map, so we need to accumulate the values
// key-by-key.
if !v.IsNull() && !v.IsKnown() {
// since v is not known, we cannot ignore individual keys
return v, nil
}
// The configMap is the current configuration value, which we will
// mutate based on the ignored paths and the prior map value.
var configMap map[string]cty.Value
switch {
case v.IsNull() || v.LengthInt() == 0:
configMap = map[string]cty.Value{}
default:
configMap = v.AsValueMap()
}
for _, ignored := range ignoredValues {
if !path.Equals(ignored.path) {
continue
}
if ignored.key.IsNull() {
// The map address is confirmed to match at this point,
// so if there is no key, we want the entire map and can
// stop accumulating values.
return ignored.value, nil
}
// Now we know we are ignoring a specific index of this map, so get
// the config map and modify, add, or remove the desired key.
// We also need to create a prior map, so we can check for
// existence while getting the value, because Value.Index will
// return null for a key with a null value and for a non-existent
// key.
var priorMap map[string]cty.Value
switch {
case ignored.value.IsNull() || ignored.value.LengthInt() == 0:
priorMap = map[string]cty.Value{}
default:
priorMap = ignored.value.AsValueMap()
}
key := ignored.key.AsString()
priorElem, keep := priorMap[key]
switch {
case !keep:
// this didn't exist in the old map value, so we're keeping the
// "absence" of the key by removing it from the config
delete(configMap, key)
default:
configMap[key] = priorElem
}
}
if len(configMap) == 0 {
return cty.MapValEmpty(v.Type().ElementType()), nil
}
return cty.MapVal(configMap), nil
})
return ret, nil
}
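// Illustration only (not part of this change): an ignore_changes entry such as
//
//	lifecycle {
//	  ignore_changes = [tags["Name"]]
//	}
//
// arrives here as an hcl.Traversal and is converted above into a cty.Path
// equivalent to cty.GetAttrPath("tags").Index(cty.StringVal("Name")). Because
// that path ends in a string map index, the trailing IndexStep is trimmed and
// kept as the ignoreChange key, so only that one map element is reverted to
// its prior value during the transform.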


@ -160,7 +160,7 @@ func TestNodeAbstractResource_ReadResourceInstanceState(t *testing.T) {
ctx.ProviderSchemaSchema = mockProvider.GetSchemaReturn
ctx.ProviderProvider = providers.Interface(mockProvider)
got, err := test.Node.ReadResourceInstanceState(ctx, test.Node.Addr.Resource.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance))
got, err := test.Node.readResourceInstanceState(ctx, test.Node.Addr.Resource.Instance(addrs.NoKey).Absolute(addrs.RootModuleInstance))
if err != nil {
t.Fatalf("[%s] Got err: %#v", k, err.Error())
}


@ -7,6 +7,7 @@ import (
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/plans/objchange"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
@ -156,19 +157,17 @@ func (n *NodeApplyableResourceInstance) dataResourceExecute(ctx EvalContext) (di
// change, which signals that we expect this read to complete fully
// with no unknown values; it'll produce an error if not.
var state *states.ResourceInstanceObject
readDataApply := &evalReadDataApply{
evalReadData{
Addr: addr,
Config: n.Config,
Planned: &change,
Provider: &provider,
ProviderAddr: n.ResolvedProvider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
State: &state,
},
evalReadData := &readData{
Addr: addr,
Config: n.Config,
Planned: &change,
Provider: &provider,
ProviderAddr: n.ResolvedProvider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
State: &state,
}
diags = diags.Append(readDataApply.Eval(ctx))
diags = diags.Append(evalReadData.apply(ctx))
if diags.HasErrors() {
return diags
}
@ -179,15 +178,7 @@ func (n *NodeApplyableResourceInstance) dataResourceExecute(ctx EvalContext) (di
return diags
}
writeDiff := &EvalWriteDiff{
Addr: addr,
ProviderSchema: &providerSchema,
Change: nil,
}
diags = diags.Append(writeDiff.Eval(ctx))
if diags.HasErrors() {
return diags
}
diags = diags.Append(n.writeChange(ctx, nil, ""))
diags = diags.Append(UpdateStateHook(ctx))
return diags
@ -240,7 +231,7 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
log.Printf("[TRACE] managedResourceExecute: prior object for %s now deposed with key %s", n.Addr, deposedKey)
}
state, err = n.ReadResourceInstanceState(ctx, n.ResourceInstanceAddr())
state, err = n.readResourceInstanceState(ctx, n.ResourceInstanceAddr())
diags = diags.Append(err)
if diags.HasErrors() {
return diags
@ -255,54 +246,26 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
// Make a new diff, in case we've learned new values in the state
// during apply which we can now incorporate.
evalDiff := &EvalDiff{
Addr: addr,
Config: n.Config,
Provider: &provider,
ProviderAddr: n.ResolvedProvider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
State: &state,
PreviousDiff: &diff,
OutputChange: &diffApply,
OutputState: &state,
}
diags = diags.Append(evalDiff.Eval(ctx))
diffApply, state, planDiags := n.plan(ctx, diff, state, false)
diags = diags.Append(planDiags)
if diags.HasErrors() {
return diags
}
// Compare the diffs
checkPlannedChange := &EvalCheckPlannedChange{
Addr: addr,
ProviderAddr: n.ResolvedProvider,
ProviderSchema: &providerSchema,
Planned: &diff,
Actual: &diffApply,
}
diags = diags.Append(checkPlannedChange.Eval(ctx))
diags = diags.Append(n.checkPlannedChange(ctx, diff, diffApply, providerSchema))
if diags.HasErrors() {
return diags
}
state, err = n.ReadResourceInstanceState(ctx, n.ResourceInstanceAddr())
state, err = n.readResourceInstanceState(ctx, n.ResourceInstanceAddr())
diags = diags.Append(err)
if diags.HasErrors() {
return diags
}
reduceDiff := &EvalReduceDiff{
Addr: addr,
InChange: &diffApply,
Destroy: false,
OutChange: &diffApply,
}
diags = diags.Append(reduceDiff.Eval(ctx))
if diags.HasErrors() {
return diags
}
// EvalReduceDiff may have simplified our planned change
diffApply = reducePlan(addr, diffApply, false)
// reducePlan may have simplified our planned change
// into a NoOp if it only requires destroying, since destroying
// is handled by NodeDestroyResourceInstance.
if diffApply == nil || diffApply.Action == plans.NoOp {
@ -336,15 +299,7 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
// We clear the change out here so that future nodes don't see a change
// that is already complete.
writeDiff := &EvalWriteDiff{
Addr: addr,
ProviderSchema: &providerSchema,
Change: nil,
}
diags = diags.Append(writeDiff.Eval(ctx))
if diags.HasErrors() {
return diags
}
diags = diags.Append(n.writeChange(ctx, nil, ""))
evalMaybeTainted := &EvalMaybeTainted{
Addr: addr,
@ -428,7 +383,7 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
return diags
}
diags = diags.Append(n.PostApplyHook(ctx, state, &applyError))
diags = diags.Append(n.postApplyHook(ctx, state, &applyError))
if diags.HasErrors() {
return diags
}
@ -436,3 +391,70 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
diags = diags.Append(UpdateStateHook(ctx))
return diags
}
// checkPlannedChange produces errors if the _actual_ expected value is not
// compatible with what was recorded in the plan.
//
// Errors here are most often indicative of a bug in the provider, so our error
// messages will report with that in mind. It's also possible that there's a bug
// in Terraform Core's own "proposed new value" code in EvalDiff.
func (n *NodeApplyableResourceInstance) checkPlannedChange(ctx EvalContext, plannedChange, actualChange *plans.ResourceInstanceChange, providerSchema *ProviderSchema) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
addr := n.ResourceInstanceAddr().Resource
schema, _ := providerSchema.SchemaForResourceAddr(addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider does not support %q", addr.Resource.Type))
return diags
}
absAddr := addr.Absolute(ctx.Path())
log.Printf("[TRACE] EvalCheckPlannedChange: Verifying that actual change (action %s) matches planned change (action %s)", actualChange.Action, plannedChange.Action)
if plannedChange.Action != actualChange.Action {
switch {
case plannedChange.Action == plans.Update && actualChange.Action == plans.NoOp:
// It's okay for an update to become a NoOp once we've filled in
// all of the unknown values, since the final values might actually
// match what was there before after all.
log.Printf("[DEBUG] After incorporating new values learned so far during apply, %s change has become NoOp", absAddr)
case (plannedChange.Action == plans.CreateThenDelete && actualChange.Action == plans.DeleteThenCreate) ||
(plannedChange.Action == plans.DeleteThenCreate && actualChange.Action == plans.CreateThenDelete):
// If the order of replacement changed, then that is a bug in Terraform
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Terraform produced inconsistent final plan",
fmt.Sprintf(
"When expanding the plan for %s to include new values learned so far during apply, the planned action changed from %s to %s.\n\nThis is a bug in Terraform and should be reported.",
absAddr, plannedChange.Action, actualChange.Action,
),
))
default:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced inconsistent final plan",
fmt.Sprintf(
"When expanding the plan for %s to include new values learned so far during apply, provider %q changed the planned action from %s to %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
absAddr, n.ResolvedProvider.Provider.String(),
plannedChange.Action, actualChange.Action,
),
))
}
}
errs := objchange.AssertObjectCompatible(schema, plannedChange.After, actualChange.After)
for _, err := range errs {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced inconsistent final plan",
fmt.Sprintf(
"When expanding the plan for %s to include new values learned so far during apply, provider %q produced an invalid new value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
absAddr, n.ResolvedProvider.Provider.String(), tfdiags.FormatError(err),
),
))
}
return diags
}


@ -149,24 +149,14 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
return diags
}
evalReduceDiff := &EvalReduceDiff{
Addr: addr.Resource,
InChange: &changeApply,
Destroy: true,
OutChange: &changeApply,
}
diags = diags.Append(evalReduceDiff.Eval(ctx))
if diags.HasErrors() {
return diags
}
// EvalReduceDiff may have simplified our planned change
changeApply = reducePlan(addr.Resource, changeApply, true)
// reducePlan may have simplified our planned change
// into a NoOp if it does not require destroying.
if changeApply == nil || changeApply.Action == plans.NoOp {
return diags
}
state, err = n.ReadResourceInstanceState(ctx, addr)
state, err = n.readResourceInstanceState(ctx, addr)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
@ -198,7 +188,7 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
if provisionerErr != nil {
// If we have a provisioning error, then we just call
// the post-apply hook now.
diags = diags.Append(n.PostApplyHook(ctx, state, &provisionerErr))
diags = diags.Append(n.postApplyHook(ctx, state, &provisionerErr))
if diags.HasErrors() {
return diags
}
@ -234,7 +224,7 @@ func (n *NodeDestroyResourceInstance) Execute(ctx EvalContext, op walkOperation)
state.SetResourceInstanceCurrent(n.Addr, nil, n.ResolvedProvider)
}
diags = diags.Append(n.PostApplyHook(ctx, state, &provisionerErr))
diags = diags.Append(n.postApplyHook(ctx, state, &provisionerErr))
if diags.HasErrors() {
return diags
}


@ -66,18 +66,6 @@ func (n *NodePlanDeposedResourceInstanceObject) References() []*addrs.Reference
// GraphNodeEvalable impl.
func (n *NodePlanDeposedResourceInstanceObject) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
addr := n.ResourceInstanceAddr()
_, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
}
// During the plan walk we always produce a planned destroy change, because
// destroying is the only supported action for deposed objects.
var change *plans.ResourceInstanceChange
// Read the state for the deposed resource instance
state, err := n.ReadResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
diags = diags.Append(err)
@ -85,25 +73,13 @@ func (n *NodePlanDeposedResourceInstanceObject) Execute(ctx EvalContext, op walk
return diags
}
diffDestroy := &EvalDiffDestroy{
Addr: addr.Resource,
ProviderAddr: n.ResolvedProvider,
DeposedKey: n.DeposedKey,
State: &state,
Output: &change,
}
diags = diags.Append(diffDestroy.Eval(ctx))
change, destroyPlanDiags := n.planDestroy(ctx, state, n.DeposedKey)
diags = diags.Append(destroyPlanDiags)
if diags.HasErrors() {
return diags
}
writeDiff := &EvalWriteDiff{
Addr: addr.Resource,
DeposedKey: n.DeposedKey,
ProviderSchema: &providerSchema,
Change: &change,
}
diags = diags.Append(writeDiff.Eval(ctx))
diags = diags.Append(n.writeChange(ctx, change, n.DeposedKey))
return diags
}
@ -192,13 +168,8 @@ func (n *NodeDestroyDeposedResourceInstanceObject) Execute(ctx EvalContext, op w
return diags
}
diffDestroy := &EvalDiffDestroy{
Addr: addr,
ProviderAddr: n.ResolvedProvider,
State: &state,
Output: &change,
}
diags = diags.Append(diffDestroy.Eval(ctx))
change, destroyPlanDiags := n.planDestroy(ctx, state, n.DeposedKey)
diags = diags.Append(destroyPlanDiags)
if diags.HasErrors() {
return diags
}
@ -233,7 +204,7 @@ func (n *NodeDestroyDeposedResourceInstanceObject) Execute(ctx EvalContext, op w
return diags
}
diags = diags.Append(n.PostApplyHook(ctx, state, &applyError))
diags = diags.Append(n.postApplyHook(ctx, state, &applyError))
if diags.HasErrors() {
return diags
}


@ -42,25 +42,14 @@ func (n *NodePlanDestroyableResourceInstance) Execute(ctx EvalContext, op walkOp
var change *plans.ResourceInstanceChange
var state *states.ResourceInstanceObject
_, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
state, err := n.readResourceInstanceState(ctx, addr)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
}
state, err = n.ReadResourceInstanceState(ctx, addr)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
}
diffDestroy := &EvalDiffDestroy{
Addr: addr.Resource,
ProviderAddr: n.ResolvedProvider,
State: &state,
Output: &change,
}
diags = diags.Append(diffDestroy.Eval(ctx))
change, destroyPlanDiags := n.planDestroy(ctx, state, "")
diags = diags.Append(destroyPlanDiags)
if diags.HasErrors() {
return diags
}
@ -70,11 +59,6 @@ func (n *NodePlanDestroyableResourceInstance) Execute(ctx EvalContext, op walkOp
return diags
}
writeDiff := &EvalWriteDiff{
Addr: addr.Resource,
ProviderSchema: &providerSchema,
Change: &change,
}
diags = diags.Append(writeDiff.Eval(ctx))
diags = diags.Append(n.writeChange(ctx, change, ""))
return diags
}


@ -59,7 +59,7 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
return diags
}
state, err = n.ReadResourceInstanceState(ctx, addr)
state, err = n.readResourceInstanceState(ctx, addr)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
@ -75,20 +75,18 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
return diags
}
readDataPlan := &evalReadDataPlan{
evalReadData: evalReadData{
Addr: addr.Resource,
Config: n.Config,
Provider: &provider,
ProviderAddr: n.ResolvedProvider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
OutputChange: &change,
State: &state,
dependsOn: n.dependsOn,
},
evalReadData := &readData{
Addr: addr.Resource,
Config: n.Config,
Provider: &provider,
ProviderAddr: n.ResolvedProvider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
OutputChange: &change,
State: &state,
dependsOn: n.dependsOn,
}
diags = diags.Append(readDataPlan.Eval(ctx))
diags = diags.Append(evalReadData.plan(ctx))
if diags.HasErrors() {
return diags
}
@ -104,12 +102,7 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
return diags
}
writeDiff := &EvalWriteDiff{
Addr: addr.Resource,
ProviderSchema: &providerSchema,
Change: &change,
}
diags = diags.Append(writeDiff.Eval(ctx))
diags = diags.Append(n.writeChange(ctx, change, ""))
return diags
}
@ -121,7 +114,7 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
var instanceRefreshState *states.ResourceInstanceObject
var instancePlanState *states.ResourceInstanceObject
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
_, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
@ -137,7 +130,7 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
return diags
}
instanceRefreshState, err = n.ReadResourceInstanceState(ctx, addr)
instanceRefreshState, err = n.readResourceInstanceState(ctx, addr)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
@ -155,16 +148,8 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
// Refresh, maybe
if !n.skipRefresh {
refresh := &EvalRefresh{
Addr: addr.Resource,
ProviderAddr: n.ResolvedProvider,
Provider: &provider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
State: &instanceRefreshState,
Output: &instanceRefreshState,
}
diags := diags.Append(refresh.Eval(ctx))
instanceRefreshState, refreshDiags := n.refresh(ctx, instanceRefreshState)
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
return diags
}
@ -176,19 +161,8 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
}
// Plan the instance
diff := &EvalDiff{
Addr: addr.Resource,
Config: n.Config,
CreateBeforeDestroy: n.ForceCreateBeforeDestroy,
Provider: &provider,
ProviderAddr: n.ResolvedProvider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
State: &instanceRefreshState,
OutputChange: &change,
OutputState: &instancePlanState,
}
diags = diags.Append(diff.Eval(ctx))
change, instancePlanState, planDiags := n.plan(ctx, change, instanceRefreshState, n.ForceCreateBeforeDestroy)
diags = diags.Append(planDiags)
if diags.HasErrors() {
return diags
}
@ -203,11 +177,6 @@ func (n *NodePlannableResourceInstance) managedResourceExecute(ctx EvalContext)
return diags
}
writeDiff := &EvalWriteDiff{
Addr: addr.Resource,
ProviderSchema: &providerSchema,
Change: &change,
}
diags = diags.Append(writeDiff.Eval(ctx))
diags = diags.Append(n.writeChange(ctx, change, ""))
return diags
}


@ -63,14 +63,9 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
// evaluation. These are written to by-address below.
var change *plans.ResourceInstanceChange
var state *states.ResourceInstanceObject
var err error
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
}
state, err = n.ReadResourceInstanceState(ctx, addr)
state, err = n.readResourceInstanceState(ctx, addr)
diags = diags.Append(err)
if diags.HasErrors() {
return diags
@ -83,16 +78,8 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
// plan before apply, and may not handle a missing resource during
// Delete correctly. If this is a simple refresh, Terraform is
// expected to remove the missing resource from the state entirely
refresh := &EvalRefresh{
Addr: addr.Resource,
ProviderAddr: n.ResolvedProvider,
Provider: &provider,
ProviderMetas: n.ProviderMetas,
ProviderSchema: &providerSchema,
State: &state,
Output: &state,
}
diags = diags.Append(refresh.Eval(ctx))
state, refreshDiags := n.refresh(ctx, state)
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
return diags
}
@ -103,14 +90,8 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
}
}
diffDestroy := &EvalDiffDestroy{
Addr: addr.Resource,
State: &state,
ProviderAddr: n.ResolvedProvider,
Output: &change,
OutputState: &state, // Will point to a nil state after this complete, signalling destroyed
}
diags = diags.Append(diffDestroy.Eval(ctx))
change, destroyPlanDiags := n.planDestroy(ctx, state, "")
diags = diags.Append(destroyPlanDiags)
if diags.HasErrors() {
return diags
}
@ -120,16 +101,11 @@ func (n *NodePlannableResourceInstanceOrphan) managedResourceExecute(ctx EvalCon
return diags
}
writeDiff := &EvalWriteDiff{
Addr: addr.Resource,
ProviderSchema: &providerSchema,
Change: &change,
}
diags = diags.Append(writeDiff.Eval(ctx))
diags = diags.Append(n.writeChange(ctx, change, ""))
if diags.HasErrors() {
return diags
}
diags = diags.Append(n.writeResourceInstanceState(ctx, state, n.Dependencies, workingState))
diags = diags.Append(n.writeResourceInstanceState(ctx, nil, n.Dependencies, workingState))
return diags
}

terraform/read_data.go (new file, 414 lines added)

@ -0,0 +1,414 @@
package terraform
import (
"fmt"
"log"
"strings"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/configs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/plans/objchange"
"github.com/hashicorp/terraform/providers"
"github.com/hashicorp/terraform/states"
"github.com/hashicorp/terraform/tfdiags"
)
// readData implements shared methods and data for the individual data
// source eval nodes.
type readData struct {
Addr addrs.ResourceInstance
Config *configs.Resource
Provider *providers.Interface
ProviderAddr addrs.AbsProviderConfig
ProviderMetas map[addrs.Provider]*configs.ProviderMeta
ProviderSchema **ProviderSchema
// Planned is set when dealing with data resources that were deferred to
// the apply walk, to let us see what was planned. If this is set, the
// evaluation of the config is required to produce a wholly-known
// configuration which is consistent with the partial object included
// in this planned change.
Planned **plans.ResourceInstanceChange
// State is the current state for the data source, and is updated once the
// new state has been read.
// While data sources are read-only, we need to start with the prior state
// to determine if we have a change or not. If we needed to read a new
// value, but it still matches the previous state, then we can record a
// NoOp change. If the states don't match then we record a Read change so
// that the new value is applied to the state.
State **states.ResourceInstanceObject
// OutputChange records any change for this data source, which is
// interpreted differently than changes for managed resources.
// - During Refresh, this change is only used to correctly evaluate
// references to the data source, but it is not saved.
// - If a planned change has the action of plans.Read, it indicates that the
// data source could not be evaluated yet, and reading is being deferred to
// apply.
// - If planned action is plans.Update, it indicates that the data source
// was read, and the result needs to be stored in state during apply.
OutputChange **plans.ResourceInstanceChange
// dependsOn stores the list of transitive resource addresses that any
// configuration depends_on references may resolve to. This is used to
// determine if there are any changes that will force this data source to
// be deferred to apply.
dependsOn []addrs.ConfigResource
}
// readDataSource handles everything needed to call ReadDataSource on the provider.
// A previously evaluated configVal can be passed in, or a new one is generated
// from the resource configuration.
func (n *readData) readDataSource(ctx EvalContext, configVal cty.Value) (cty.Value, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
var newVal cty.Value
config := *n.Config
absAddr := n.Addr.Absolute(ctx.Path())
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
return newVal, diags
}
provider := *n.Provider
providerSchema := *n.ProviderSchema
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
return newVal, diags
}
metaConfigVal, metaDiags := n.providerMetas(ctx)
diags = diags.Append(metaDiags)
if diags.HasErrors() {
return newVal, diags
}
log.Printf("[TRACE] readDataSource: Re-validating config for %s", absAddr)
validateResp := provider.ValidateDataSourceConfig(
providers.ValidateDataSourceConfigRequest{
TypeName: n.Addr.Resource.Type,
Config: configVal,
},
)
if validateResp.Diagnostics.HasErrors() {
return newVal, validateResp.Diagnostics.InConfigBody(config.Config)
}
// If we get down here then our configuration is complete and we're ready
// to actually call the provider to read the data.
log.Printf("[TRACE] readDataSource: %s configuration is complete, so reading from provider", absAddr)
resp := provider.ReadDataSource(providers.ReadDataSourceRequest{
TypeName: n.Addr.Resource.Type,
Config: configVal,
ProviderMeta: metaConfigVal,
})
diags = diags.Append(resp.Diagnostics.InConfigBody(config.Config))
if diags.HasErrors() {
return newVal, diags
}
newVal = resp.State
if newVal == cty.NilVal {
// This can happen with incompletely-configured mocks. We'll allow it
// and treat it as an alias for a properly-typed null value.
newVal = cty.NullVal(schema.ImpliedType())
}
for _, err := range newVal.Type().TestConformance(schema.ImpliedType()) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid object",
fmt.Sprintf(
"Provider %q produced an invalid value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), tfdiags.FormatErrorPrefixed(err, absAddr.String()),
),
))
}
if diags.HasErrors() {
return newVal, diags
}
if newVal.IsNull() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced null object",
fmt.Sprintf(
"Provider %q produced a null value for %s.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr,
),
))
}
if !newVal.IsNull() && !newVal.IsWhollyKnown() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Provider produced invalid object",
fmt.Sprintf(
"Provider %q produced a value for %s that is not wholly known.\n\nThis is a bug in the provider, which should be reported in the provider's own issue tracker.",
n.ProviderAddr.Provider.String(), absAddr,
),
))
// We'll still save the object, but we need to eliminate any unknown
// values first because we can't serialize them in the state file.
// Note that this may cause set elements to be coalesced if they
// differed only by having unknown values, but we don't worry about
// that here because we're saving the value only for inspection
// purposes; the error we added above will halt the graph walk.
newVal = cty.UnknownAsNull(newVal)
}
return newVal, diags
}
func (n *readData) providerMetas(ctx EvalContext) (cty.Value, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
metaConfigVal := cty.NullVal(cty.DynamicPseudoType)
if n.ProviderMetas != nil {
if m, ok := n.ProviderMetas[n.ProviderAddr.Provider]; ok && m != nil {
// if the provider doesn't support this feature, throw an error
if (*n.ProviderSchema).ProviderMeta == nil {
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: fmt.Sprintf("Provider %s doesn't support provider_meta", n.ProviderAddr.Provider.String()),
Detail: fmt.Sprintf("The resource %s belongs to a provider that doesn't support provider_meta blocks", n.Addr),
Subject: &m.ProviderRange,
})
} else {
var configDiags tfdiags.Diagnostics
metaConfigVal, _, configDiags = ctx.EvaluateBlock(m.Config, (*n.ProviderSchema).ProviderMeta, nil, EvalDataForNoInstanceKey)
diags = diags.Append(configDiags)
}
}
}
return metaConfigVal, diags
}
// plan deals with the main part of the data resource lifecycle: either actually
// reading from the data source or generating a plan to do so.
func (n *readData) plan(ctx EvalContext) tfdiags.Diagnostics {
absAddr := n.Addr.Absolute(ctx.Path())
var diags tfdiags.Diagnostics
var configVal cty.Value
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
return diags
}
config := *n.Config
providerSchema := *n.ProviderSchema
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
return diags
}
objTy := schema.ImpliedType()
priorVal := cty.NullVal(objTy)
if n.State != nil && *n.State != nil {
priorVal = (*n.State).Value
}
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
var configDiags tfdiags.Diagnostics
configVal, _, configDiags = ctx.EvaluateBlock(config.Config, schema, nil, keyData)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
configKnown := configVal.IsWhollyKnown()
// If our configuration contains any unknown values, or we depend on any
// unknown values then we must defer the read to the apply phase by
// producing a "Read" change for this resource, and a placeholder value for
// it in the state.
if n.forcePlanRead(ctx) || !configKnown {
if configKnown {
log.Printf("[TRACE] readData.Plan: %s configuration is fully known, but we're forcing a read plan to be created", absAddr)
} else {
log.Printf("[TRACE] readData.Plan: %s configuration not fully known yet, so deferring to apply phase", absAddr)
}
proposedNewVal := objchange.PlannedDataResourceObject(schema, configVal)
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreDiff(absAddr, states.CurrentGen, priorVal, proposedNewVal)
}))
if diags.HasErrors() {
return diags
}
// Apply will detect that the data source still needs to be read because the
// After value contains unknowns produced by PlannedDataResourceObject.
*n.OutputChange = &plans.ResourceInstanceChange{
Addr: absAddr,
ProviderAddr: n.ProviderAddr,
Change: plans.Change{
Action: plans.Read,
Before: priorVal,
After: proposedNewVal,
},
}
*n.State = &states.ResourceInstanceObject{
Value: proposedNewVal,
Status: states.ObjectPlanned,
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(absAddr, states.CurrentGen, plans.Read, priorVal, proposedNewVal)
}))
return diags
}
// We have a complete configuration with no dependencies to wait on, so we
// can read the data source into the state.
newVal, readDiags := n.readDataSource(ctx, configVal)
diags = diags.Append(readDiags)
if diags.HasErrors() {
return diags
}
// if we have a prior value, we can check for any irregularities in the response
if !priorVal.IsNull() {
// While we don't propose planned changes for data sources, we can
// generate a proposed value for comparison to ensure the data source
// is returning a result following the rules of the provider contract.
proposedVal := objchange.ProposedNewObject(schema, priorVal, configVal)
if errs := objchange.AssertObjectCompatible(schema, proposedVal, newVal); len(errs) > 0 {
// Resources have the LegacyTypeSystem field to signal when they are
// using an SDK which may not produce precise values. While data
// sources are read-only, they can still return a value which is not
// compatible with the config+schema. Since we can't detect the legacy
// type system, we can only warn about this for now.
var buf strings.Builder
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s.",
n.ProviderAddr.Provider.String(), absAddr)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
}
}
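// Save the freshly-read value as the current object for this data instance;
// data sources go straight to the ready status once read.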
*n.State = &states.ResourceInstanceObject{
Value: newVal,
Status: states.ObjectReady,
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostDiff(absAddr, states.CurrentGen, plans.Update, priorVal, newVal)
}))
return diags
}
// forcePlanRead determines if we need to override the usual behavior of
// immediately reading from the data source where possible, instead forcing us
// to generate a plan.
func (n *readData) forcePlanRead(ctx EvalContext) bool {
// Check and see if any depends_on dependencies have
// changes, since they won't show up as changes in the
// configuration.
changes := ctx.Changes()
for _, d := range n.dependsOn {
if d.Resource.Mode == addrs.DataResourceMode {
// Data sources have no external side effects, so on their own they don't
// pose a need to delay this read. If one does have a change planned, it
// must be because of a dependency on a managed resource, in which case
// we'll also encounter that resource in this list of dependencies.
continue
}
for _, change := range changes.GetChangesForConfigResource(d) {
if change != nil && change.Action != plans.NoOp {
return true
}
}
}
return false
}
// apply deals with the apply portion of the data resource lifecycle:
// performing the actual read from the data source using the change planned
// during the plan phase.
func (n *readData) apply(ctx EvalContext) tfdiags.Diagnostics {
absAddr := n.Addr.Absolute(ctx.Path())
var diags tfdiags.Diagnostics
var planned *plans.ResourceInstanceChange
if n.Planned != nil {
planned = *n.Planned
}
if n.ProviderSchema == nil || *n.ProviderSchema == nil {
diags = diags.Append(fmt.Errorf("provider schema not available for %s", n.Addr))
return diags
}
if planned != nil && planned.Action != plans.Read {
// If any other action gets in here then that's always a bug; this
// EvalNode only deals with reading.
diags = diags.Append(fmt.Errorf(
"invalid action %s for %s: only Read is supported (this is a bug in Terraform; please report it!)",
planned.Action, absAddr,
))
return diags
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PreApply(absAddr, states.CurrentGen, planned.Action, planned.Before, planned.After)
}))
if diags.HasErrors() {
return diags
}
config := *n.Config
providerSchema := *n.ProviderSchema
schema, _ := providerSchema.SchemaForResourceAddr(n.Addr.ContainingResource())
if schema == nil {
// Should be caught during validation, so we don't bother with a pretty error here
diags = diags.Append(fmt.Errorf("provider %q does not support data source %q", n.ProviderAddr.Provider.String(), n.Addr.Resource.Type))
return diags
}
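// Re-evaluate the configuration now that upstream changes have been applied;
// values that were unknown at plan time are expected to be known here.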
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
keyData := EvalDataForInstanceKey(n.Addr.Key, forEach)
configVal, _, configDiags := ctx.EvaluateBlock(config.Config, schema, nil, keyData)
diags = diags.Append(configDiags)
if configDiags.HasErrors() {
return diags
}
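// Perform the actual read against the provider using the fully-evaluated
// configuration.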
newVal, readDiags := n.readDataSource(ctx, configVal)
diags = diags.Append(readDiags)
if diags.HasErrors() {
return diags
}
*n.State = &states.ResourceInstanceObject{
Value: newVal,
Status: states.ObjectReady,
}
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostApply(absAddr, states.CurrentGen, newVal, diags.Err())
}))
return diags
}

terraform/reduce_plan.go (new file)
@ -0,0 +1,32 @@
package terraform
import (
"log"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/plans"
)
// reducePlan takes a planned resource instance change as might be produced by
// Plan or PlanDestroy and "simplifies" it to a single atomic action to be
// performed by a specific graph node.
//
// Callers must specify whether they are a destroy node or a regular apply node.
// If the result is NoOp then the given change requires no action for the
// specific graph node calling this and so evaluation of that graph node
// should exit early and take no action.
//
// The returned object may either be identical to the input change or a new
// change object derived from the input. Because of the former case, the caller
// must not mutate the returned change object.
func reducePlan(addr addrs.ResourceInstance, in *plans.ResourceInstanceChange, destroy bool) *plans.ResourceInstanceChange {
out := in.Simplify(destroy)
if out.Action != in.Action {
if destroy {
log.Printf("[TRACE] reducePlan: %s change simplified from %s to %s for destroy node", addr, in.Action, out.Action)
} else {
log.Printf("[TRACE] reducePlan: %s change simplified from %s to %s for apply node", addr, in.Action, out.Action)
}
}
return out
}
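// Example (illustrative sketch, not part of this change): how a calling graph
// node is expected to use reducePlan, per the doc comment above. The function
// name and parameters here are assumptions made for illustration only.
func exampleReduceForApplyNode(addr addrs.ResourceInstance, planned *plans.ResourceInstanceChange) *plans.ResourceInstanceChange {
	// This caller is a regular apply node, not a destroy node.
	simplified := reducePlan(addr, planned, false)
	if simplified.Action == plans.NoOp {
		// Nothing for this node to do; a real caller would return early here.
		return nil
	}
	return simplified
}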

@ -267,53 +267,20 @@ func (n *graphNodeImportStateSub) Execute(ctx EvalContext, op walkOperation) (di
}
state := n.State.AsInstanceObject()
provider, providerSchema, err := GetProvider(ctx, n.ResolvedProvider)
diags = diags.Append(err)
// Refresh the imported object using a NodeAbstractResourceInstance so the
// standard refresh and state-writing logic can be reused.
riNode := &NodeAbstractResourceInstance{
Addr: n.TargetAddr,
NodeAbstractResource: NodeAbstractResource{
ResolvedProvider: n.ResolvedProvider,
},
}
state, refreshDiags := riNode.refresh(ctx, state)
diags = diags.Append(refreshDiags)
if diags.HasErrors() {
return diags
}
// EvalRefresh
evalRefresh := &EvalRefresh{
Addr: n.TargetAddr.Resource,
ProviderAddr: n.ResolvedProvider,
Provider: &provider,
ProviderSchema: &providerSchema,
State: &state,
Output: &state,
}
diags = diags.Append(evalRefresh.Eval(ctx))
if diags.HasErrors() {
return diags
}
// Verify the existence of the imported resource
if state.Value.IsNull() {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Cannot import non-existent remote object",
fmt.Sprintf(
"While attempting to import an existing object to %s, the provider detected that no object exists with the given id. Only pre-existing objects can be imported; check that the id is correct and that it is associated with the provider's configured region or endpoint, or use \"terraform apply\" to create a new remote object for this resource.",
n.TargetAddr.Resource.String(),
),
))
return diags
}
schema, currentVersion := providerSchema.SchemaForResourceAddr(n.TargetAddr.ContainingResource().Resource)
if schema == nil {
// It shouldn't be possible to get this far in any real scenario
// without a schema, but we might end up here in contrived tests that
// fail to set up their world properly.
diags = diags.Append(fmt.Errorf("failed to encode %s in state: no resource type schema available", n.TargetAddr.Resource))
return diags
}
src, err := state.Encode(schema.ImpliedType(), currentVersion)
if err != nil {
diags = diags.Append(fmt.Errorf("failed to encode %s in state: %s", n.TargetAddr.Resource, err))
return diags
}
ctx.State().SetResourceInstanceCurrent(n.TargetAddr, src, n.ResolvedProvider)
diags = diags.Append(riNode.writeResourceInstanceState(ctx, state, nil, workingState))
return diags
}