The state is not loaded here with any marks, so we cannot rely on marks
alone for equality comparison. Compare both the state and the
configuration sensitivity before creating the OutputChange.
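A minimal sketch of that comparison, assuming the go-cty API; the helper
name and the boolean sensitivity flags here are illustrative, not
Terraform's actual code:

    package main

    import "github.com/zclconf/go-cty/cty"

    // outputUnchanged compares a stored output against its newly evaluated
    // value. The stored state value carries no marks, so strip marks from
    // the evaluated value and compare the configured sensitivity separately.
    func outputUnchanged(stored, evaluated cty.Value, storedSensitive, configSensitive bool) bool {
        unmarked, _ := evaluated.UnmarkDeep()
        return stored.RawEquals(unmarked) && storedSensitive == configSensitive
    }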
Since root outputs can now use the planned changes, we can directly
insert the correct applyable or destroyable node into the graph during
plan and apply, and it will remove the outputs if they are being
destroyed.
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
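A minimal sketch of that conversion behavior, assuming the experimental
go-cty API of the time (cty.ObjectWithOptionalAttrs and the convert
package):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    func main() {
        // An object type constraint whose "tags" attribute is marked optional.
        want := cty.ObjectWithOptionalAttrs(map[string]cty.Type{
            "name": cty.String,
            "tags": cty.Map(cty.String),
        }, []string{"tags"})

        // The source value omits the optional attribute entirely.
        given := cty.ObjectVal(map[string]cty.Value{
            "name": cty.StringVal("example"),
        })

        // Conversion succeeds, substituting a null map of string for the
        // missing optional attribute instead of returning an error.
        got, err := convert.Convert(given, want)
        fmt.Println(got, err)
    }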
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
We record output changes in the plan, but don't currently use them for
anything other than display. If we have a wholly known output value
stored in the plan, we should prefer that for apply in order to ensure
consistency with the planned values. This also avoids cases where
evaluation during apply cannot happen correctly, like when all resources
are being removed or we are executing a destroy.
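A rough sketch of that preference (illustrative names only, not the real
apply code path):

    package main

    import "github.com/zclconf/go-cty/cty"

    // outputValueForApply prefers the value recorded in the plan when it is
    // wholly known, keeping apply consistent with the planned values; it
    // falls back to re-evaluating the output expression only when the
    // planned value still contains unknowns.
    func outputValueForApply(planned cty.Value, evaluate func() (cty.Value, error)) (cty.Value, error) {
        if planned.IsWhollyKnown() {
            return planned, nil
        }
        return evaluate()
    }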
We also need to record output Delete changes when the plan is for a
destroy operation. Otherwise, without a change, the apply step will
attempt to evaluate the outputs, causing errors or leaving them in the
state with stale values.
Replace the old mock provider test functions with modern equivalents.
There were a lot of inconsistencies in how they were used, so we needed
to update a lot of tests to match the correct behavior.
If only the sensitivity changes, we have an update plan, but we should
avoid communicating with the provider during apply, since the values are
equal (and it would otherwise be a NoOp plan).
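A hedged sketch of that decision; the action strings and the use of
ContainsMarked as a stand-in for a full sensitivity comparison are
simplifications, not Terraform's plans package:

    package main

    import "github.com/zclconf/go-cty/cty"

    // planAction: when only the sensitivity marks differ, record an update
    // so the new sensitivity is written to state, but note that the provider
    // does not need to be called during apply because the underlying values
    // are equal (it would otherwise be a NoOp).
    func planAction(prior, planned cty.Value) string {
        priorVal, _ := prior.UnmarkDeep()
        plannedVal, _ := planned.UnmarkDeep()

        switch {
        case !priorVal.RawEquals(plannedVal):
            return "Update"
        case prior.ContainsMarked() != planned.ContainsMarked():
            return "Update (state only; skip the provider on apply)"
        default:
            return "NoOp"
        }
    }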
* remove unused code
I've removed the provider-specific code under registry and an unused nil
backend, and replaced a call to a helper from backend/oss (the other
callers of that function are provisioners scheduled for deprecation).
I also removed the Dockerfile, as our build process uses a different
file.
Finally I removed the examples directory, which had outdated examples
and links. There are better, actively maintained examples available.
* command: remove various unused bits
* test wasn't running
* backend: remove unused err
When the sensitivity has changed, we want to write to
state and also display to the user that the sensitivity
has changed, so make this an update action. Also
clean up some assignments; since Contains traverses the value
anyway, this is a little tighter.
Audit the graph builder to remove unused transformers (planning does
not need to close provisioners, for example) and re-order them. While
many of the transformations are commutative, using the same order
ensures the same behavior between operations when the commutative
property is lost or changed.
Outputs were not being evaluated during import, because they were not
added to the walk filter.
Remove any unnecessary walk filters from all the Execute nodes.
We do not currently need to evaluate module variables in order to
import a resource.
This will likely change once we can select the import provider
automatically, and have a more dynamic method for dispatching providers
to module instances. In the meantime we can avoid the evaluation and
prevent a certain class of import errors.
The change was passed into the provisioner node because the normal
NodeApplyableResourceInstance overwrites the prior state with the new
state. This however doesn't matter here, because the resource destroy
node does not do this. Also, even if the updated state were to be used
for some reason with a create provisioner, it would be the correct state
to use at that point.
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.
Here we move it over into the "internal" directory just to keep it clear
of the guidance against using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
Remove marks for the object compatibility tests to allow apply
to continue. Add a block to the test provider for use
in testing, and extend the sensitivity apply test to include a block.
The test for this behavior did not work, because the old mock diff
function does not work correctly. Write a PlanResourceChange function to
return a correct plan.
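A minimal sketch of what such a plan function does, using simplified
stand-in types rather than Terraform's real providers package:

    package main

    import "github.com/zclconf/go-cty/cty"

    // planRequest and planResponse are simplified stand-ins for the mock
    // provider's plan types.
    type planRequest struct {
        PriorState       cty.Value
        ProposedNewState cty.Value
    }

    type planResponse struct {
        PlannedState cty.Value
    }

    // planResourceChange returns a plan consistent with what a real provider
    // would produce: the proposed new state becomes the planned state, rather
    // than the ad-hoc result the old mock diff function computed.
    func planResourceChange(req planRequest) planResponse {
        return planResponse{PlannedState: req.ProposedNewState}
    }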
Allow the evaluation of resources pending deletion only during a full
destroy. With this change we can ensure deposed instances are not
evaluated under normal circumstances, but can be referenced when needed.
This also allows us to remove the fixup transformer that added
connections so temporary values would evaluate in the correct order when
referencing destroy nodes.
In the majority of cases, we do not want to evaluate resources that are
pending deletion, since configuration references can only refer to
resources that are intended to be managed by the configuration. An
exception to that rule is when Terraform is performing a full `destroy`
operation, and providers need to evaluate existing resources for their
configuration.
The loading of the initial instance state was inadvertently skipped when
-refresh=false, causing all resources to appear to be missing from the
state during plan.
If a data source refers to an indexed managed resource, we need to
re-target that reference to the containing resource for planning. Since
data sources use the same mechanism as depends_on for managed resource
references, they can only refer to resources as a whole.
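Illustrative sketch only, with simplified stand-ins for Terraform's
address types: the instance index is dropped so the dependency points at
the resource as a whole:

    package main

    // resource and resourceInstance are simplified stand-ins for Terraform's
    // addrs package types.
    type resource struct {
        Mode string // "managed" or "data"
        Type string
        Name string
    }

    type resourceInstance struct {
        Resource resource
        Index    int
    }

    // containingResource widens an instance reference such as
    // aws_instance.a[0] to the whole resource aws_instance.a, which is the
    // granularity at which data source dependencies are recorded for planning.
    func containingResource(ref resourceInstance) resource {
        return ref.Resource
    }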
There are situations when a user may want to keep or exclude a map key
using `ignore_changes` which may not be listed directly in the
configuration. This didn't work previously because the transformation
always started off with the configuration, and would never encounter a
key if it was only present in the prior value.
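A rough sketch of the idea using plain Go maps; the real implementation
works on cty paths, and these names are illustrative only:

    package main

    // applyIgnoreChanges starts from the configuration but defers every
    // ignored key to the prior value, so a key listed in ignore_changes is
    // kept even when it appears only in the prior value and not in the
    // configuration.
    func applyIgnoreChanges(config, prior map[string]string, ignored map[string]bool) map[string]string {
        result := make(map[string]string, len(config))
        for k, v := range config {
            if !ignored[k] {
                result[k] = v
            }
        }
        for k, v := range prior {
            if ignored[k] {
                result[k] = v
            }
        }
        return result
    }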
* Split node_resource_abstract.go into two files, putting
NodeAbstractResourceInstance methods in their own file - it was getting
large enough to be tricky for (my) human eyeballs.
* un-exported the functions that were created as part of the EvalTree()
refactor; they did not need to be public.