This is a pretty basic attempt to turn a pair of values into an old-school
diff. It probably won't work correctly for all tests, but hopefully works
well enough that we can just update the remaining tests in-place to use
the new API directly.
We now handle impure functions by having them return an unknown value
during plan, since we can't predict what the value will be during apply.
This test was assuming the old behavior.
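As a minimal sketch of the behavior the test now needs to expect, the following shows how an impure function can be wrapped with cty's function.Unpredictable helper so that it returns an unknown value; whether this exact helper is the mechanism used here is an assumption.

```go
package main

import (
	"fmt"
	"time"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

func main() {
	// An impure function: its result depends on the current time, so it
	// cannot be predicted during plan.
	timestampFunc := function.New(&function.Spec{
		Type: function.StaticReturnType(cty.String),
		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
			return cty.StringVal(time.Now().UTC().Format(time.RFC3339)), nil
		},
	})

	// Wrapping with function.Unpredictable keeps the same signature but
	// makes every call return an unknown value, which is the plan-time
	// behavior described above.
	planTimeFunc := function.Unpredictable(timestampFunc)

	applyResult, _ := timestampFunc.Call(nil)
	planResult, _ := planTimeFunc.Call(nil)

	fmt.Println(applyResult.IsKnown()) // true: a concrete timestamp
	fmt.Println(planResult.IsKnown())  // false: unknown until apply
}
```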
The provider is allowed to return a partial result if it also includes
error diagnostics. Real providers still return at least a null value in
that case due to the RPC format, but test mocks are often sloppier.
Since the refresh walk creates a partial plan to account for objects that
are yet to be created, we need to provide at least a basic mock of the
PlanProviderChange provider method.
For now we're using the old-style "DiffFn" shim interface since that's
already available for use in other tests.
It's not possible for a normal RPC-based provider to get into this
situation because a nil value can't go over the wire, but it's easy to
cause this by not correctly configuring a provider mock during tests.
By panicking early here we produce a more helpful error message and stack
trace than we'd otherwise produce if we let this nil value escape out
into the rest of Terraform.
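A hypothetical guard illustrating the idea; the function name and message here are invented for the sketch, not the actual code:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkProviderValue is a hypothetical guard, not the real code: if a
// provider (in practice, a misconfigured test mock) hands back the zero
// cty.Value, fail loudly right away instead of letting it propagate.
func checkProviderValue(providerName string, v cty.Value) cty.Value {
	if v == cty.NilVal {
		// Panicking here yields a stack trace that points at the faulty
		// mock, rather than a confusing failure deep inside Terraform.
		panic(fmt.Sprintf("provider %q returned a nil value, which is always a bug in the provider or its mock", providerName))
	}
	return v
}

func main() {
	ok := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("i-abc123")})
	fmt.Println(checkProviderValue("test", ok).GoString())
}
```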
Significant changes to the provider interface left a lot of the
tests in a non-buildable state. This set of changes gets the
tests building again but does not attempt to make them run to
completion or pass.
After this commit, it is possible to build a test program for
the ./terraform package but it will panic during its run. That
will be addressed in subsequent commits.
Since we do our deletes using a separate graph node from all of the other
actions, and a "Replace" change implies both a delete _and_ a create, we
need to pretend at apply time that a single replace change was actually
two separate changes.
This also makes eval exit early if a destroy node finds a non-Delete change
or if an apply node finds a Delete change. These should not happen in
practice because we leave these nodes out of the graph when they are not
needed for the given action, but we do this here for robustness so as not
to have an invisible dependency between the graph builder and the eval
phase.
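The following self-contained sketch, using invented action names rather than the real plans package constants, illustrates how a single replace change can present as two different actions depending on which node is asking, along with the defensive early exit:

```go
package main

import "fmt"

// Action is a simplified stand-in for Terraform's plan action type; the
// names and values here are invented for this sketch.
type Action string

const (
	Create  Action = "create"
	Update  Action = "update"
	Delete  Action = "delete"
	Replace Action = "replace"
)

// effectiveAction shows the idea: a destroy node sees a Replace change as a
// Delete, and an apply node sees it as a Create, so one planned change
// drives both graph nodes. A node that finds a change it should never
// handle simply reports "don't proceed".
func effectiveAction(planned Action, destroyNode bool) (Action, bool) {
	switch {
	case planned == Replace && destroyNode:
		return Delete, true
	case planned == Replace && !destroyNode:
		return Create, true
	case destroyNode && planned != Delete:
		return planned, false // destroy node with a non-Delete change: skip
	case !destroyNode && planned == Delete:
		return planned, false // apply node with a Delete change: skip
	default:
		return planned, true
	}
}

func main() {
	for _, destroyNode := range []bool{true, false} {
		action, proceed := effectiveAction(Replace, destroyNode)
		fmt.Printf("destroyNode=%v -> action=%s proceed=%v\n", destroyNode, action, proceed)
	}
}
```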
When we're working on a create or destroy change it's expected for one of
the values to be null. Here we mimic the pre-0.12 behavior of producing
just an empty map in that case, which the helper/schema code (now the only
caller of this shim) then ignores completely.
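A rough sketch of that shim behavior, with a hypothetical helper name and only string attributes handled, just to show the null-to-empty-map case:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// flatmapFromObject is a hypothetical simplification of the shim described
// above: a null value (the "before" of a create, or the "after" of a
// destroy) becomes just an empty map, which the helper/schema caller then
// ignores. Only string attributes are handled here; the real shim must
// flatten numbers, bools, and nested collections too.
func flatmapFromObject(v cty.Value) map[string]string {
	if v.IsNull() {
		return map[string]string{}
	}
	m := map[string]string{}
	for it := v.ElementIterator(); it.Next(); {
		k, av := it.Element()
		if av.Type().Equals(cty.String) && !av.IsNull() {
			m[k.AsString()] = av.AsString()
		}
	}
	return m
}

func main() {
	objTy := cty.Object(map[string]cty.Type{"id": cty.String})
	fmt.Println(flatmapFromObject(cty.NullVal(objTy)))                                              // map[]
	fmt.Println(flatmapFromObject(cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("abc")}))) // map[id:abc]
}
```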
For PreApply hook purposes we only actually use the Delete, Create, and
Update actions, because other actions are handled in different ways than
a direct call to ApplyResourceChange.
However, if a bug in core causes it to pass a different action, it's better
for us to mark it explicitly as unknown in the UI rather than silently
defaulting to "Modifying...", which would obscure the problem and make for a
confusing result.
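A sketch of that mapping, using a hypothetical helper name and the plans package import path as it was at the time of these notes:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plans"
)

// applyingVerb is a hypothetical helper showing the idea: only Delete,
// Create, and Update are expected to reach the PreApply hook, so anything
// else is reported explicitly as unknown rather than silently shown as
// "Modifying...".
func applyingVerb(action plans.Action) string {
	switch action {
	case plans.Delete:
		return "Destroying..."
	case plans.Create:
		return "Creating..."
	case plans.Update:
		return "Modifying..."
	default:
		return fmt.Sprintf("Applying (unknown action %v)...", action)
	}
}

func main() {
	fmt.Println(applyingVerb(plans.Update))
	fmt.Println(applyingVerb(plans.Read)) // unexpected in PreApply, so flagged as unknown
}
```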
The "id" field is assumed to always exist, and must have a valid value.
Set "id" to unknown when planning new resource changes to indicate that
it will be computed.
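For example, a minimal sketch using cty directly rather than the planning code itself:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// When planning the creation of a new resource instance, "id" cannot be
	// known yet because the provider assigns it during apply, so it is set
	// to an unknown value to indicate that it will be computed.
	plannedNew := cty.ObjectVal(map[string]cty.Value{
		"id":   cty.UnknownVal(cty.String),
		"name": cty.StringVal("example"),
	})

	fmt.Println(plannedNew.GetAttr("id").IsKnown()) // false: computed during apply
}
```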
Prior to our refactoring here, we were relying on a lucky coincidence for
correct behavior of the plan walk following a refresh in the same run:
- The refresh phase created placeholder objects in the state to represent
any resource instance pending creation, to allow the interpolator to
read attributes from them when evaluating "provider" and "data" blocks.
In effect, the refresh walk was creating a partial plan that covered only
creation actions, but it immediately discarded the actual diff entries and
stored only the planned new state.
- It happened that objects pending creation showed up in state with an
empty ID value, since that only gets assigned by the provider during
apply.
- The Refresh function concluded by calling terraform.State.Prune, which
deletes from the state any objects that have an empty ID value, which
therefore prevented these temporary objects from surviving into the
plan phase.
After refactoring, we no longer have this special ID field on instance
object state, and we instead rely on the Status field for tracking such
things. We also no longer have an explicit "prune" step on state, since
the state mutation methods themselves keep the structure pruned.
To address this, here we introduce a new instance object status "planned",
which is equivalent to having an empty ID value in the old world. We also
introduce a new method on states.SyncState that deletes from the state
any planned objects, which therefore replaces that portion of the old
State.prune operation just for this refresh use-case.
Finally, we are now expecting the expression evaluator to pull pending
objects from the planned changeset rather than from the state directly,
and so for correct results these placeholder resource creation changes
must also be reported in a throwaway changeset during the refresh walk.
The addition of states.ObjectPlanned also permits a previously-missing
safety check in the expression evaluator to prevent us from relying on the
incomplete value stored in state for a pending object, in the event that
some bug prevents the real pending object from being written into the
planned changeset.
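The sketch below uses simplified, invented types to illustrate the evaluator-side safety check described here; the real logic lives in the states package and the expression evaluator.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// ObjectStatus and instanceObject are simplified stand-ins for the tracking
// described above; the real types live in the states package.
type ObjectStatus int

const (
	ObjectReady ObjectStatus = iota
	ObjectPlanned
)

type instanceObject struct {
	Status ObjectStatus
	Value  cty.Value
}

// valueForEvaluation sketches the safety check: a planned object's value must
// come from the planned changeset, and falling back on the incomplete value
// stored in state would be a bug.
func valueForEvaluation(obj instanceObject, plannedChanges map[string]cty.Value, addr string) (cty.Value, error) {
	if obj.Status != ObjectPlanned {
		return obj.Value, nil // normal case: state already holds the real value
	}
	if v, ok := plannedChanges[addr]; ok {
		return v, nil // pending creation: use the planned new value instead
	}
	return cty.NilVal, fmt.Errorf("instance %s is planned but has no pending change; this is a bug", addr)
}

func main() {
	changes := map[string]cty.Value{
		"test_thing.a": cty.ObjectVal(map[string]cty.Value{"id": cty.UnknownVal(cty.String)}),
	}
	obj := instanceObject{Status: ObjectPlanned}
	v, err := valueForEvaluation(obj, changes, "test_thing.a")
	fmt.Printf("%#v %v\n", v, err)
}
```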
We no longer use strings to represent addresses, so this method was a
leftover outlier from previous refactoring efforts.
At this time the result is not actually being used due to the state type
refactoring, which is a bug we'll address in a subsequent commit.
We'll now show an "update" symbol prior to the argument to this synthetic
jsonencode(...) call, for consistency with how we show nested values in
other cases and to attach a verb to any "# forces replacement".
We'll also show a special form in the case where the value seems to differ
only in whitespace, so users can understand what's going on in that
hopefully-rare situation, particularly if those whitespace-only changes
end up forcing us to replace a remote object.
Since our own syntax for primitive values is similar to that of JSON, and
since we permit automatic conversions from number and bool to string, we
must do this special JSON value diff formatting only if the value is a
JSON array or object to avoid confusing results.
Because so far we've not supported dynamically-typed complex data
structures, several providers have used strings containing JSON to stand
in for these.
In order to get a readable diff in those cases, we'll recognize situations
where old and new are both JSON and present a diff of the effective value
of the JSON, using a faux call to the jsonencode(...) function to indicate
when we've done so.
This is a bit of a "cute" heuristic, but is important at least for now
until we can migrate away from that practice of passing large JSON strings
to providers and use dynamically-typed attributes instead.
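A hypothetical version of the heuristic, covering both the "both sides must be JSON arrays or objects" rule and the whitespace-only case described above; the helper names are invented for this sketch.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// looksLikeJSONDocument is a hypothetical version of the check described
// above: the string must parse as JSON and the top-level value must be an
// array or object, since bare strings, numbers, and bools would be
// ambiguous with our own primitive syntax.
func looksLikeJSONDocument(s string) bool {
	var v interface{}
	if err := json.Unmarshal([]byte(s), &v); err != nil {
		return false
	}
	switch v.(type) {
	case map[string]interface{}, []interface{}:
		return true
	default:
		return false
	}
}

// shouldRenderAsJSONDiff applies that check to both sides of the change.
func shouldRenderAsJSONDiff(oldRaw, newRaw string) bool {
	return looksLikeJSONDocument(oldRaw) && looksLikeJSONDocument(newRaw)
}

// differsOnlyInWhitespace reports whether two JSON documents become
// identical once insignificant whitespace is stripped, which is the
// hopefully-rare case that gets the special rendering mentioned above.
func differsOnlyInWhitespace(oldRaw, newRaw string) bool {
	var oldBuf, newBuf bytes.Buffer
	if json.Compact(&oldBuf, []byte(oldRaw)) != nil || json.Compact(&newBuf, []byte(newRaw)) != nil {
		return false
	}
	return oldRaw != newRaw && oldBuf.String() == newBuf.String()
}

func main() {
	fmt.Println(shouldRenderAsJSONDiff(`{"a":1}`, `{"a":2}`))     // true: show as jsonencode(...)
	fmt.Println(shouldRenderAsJSONDiff(`true`, `false`))          // false: could just be our own bool syntax
	fmt.Println(differsOnlyInWhitespace(`{"a":1}`, `{ "a": 1 }`)) // true: whitespace-only change
}
```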
This extra comment line gives us a place to show the full resource address
(since the block header line only includes type and name) and also allows
us to explain in long form the meaning of the change icon on the following
line.
This is a light adaptation of our earlier prototype of structural diff
rendering, as a starting point for what we'll actually ship. This is not
consistent with the latest mocks, so it will need some additional work before
it is ready, but integrating this allows us to at least see the plan
contents while fixing up remaining issues elsewhere.
This algorithm is the usual first step when generating diffs. This package
is a bit of a strange home for it, but since it works with changes to
cty.Value this feels more natural than any other place it could be.
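Assuming the algorithm in question is a longest-common-subsequence computation, here is a generic sketch of that first step over cty.Value sequences, not the package's actual code:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// longestCommonSubsequence is a straightforward dynamic-programming LCS over
// two sequences of cty.Value, the usual first step when turning an old and a
// new sequence into insert/delete/keep edits.
func longestCommonSubsequence(xs, ys []cty.Value) []cty.Value {
	// lens[i][j] is the LCS length of xs[:i] and ys[:j].
	lens := make([][]int, len(xs)+1)
	for i := range lens {
		lens[i] = make([]int, len(ys)+1)
	}
	for i := 1; i <= len(xs); i++ {
		for j := 1; j <= len(ys); j++ {
			if xs[i-1].RawEquals(ys[j-1]) {
				lens[i][j] = lens[i-1][j-1] + 1
			} else if lens[i-1][j] >= lens[i][j-1] {
				lens[i][j] = lens[i-1][j]
			} else {
				lens[i][j] = lens[i][j-1]
			}
		}
	}

	// Walk back through the table to recover one LCS.
	var rev []cty.Value
	for i, j := len(xs), len(ys); i > 0 && j > 0; {
		switch {
		case xs[i-1].RawEquals(ys[j-1]):
			rev = append(rev, xs[i-1])
			i--
			j--
		case lens[i-1][j] >= lens[i][j-1]:
			i--
		default:
			j--
		}
	}
	lcs := make([]cty.Value, 0, len(rev))
	for k := len(rev) - 1; k >= 0; k-- {
		lcs = append(lcs, rev[k])
	}
	return lcs
}

func main() {
	xs := []cty.Value{cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c")}
	ys := []cty.Value{cty.StringVal("a"), cty.StringVal("c"), cty.StringVal("d")}
	fmt.Printf("%#v\n", longestCommonSubsequence(xs, ys)) // ["a", "c"]
}
```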
We were previously tracking this as a []cty.Path, but having it turned
into a pathset on creation makes downstream use of it more convenient and
ensures that it'll obey expected invariants like not containing the same
path twice.
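Assuming the path set in question is cty.PathSet, a small example of the de-duplication invariant:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// The same attribute path reported twice, e.g. as requiring replacement.
	p := cty.GetAttrPath("network").Index(cty.NumberIntVal(0)).GetAttr("cidr")
	paths := []cty.Path{p, p}

	// Converting to a cty.PathSet on creation de-duplicates and gives
	// downstream code a convenient membership test.
	set := cty.NewPathSet(paths...)

	fmt.Println(set.Has(p))      // true
	fmt.Println(len(set.List())) // 1: the duplicate is collapsed
}
```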
When presenting an error that may be a PathError, the error's path is
usually relative to some other value. If the caller is able to express
that value (or, more often, a reference to it) in HCL syntax then this
method will produce a complete expression in the error message,
concatenating any path information from the error to the end of the given
prefix string.
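A hypothetical stand-in for that method, showing how a cty.PathError's path might be appended to a caller-supplied prefix expression:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// formatErrorPrefixed is a hypothetical sketch, not the real method: when
// err is a cty.PathError, render its path in HCL-like traversal syntax
// appended to the caller's prefix expression; otherwise just prefix the
// plain error message.
func formatErrorPrefixed(err error, prefix string) string {
	pErr, ok := err.(cty.PathError)
	if !ok {
		return fmt.Sprintf("%s: %s", prefix, err)
	}
	expr := prefix
	for _, step := range pErr.Path {
		switch s := step.(type) {
		case cty.GetAttrStep:
			expr += "." + s.Name
		case cty.IndexStep:
			if s.Key.Type().Equals(cty.String) {
				expr += fmt.Sprintf("[%q]", s.Key.AsString())
			} else {
				expr += fmt.Sprintf("[%s]", s.Key.AsBigFloat().String())
			}
		}
	}
	return fmt.Sprintf("%s: %s", expr, pErr.Error())
}

func main() {
	path := cty.GetAttrPath("network").Index(cty.NumberIntVal(0)).GetAttr("cidr")
	err := path.NewErrorf("a CIDR block is required")
	fmt.Println(formatErrorPrefixed(err, "var.example"))
	// var.example.network[0].cidr: a CIDR block is required
}
```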
We're now writing the "planned new value" to OutputValue, but the data
resource nodes during refresh need to see the verbatim config value in
order to decide whether read must be deferred to the apply phase, so we'll
optionally export that here too.