Prior to our refactoring here, we were relying on a lucky coincidence for
correct behavior of the plan walk following a refresh in the same run:
- The refresh phase created placeholder objects in the state to represent
any resource instance pending creation, to allow the interpolator to
read attributes from them when evaluating "provider" and "data" blocks.
In effect, the refresh walk was creating a partial plan covering only
creation actions, but it immediately discarded the actual diff entries
and stored only the planned new state.
- It happened that objects pending creation showed up in state with an
empty ID value, since that only gets assigned by the provider during
apply.
- The Refresh function concluded by calling terraform.State.Prune, which
deletes from the state any objects that have an empty ID value, which
therefore prevented these temporary objects from surviving into the
plan phase.
After refactoring, we no longer have this special ID field on instance
object state, and we instead rely on the Status field for tracking such
things. We also no longer have an explicit "prune" step on state, since
the state mutation methods themselves keep the structure pruned.
To address this, we introduce a new instance object status "planned",
which is equivalent to having an empty ID value in the old world. We also
introduce a new method on states.SyncState that deletes from the state
any planned objects, which therefore replaces that portion of the old
State.Prune operation just for this refresh use-case.
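For illustration, here is a hedged, self-contained sketch of that pruning
step using simplified stand-in types (the names InstanceObject, State,
and RemovePlannedObjects are invented here; they are not the real states
package API):

    package main

    import "fmt"

    type ObjectStatus rune

    const (
        ObjectReady   ObjectStatus = 'R'
        ObjectPlanned ObjectStatus = 'P' // placeholder recorded during refresh
    )

    type InstanceObject struct {
        Status ObjectStatus
        Attrs  map[string]string
    }

    type State struct {
        Objects map[string]*InstanceObject // keyed by instance address
    }

    // RemovePlannedObjects plays the role that the empty-ID check in the
    // old State.Prune played: placeholder objects recorded during the
    // refresh walk must not survive into the plan walk.
    func (s *State) RemovePlannedObjects() {
        for addr, obj := range s.Objects {
            if obj != nil && obj.Status == ObjectPlanned {
                delete(s.Objects, addr)
            }
        }
    }

    func main() {
        s := &State{Objects: map[string]*InstanceObject{
            "aws_instance.existing": {Status: ObjectReady},
            "aws_instance.pending":  {Status: ObjectPlanned},
        }}
        s.RemovePlannedObjects()
        fmt.Println(len(s.Objects)) // 1: only the ready object remains
    }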
Finally, we are now expecting the expression evaluator to pull pending
objects from the planned changeset rather than from the state directly,
and so for correct results these placeholder resource creation changes
must also be reported in a throwaway changeset during the refresh walk.
The addition of states.ObjectPlanned also permits a previously-missing
safety check in the expression evaluator to prevent us from relying on the
incomplete value stored in state for a pending object, in the event that
some bug prevents the real pending object from being written into the
planned changeset.
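Continuing the hypothetical types from the sketch above (PlannedChange is
another illustrative stand-in, not a real type), the evaluator-side
safety check might look roughly like this:

    // PlannedChange is an illustrative stand-in for a change record in
    // the planned changeset.
    type PlannedChange struct {
        After map[string]string // planned new attribute values
    }

    // valueForInstance refuses to fall back on the incomplete value in
    // state when the object is only a planned placeholder: the real
    // pending value must come from the planned changeset, and its
    // absence there indicates a bug elsewhere.
    func valueForInstance(obj *InstanceObject, change *PlannedChange) (map[string]string, error) {
        if obj != nil && obj.Status == ObjectPlanned {
            if change == nil {
                return nil, fmt.Errorf("object is planned but has no pending change recorded (this is a bug)")
            }
            return change.After, nil
        }
        if obj == nil {
            return nil, nil
        }
        return obj.Attrs, nil
    }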
We no longer use strings to represent addresses, so this method was a
leftover outlier from previous refactoring efforts.
At this time the result is not actually being used due to the state type
refactoring, which is a bug we'll address in a subsequent commit.
We'll now show an "update" symbol prior to the argument to this synthetic
jsonencode(...) call, for consistency with how we show nested values in
other cases and to attach a verb to any "# forces replacement" annotation.
We'll also show a special form in the case where the value seems to differ
only in whitespace, so users can understand what's going on in that
hopefully-rare situation, particularly if those whitespace-only changes
end up forcing us to replace a remote object.
Since our own syntax for primitive values is similar to JSON's, and since
we permit automatic conversions from number and bool to string, we apply
this special JSON value diff formatting only when the value is a JSON
array or object, to avoid confusing results.
Because so far we've not supported dynamically-typed complex data
structures, several providers have used strings containing JSON to stand
in for these.
In order to get a readable diff in those cases, we'll recognize situations
where old and new are both JSON and present a diff of the effective value
of the JSON, using a faux call to the jsonencode(...) function to indicate
when we've done so.
This is a bit of a "cute" heuristic, but is important at least for now
until we can migrate away from that practice of passing large JSON strings
to providers and use dynamically-typed attributes instead.
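A hedged sketch of how that detection might work, using only the standard
library (helper names here are invented for illustration):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "strings"
    )

    // looksLikeJSONCollection approximates the heuristic described above:
    // the special rendering applies only when a string value parses as
    // JSON *and* is an array or object, so plain numbers, bools, and
    // quoted strings are never mistaken for JSON documents.
    func looksLikeJSONCollection(s string) bool {
        trimmed := strings.TrimSpace(s)
        if trimmed == "" {
            return false
        }
        if trimmed[0] != '{' && trimmed[0] != '[' {
            return false
        }
        return json.Valid([]byte(trimmed))
    }

    // whitespaceOnlyChange reports whether two JSON documents differ only
    // in insignificant whitespace, by comparing their compacted forms.
    func whitespaceOnlyChange(oldJSON, newJSON string) bool {
        var oldBuf, newBuf bytes.Buffer
        if err := json.Compact(&oldBuf, []byte(oldJSON)); err != nil {
            return false
        }
        if err := json.Compact(&newBuf, []byte(newJSON)); err != nil {
            return false
        }
        return oldBuf.String() == newBuf.String() && oldJSON != newJSON
    }

    func main() {
        oldVal := `{"a": 1}`
        newVal := "{\n  \"a\": 1\n}"
        fmt.Println(looksLikeJSONCollection(oldVal))      // true
        fmt.Println(looksLikeJSONCollection("42"))        // false: valid JSON, not a collection
        fmt.Println(whitespaceOnlyChange(oldVal, newVal)) // true
    }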
This extra comment line gives us a place to show the full resource address
(since the block header line only includes type and name) and also allows
us to explain in long form the meaning of the change icon on the following
line.
This is a light adaptation of our earlier prototype of structural diff
rendering, as a starting point for what we'll actually ship. This is not
consistent with the latest mocks, so will need some additional work before
it is ready, but integrating this allows us to at least see the plan
contents while fixing up remaining issues elsewhere.
This algorithm is the usual first step when generating diffs. This package
is a bit of a strange home for it, but since it works with changes to
cty.Value this feels more natural than any other place it could be.
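The text doesn't name the algorithm, so as an assumption this sketch
treats it as a longest-common-subsequence computation over cty.Value
sequences, which is a common first step for sequence diffs; the function
name and shape here are illustrative only:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // longestCommonSubsequence returns one LCS of xs and ys, comparing
    // elements with RawEquals so unknown and null values compare
    // predictably.
    func longestCommonSubsequence(xs, ys []cty.Value) []cty.Value {
        // lens[i][j] is the LCS length of xs[:i] and ys[:j].
        lens := make([][]int, len(xs)+1)
        for i := range lens {
            lens[i] = make([]int, len(ys)+1)
        }
        for i := 1; i <= len(xs); i++ {
            for j := 1; j <= len(ys); j++ {
                switch {
                case xs[i-1].RawEquals(ys[j-1]):
                    lens[i][j] = lens[i-1][j-1] + 1
                case lens[i-1][j] >= lens[i][j-1]:
                    lens[i][j] = lens[i-1][j]
                default:
                    lens[i][j] = lens[i][j-1]
                }
            }
        }

        // Walk back through the table to recover one subsequence.
        var rev []cty.Value
        for i, j := len(xs), len(ys); i > 0 && j > 0; {
            switch {
            case xs[i-1].RawEquals(ys[j-1]):
                rev = append(rev, xs[i-1])
                i--
                j--
            case lens[i-1][j] >= lens[i][j-1]:
                i--
            default:
                j--
            }
        }
        lcs := make([]cty.Value, 0, len(rev))
        for k := len(rev) - 1; k >= 0; k-- {
            lcs = append(lcs, rev[k])
        }
        return lcs
    }

    func main() {
        xs := []cty.Value{cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c")}
        ys := []cty.Value{cty.StringVal("a"), cty.StringVal("c"), cty.StringVal("d")}
        for _, v := range longestCommonSubsequence(xs, ys) {
            fmt.Println(v.AsString()) // prints "a", then "c"
        }
    }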
We were previously tracking this as a []cty.Path, but having it turned
into a pathset on creation makes downstream use of it more convenient and
ensures that it'll obey expected invariants like not containing the same
path twice.
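A brief hedged illustration of why the set form is more convenient, using
cty's own PathSet API:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // Duplicate paths collapse automatically and membership tests
        // don't require scanning a slice.
        requiresReplace := cty.NewPathSet(
            cty.Path{cty.GetAttrStep{Name: "ami"}},
            cty.Path{cty.GetAttrStep{Name: "ami"}}, // duplicate is absorbed
            cty.Path{
                cty.GetAttrStep{Name: "tags"},
                cty.IndexStep{Key: cty.StringVal("Name")},
            },
        )

        fmt.Println(len(requiresReplace.List())) // 2, not 3

        probe := cty.Path{cty.GetAttrStep{Name: "ami"}}
        fmt.Println(requiresReplace.Has(probe)) // true
    }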
When presenting an error that may be a PathError, the error's path is
usually relative to some other value. If the caller is able to express
that value (or, more often, a reference to it) in HCL syntax then this
method will produce a complete expression in the error message,
concatenating any path information from the error to the end of the given
prefix string.
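The method's actual name isn't given here, so this is a hypothetical
helper showing the general idea: if the error is a cty.PathError, render
its path in HCL-like syntax after the caller-provided prefix expression:

    package main

    import (
        "errors"
        "fmt"
        "strings"

        "github.com/zclconf/go-cty/cty"
    )

    func formatWithPathPrefix(prefix string, err error) string {
        var pathErr cty.PathError
        if !errors.As(err, &pathErr) {
            return fmt.Sprintf("%s: %s", prefix, err)
        }

        var buf strings.Builder
        buf.WriteString(prefix)
        for _, step := range pathErr.Path {
            switch s := step.(type) {
            case cty.GetAttrStep:
                fmt.Fprintf(&buf, ".%s", s.Name)
            case cty.IndexStep:
                switch {
                case s.Key.Type().Equals(cty.String):
                    fmt.Fprintf(&buf, "[%q]", s.Key.AsString())
                case s.Key.Type().Equals(cty.Number):
                    fmt.Fprintf(&buf, "[%s]", s.Key.AsBigFloat().Text('f', -1))
                default:
                    buf.WriteString("[...]")
                }
            }
        }
        return fmt.Sprintf("%s: %s", buf.String(), err)
    }

    func main() {
        p := cty.Path{
            cty.GetAttrStep{Name: "tags"},
            cty.IndexStep{Key: cty.StringVal("Name")},
        }
        err := p.NewErrorf("a string is required")
        fmt.Println(formatWithPathPrefix("var.instance_settings", err))
        // e.g.: var.instance_settings.tags["Name"]: a string is required
    }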
We're now writing the "planned new value" to OutputValue, but the data
resource nodes during refresh need to see the verbatim config value in
order to decide whether the read must be deferred to the apply phase, so
we'll optionally export that here too.
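As a rough, hypothetical illustration of that wiring (OutputValue is
mentioned above, but OutputConfigValue and the struct name are invented
here):

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // evalPlanSketch stands in for an eval step that produces a planned
    // value: callers that also need the verbatim config value set the
    // second pointer, and everyone else leaves it nil.
    type evalPlanSketch struct {
        OutputValue       *cty.Value // planned new value
        OutputConfigValue *cty.Value // verbatim config value, optional
    }

    func (n *evalPlanSketch) record(planned, config cty.Value) {
        if n.OutputValue != nil {
            *n.OutputValue = planned
        }
        if n.OutputConfigValue != nil {
            *n.OutputConfigValue = config
        }
    }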
Our state representation is not able to preserve unknown values, so it's
not suitable for retaining the transient incomplete values we produce
during planning.
Instead, we'll discard the unknown values when writing to state and have
the expression evaluator prefer an object from the plan where possible.
We still use the shape of the transient state to inform things like the
resource's "each mode", so the plan only masks the object values
themselves.
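A hedged illustration of the "discard unknowns before writing to state"
idea; whether the real code uses cty.UnknownAsNull specifically is an
assumption, but it captures the behavior described:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // The transient plan-time value keeps its unknowns, while the
        // copy destined for state has them replaced with nulls of the
        // same type.
        planned := cty.ObjectVal(map[string]cty.Value{
            "name": cty.StringVal("example"),
            "id":   cty.UnknownVal(cty.String), // decided by the provider at apply time
        })

        forState := cty.UnknownAsNull(planned)
        fmt.Println(planned.GetAttr("id").IsKnown()) // false
        fmt.Println(forState.GetAttr("id").IsNull()) // true
    }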
This is no longer a call into the provider, since the data diff logic is
the same for all data sources anyway. Instead, we just compute the
planned new value and construct a planned change from that as-is.
Previously the provider could, in principle, customize the read diff. In
practice there is no real reason to do that and the existing SDK didn't
pass that possibility through to provider code, so we can safely change
this without impacting provider compatibility.
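A hedged sketch of the resulting shape of a data-source change, using the
plans package types; building a full resource instance change also needs
address and provider fields, omitted here for brevity:

    package main

    import (
        "fmt"

        "github.com/hashicorp/terraform/plans"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // With no provider-specific diff step, the change is assembled
        // directly from the evaluated configuration.
        configVal := cty.ObjectVal(map[string]cty.Value{
            "filter": cty.StringVal("name = example"),
            "id":     cty.UnknownVal(cty.String), // filled in when the read happens
        })

        change := plans.Change{
            Action: plans.Read,
            Before: cty.NullVal(configVal.Type()),
            After:  configVal,
        }
        fmt.Println(change.Action, change.After.IsWhollyKnown())
    }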
Previously we just left these out of the plan altogether, but in the new
plan types we intentionally include change information for every resource
instance, even if no changes are actually planned, to allow alternative
plan file viewers to show what isn't changing as well as what is.
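A hedged simplification of that rule: an unchanged instance still gets a
change record, with NoOp as its action, rather than being left out of the
plan (real plan logic also distinguishes create, delete, and replace):

    package main

    import (
        "fmt"

        "github.com/hashicorp/terraform/plans"
        "github.com/zclconf/go-cty/cty"
    )

    // planAction is deliberately reduced to the no-op-vs-update decision.
    func planAction(before, after cty.Value) plans.Action {
        if before.RawEquals(after) {
            return plans.NoOp // still recorded, so viewers can show "no changes" too
        }
        return plans.Update
    }

    func main() {
        v := cty.StringVal("unchanged")
        fmt.Println(planAction(v, v) == plans.NoOp) // true
    }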
This also includes passing in the provider schema to a few more EvalNodes
that were expecting it but not getting it, in order to be able to
successfully test the implementation of EvalReadDiff here.
This codepath is going to be significantly changed before release to make
it support structural diff of the new data types, but this lets us lean on
the old renderer to produce partial output in the meantime while we
continue to get things working end-to-end after the considerable
refactoring that's been going on.
Previously we would construct both the provisioner and the provider
objects if either callback was set, but this is incorrect: a plugin
should set only one of these, depending on what kind of plugin it is.
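A hypothetical sketch of the corrected behavior, with invented stand-in
types rather than the real plugin package API:

    package main

    import "fmt"

    // Hypothetical stand-ins for the real provider/provisioner interfaces.
    type Provider interface{ Configure() error }
    type Provisioner interface{ Provision() error }

    func pluginsToServe(providerFunc func() Provider, provisionerFunc func() Provisioner) map[string]interface{} {
        served := map[string]interface{}{}
        // Previously both entries were built whenever either callback was
        // set; a plugin is one kind or the other, so construct only the
        // matching one.
        switch {
        case providerFunc != nil:
            served["provider"] = providerFunc()
        case provisionerFunc != nil:
            served["provisioner"] = provisionerFunc()
        }
        return served
    }

    func main() {
        fmt.Println(len(pluginsToServe(nil, nil))) // 0: nothing to serve
    }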
This includes the new PathSet type, which we'll use to represent the
"requires replacement" set of attribute paths coming back from providers
during planning.
This includes a bugfix to the cty/msgpack package to ensure correct
decoding of unknown and null values.
This also includes updates to cty's dependencies.
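As a hedged illustration of the behavior that fix is about, a msgpack
round trip against a known type should preserve unknown and null values:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        ctymsgpack "github.com/zclconf/go-cty/cty/msgpack"
    )

    func main() {
        ty := cty.Object(map[string]cty.Type{
            "id":   cty.String,
            "name": cty.String,
        })
        val := cty.ObjectVal(map[string]cty.Value{
            "id":   cty.UnknownVal(cty.String),
            "name": cty.NullVal(cty.String),
        })

        b, err := ctymsgpack.Marshal(val, ty)
        if err != nil {
            panic(err)
        }
        got, err := ctymsgpack.Unmarshal(b, ty)
        if err != nil {
            panic(err)
        }
        fmt.Println(got.GetAttr("id").IsKnown())  // false: still unknown
        fmt.Println(got.GetAttr("name").IsNull()) // true: still null
    }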
govendor's lack of understanding of transitive versions was making it
very difficult to untangle a dependency conflict between etcd, grpc, and
protobuf. After fighting with it for a few hours, I decided to give Go
1.11rc2 a try, since previous experiments on the 1.11 tree in master had
been promising.
The dependencies all worked out first time when managed using the Go
Modules code, so we'll run with this now to continue to make progress,
though we may wish to back out of it nearer to release and return to
govendor for a while until other projects have caught up.
However, since this commit includes a vendor directory produced using Go
Modules it doesn't actually _require_ Go 1.11 to build, and instead
requires it only to make further changes to the selected versions in the
vendor dir. Go 1.10's vendoring support will still find the modules in
their expected locations within the vendor dir.