So far the output command has had a default output format intended for
human consumption and a JSON output format intended for machine
consumption.
However, until Terraform v0.14 the default output format for primitive
types happened to be _almost_ a raw string representation of the value,
and so users started using that as a more convenient way to access
primitive-typed output values from shell scripts, avoiding the need to
also use a tool like "jq" to decode the JSON.
Recognizing that primitive-typed output values are common and that
processing them with shell scripts is common, this commit introduces a new
-raw mode which is explicitly intended for that use case, guaranteeing
that the result will always be the direct result of a string conversion
of the output value, or an error if no such conversion is possible.
Our policy elsewhere in Terraform is that we always use JSON for
machine-readable output. We adopted that policy because our other
machine-readable output has typically been complex data structures rather
than single primitive values. A special mode seems justified for output
values because it is common for root module output values to be just
strings, and so it's pragmatic to offer access to the raw value directly
rather than requiring a round-trip through JSON.
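For illustration, a minimal sketch of the intended workflow (the output
name and the aws_instance reference are hypothetical):

```hcl
# A string-typed root module output value:
output "instance_ip" {
  value = aws_instance.example.public_ip
}

# A shell script can now read the value directly:
#   terraform output -raw instance_ip
# instead of decoding the JSON form:
#   terraform output -json instance_ip | jq -r .
```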
Terraform Cloud/Enterprise support a pseudo-version of "latest" for the
configured workspace Terraform version. If this is chosen, we abandon
the attempt to verify the versions are compatible, as the meaning of
"latest" cannot be predicted.
This affects both the StateMgr check (used for commands which execute
remotely) and the full version check (for local commands).
The change recently introduced to ensure that remote backend users do
not accidentally upgrade their state file needs to be disabled for all
read-only uses, including the builtin terraform_remote_state data
source.
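For context, a typical read-only use of state via the built-in data
source looks roughly like this (the organization and workspace names are
made up):

```hcl
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "example-org"
    workspaces = {
      name = "network-prod"
    }
  }
}
```

Reading another workspace's outputs this way must never rewrite or
upgrade that workspace's state file.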
I took some lessons learned during yesterday's marathon refactoring and
re-refactored the dataSource plan and apply to be functions on
NodeAbstractResourceInstance. Includes mild renaming to differentiate
between plan and planDataSource.
* terraform: refactor EvalRefresh
EvalRefresh.Eval(ctx) is now Refresh(evalRefreshRequest, ctx). While none
of the inner logic of the function has changed, it now returns a
states.ResourceInstanceObject instead of updating a pointer. This is a
human-centric change, meant to make the logic flow (in the calling
functions) easier to follow.
* terraform: refactor EvalReadDataPlan and Apply
This is a very minor refactor that removes the (currently) redundant
types EvalReadDataPlan and EvalReadDataApply in favor of using
EvalReadData with Plan and Apply functions.
This is in effect an aesthetic change; since there is no longer an
Eval() abstraction we can rename functions to make their functionality
as obvious as possible.
* terraform: refactor EvalCheckPlannedChange
EvalCheckPlannedChange was only used by NodeApplyableResourceInstance
and has been refactored into a method on that type called
checkPlannedChange.
* terraform: refactor EvalDiff.Eval
EvalDiff.Eval is now a method on NodeAbstractResource called Plan
which takes as a parameter an EvalPlanRequest. Instead of updating
pointers it returns a new plan and state.
I removed as many redundant fields from the original EvalDiff struct as
possible.
* terraform: refactor EvalReduceDiff
EvalReduceDiff is now reducePlan, a regular function (not a method)
that returns a value.
* terraform: refactor EvalDiffDestroy
EvalDiffDestroy.Eval is now NodeAbstractResourceInstance.PlanDestroy
which takes ctx, state and optional DeposedKey and returns a change.
I've removed the state return value since it was only ever returning a
nil state.
* terraform: refactor EvalWriteDiff
EvalWriteDiff.Eval is now NodeAbstractResourceInstance.WriteChange.
* rename files to something more logical
* terraform: refresh refactor, continued!
I had originally made Refresh a stand-alone function since it was
(obnoxiously) called from a graphNodeImportStateSub, but after some
(greatly appreciated) prompting in the PR I instead made it a method on
the NodeAbstractResourceInstance, in keeping with the other refactored
eval nodes, and then built a NodeAbstractResourceInstance inside import.
Since I did that I could also remove my duplicated 'writeState' code
inside graphNodeImportStateSub and use n.writeResourceInstanceState, so
double thanks!
* unexport eval methods
* re-refactor Plan, it made more sense on NodeAbstractResourceInstance. Sorry
* Remove uninformative `Eval`s from EvalReadData, consolidate to a single
file, and rename file to match function names.
* manual rebase
This is a repeated cause of confusion and questions in the community
forum, because valid JSON and YAML are hard to generate using
just string concatenation. Terraform has built-in functions for both of
these common serializations to avoid those problems, and so this will
hopefully make these better alternatives more discoverable.
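A small sketch of the pattern these functions enable (the local names and
variables here are invented for illustration):

```hcl
locals {
  # Fragile: hand-assembled JSON, where a stray quote or comma
  # produces invalid output.
  # config_json = "{\"name\": \"${var.name}\", \"ports\": [${join(",", var.ports)}]}"

  # Better: let the built-in functions emit guaranteed-valid syntax.
  config_json = jsonencode({
    name  = var.name
    ports = var.ports
  })

  config_yaml = yamlencode({
    name  = var.name
    ports = var.ports
  })
}
```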
When we did the earlier documentation rework for Terraform v0.12 we still
had one big "Expressions" page talking about the various operators and
constructs, and so we had to be a bit economical with the details about
some more complicated constructs in order to avoid the page becoming even
more overwhelming.
However, we've recently reorganized the language documentation again so
that the expressions section is split across several separate pages, and
that gives some freedom to go into more detail about certain features and
show longer examples for them.
My changes here are not intended to be an exhaustive rewrite but I did
try to focus on some areas I've commonly seen questions about when helping
in the community forum and elsewhere, and also to create a little more
connectivity between the different content so readers can hopefully find
what they are looking for more easily when they're not yet sure what
terminology to look for.
As of Terraform 0.13, the get-plugins option has been superseded by
the new provider installation mechanisms and their general philosophy
(providers are always installed, but the sources may be customized).
Update the init command to give users a warning if they are setting
this flag, to encourage them to remove it from their workflow, and
update the relevant docs and docstrings as well.
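For reference, the mechanism that replaces it is the explicit provider
source address; this sketch uses the hashicorp/aws provider purely as an
example:

```hcl
terraform {
  required_providers {
    aws = {
      # Providers are always installed during init; what can be
      # customized is where they come from, which replaces the old
      # get-plugins behavior.
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
```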
* terraform: refactor EvalWriteStateDeposed
EvalWriteStateDeposed is now
NodeDestroyDeposedResourceInstanceObject.writeResourceInstanceState.
Since that's the only caller I considered putting the logic directly
inline, but things are clunky enough right now that I think this is good
enough for this refactor.
* fix inaccurate log
* terraform: refactor EvalWriteState
EvalWriteState is refactored into a method on
NodeAbstractResourceInstance and renamed writeResourceInstanceState.
Import, my nemesis, gave me pause. I did not expect to find
EvalWriteState in a transform node, and so I decided to copy the
function inline rather than rethink my entire refactor for one function
that's likely to be (heavily) refactored in the future.
* terraform: refactor EvalPreApply and EvalPostApply
EvalPreApply and EvalPostApply have been refactored as methods on
NodeAbstractResourceInstance.
* terraform: remove EvalReadState and EvalReadStateDeposed
These two functions had already been re-implemented as functions on
NodeAbstractResource, so this commit finished the process of removing
the Evals and refactoring the tests.
* terraform: remove EvalRefreshLifecycle
EvalRefreshLifecycle was only used in one node,
NodePlannableResourceInstance, so the functionality has been moved
directly inline.
* terraform: remove EvalDeposeState
EvalDeposeState was only used in one function, so it has been removed
and the logic placed in-line in
NodeApplyableResourceInstance.managedResourceExecute.
* terraform: remove EvalMaybeRestoreDeposedObject
EvalMaybeRestoreDeposedObject was only used in one place, so I've
removed it in favor of in-line code.
The ignore_changes option `all` can cause computed attributes to show up
in the validation configuration, which will be rejected by the provider
SDK. Validate the config before applying the ignore_changes options.
In the future we should probably have a way for processIgnoreChanges
to skip computed values based on the schema. Since we also want a way to
more easily query the schema for "computed-ness" to validate the
ignore_changes arguments for computed values, we can fix these at the
same time with a future change to configschema. This will most likely
require some sort of method to retrieve info from the configschema.Block
via cty.Path, which we cannot easily do right now.
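A minimal sketch of the configuration that exercises this path (the
resource type and argument are hypothetical):

```hcl
resource "example_instance" "main" {
  name = var.name

  lifecycle {
    # `all` ignores changes to every argument, which can also pull
    # provider-computed attributes into the configuration, so
    # validation has to run on the original config first.
    ignore_changes = all
  }
}
```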
Because we allow legacy providers to depart from the contract and return
changes to non-computed values, the plan response may have altered
values that were already suppressed with ignore_changes. A prime example
of this is where providers attempt to obfuscate config data by turning
the config value into a hash and storing the hash value in the state.
There are enough cases of this in existing providers that we must
accommodate the behavior for now, so for ignore_changes to work at all
on these values, we will revert the ignored values once more on the
planned state.
The cty.Transform for ignore_changes could return early when building a
map that had multiple ignored keys.
Refactor the function to try to separate the fast path a little better,
and hopefully make it easier to follow.
When applying, we return early if only sensitivity changed between the
before and after values of the changeset. This avoids unnecessarily
invoking the provider.
Previously, we did not write the new value for a resource to the state
when this happened. The result was a permanent diff for resource updates
which only change sensitivity, as the apply step is skipped and the
state is unchanged.
This commit adds a state write to this shortcut return path, and fixes a
test for this exact case which was accidentally relying on a value diff
caused by an incorrect manual state value.
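As a sketch of the scenario (the example_database resource type is
hypothetical): marking an existing input variable as sensitive changes
only the sensitivity of values derived from it, not the values themselves.

```hcl
variable "db_password" {
  type      = string
  sensitive = true # newly added; the value itself is unchanged
}

resource "example_database" "main" {
  # The planned value equals the prior value and only the sensitivity
  # mark differs, so apply skips the provider call but must still
  # write the updated state to avoid a permanent diff.
  password = var.db_password
}
```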
I originally drafted these docs in a context where I was relying on
GitHub's Markdown renderer, and carelessly imported them into the
Terraform website without verifying that the website's Markdown renderer
could process them. This particular quirk has bitten us before: the
website's Markdown parser expects follow-on paragraphs in a list item to
be indented at least four spaces, and with anything less it ignores the
leading whitespace altogether and treats the text as a normal paragraph.
This change will cause the follow-on paragraphs to now correctly render
as part of the bullet points they are intended to be attached to.