So far the output command has had a default output format intended for
human consumption and a JSON output format intended for machine
consumption.
However, until Terraform v0.14 the default output format for primitive
types happened to be _almost_ a raw string representation of the value,
and so users started using that as a more convenient way to access
primitive-typed output values from shell scripts, avoiding the need to
also use a tool like "jq" to decode the JSON.
Recognizing that primitive-typed output values are common and that
processing them with shell scripts is common, this commit introduces a new
-raw mode which is explicitly intended for that use-case, guaranteeing
that the result will always be the direct result of a string conversion
of the output value, or an error if no such conversion is possible.
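For example, the intended shell usage looks something like this (the output
name `instance_ip` is only an illustration; any primitive-typed output value
behaves the same way):

    # Before: round-trip through JSON and decode with jq
    IP="$(terraform output -json instance_ip | jq -r .)"

    # With -raw: the string value directly, no jq required
    IP="$(terraform output -raw instance_ip)"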
Our policy elsewhere in Terraform is that we always use JSON for
machine-readable output. We adopted that policy because our other
machine-readable output has typically been complex data structures rather
than single primitive values. A special mode seems justified for output
values because it is common for root module output values to be just
strings, and so it's pragmatic to offer access to the raw value directly
rather than requiring a round-trip through JSON.
As of Terraform 0.13, the get-plugins flag has been
superseded by new provider installation mechanisms and a
general philosophy (providers are always installed, but
the sources may be customized). Update the init command
to give users a warning if they are setting this flag,
to encourage them to remove it from their workflow, and
update the relevant docs and docstrings as well.
* command/format: concise diff is no longer an experiment
Since state formatting goes through the "diff" printer, I have
repurposed the concise flag as a verbose flag, used only when printing
state. It's silly but it works!
* remove helper/experiment
With this experiment concluded, we no longer need helper/experiment. The
shadow experiment had not been touched in many years, so I removed all
references, and removed the package entirely. Any new experiments are
expected to be configuration experiments handled by our (other)
experiments package.
* check for the verbose flag consistently, in case we end up using it in plans in the future
Along with all of the other information we previously reported in the
"terraform version" output, we'll now include the name of the current
platform as our provider mechanisms represent it.
This is addressing a long-standing minor annoyance where we often can't
tell from an incomplete bug report which platform Terraform was running
on; incomplete bug reports do at least tend to include the
"terraform version" output, even if they don't also include the requested
full trace log.
However, what motivated doing it _now_ is that anyone building a provider
registry or mirror needs to have some awareness of these platform
identifiers which have been, until v0.13, mostly an implementation detail.
This additional information is a small thing we can do to help registry
builders find out what the platform identifier ought to be for each of
the platforms they aim to support, even if some of them are platforms
which the Go compiler allows but which HashiCorp doesn't officially
support.
The new information is on a line of its own in the output as a pragmatic
way to avoid breaking anyone who might be using something like
$(terraform version | head -n1) to print a brief Terraform version
identifier into some logs. That's not an interface we officially support
for machine consumption, but it's easy to avoid breaking it here and so we
won't do so.
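As a rough illustration, the output now looks something like this (the exact
version and platform strings will of course vary):

    $ terraform version
    Terraform v0.14.0
    on linux_amd64

    $ terraform version | head -n1
    Terraform v0.14.0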
When using the enhanced remote backend, a subset of all Terraform
operations is supported. Of these, only plan and apply can be executed
on the remote infrastructure (e.g. Terraform Cloud). Other operations
run locally and use the remote backend for state storage.
This causes problems when the local version of Terraform does not match
the configured version from the remote workspace. If the two versions
are incompatible, an `import` or `state mv` operation can cause the
remote workspace to be unusable until a manual fix is applied.
To prevent this from happening accidentally, this commit introduces a
check that the local Terraform version and the configured remote
workspace Terraform version are compatible. This check is skipped for
commands which do not write state, and can also be disabled by the use
of a new command-line flag, `-ignore-remote-version`.
Terraform version compatibility is defined as:
- For all releases before 0.14.0, local must exactly equal remote, as
two different versions cannot share state;
- 0.14.0 to 1.0.x are compatible, as we will not change the state
version number until at least Terraform 1.1.0;
- Versions after 1.1.0 must have the same major and minor versions, as
we will not change the state version number in a patch release.
If the two versions are incompatible, a diagnostic is displayed,
advising that the error can be suppressed with `-ignore-remote-version`.
When this flag is used, the diagnostic is still displayed, but as a
warning instead of an error.
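For example, a state-modifying command can be pushed through a version
mismatch like this (the addresses are illustrative; the mismatch diagnostic
is still shown, but only as a warning):

    terraform state mv -ignore-remote-version 'module.app' 'module.web_app'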
Commands which will not write state can assert this fact by calling the
helper `meta.ignoreRemoteBackendVersionConflict`, which will disable the
checks. Those which can write state should instead call the helper
`meta.remoteBackendVersionCheck`, which will return diagnostics for
display.
In addition to these explicit paths for managing the version check, we
have an implicit check in the remote backend's state manager
initialization method. Both of the above helpers will disable this
check. This fallback is in place to ensure that future code paths which
access state cannot accidentally skip the remote version check.
Remove the chef, habitat, puppet, and salt-masterless provisioners,
following their deprecation. Update the documentation for these
provisioners to clarify that they have been removed from later versions
of Terraform. Also add the fmt Make target back and update the fmtcheck
script for correctness.
When building a context, we read the dependency locks and ensure that
the provider requirements from the configuration can be satisfied.
If the configured requirements change such that the locks need to be
updated, we explain this and recommend running "terraform init".
This check is ignored for any providers which are locally marked as in
development. This includes unmanaged providers and those listed in the
provider installation `dev_overrides` block.
Core is only using the PrepareProviderConfig call for the validation
part of the method, but we should be re-validating the final config
immediately before Configure.
This change elects to not start using the PreparedConfig here, since
there is no useful reason for it at this point, and it would
introduce a functional difference between Terraform releases that can be
avoided.
Previously when printing the relevant variables involved in a failed
expression evaluation we would just skip over unknown values entirely.
There are some errors, though, which are _caused by_ a value being
unknown, in which case it's helpful to show which of the inputs to that
expression were known vs. unknown so that the user can limit their further
investigation only to the unknown ones.
While here I also added a special case for sensitive values that overrides
all other display, because we don't know what it is about a value that is
sensitive, and so it's better to give nothing away at the expense of a
slightly less helpful error message.
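The relevant-values section of such a diagnostic now distinguishes these
cases along the following lines (phrasing here is illustrative, not exact):

    var.region is a string, known only after apply
    var.password has a sensitive value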
From this version of Terraform forward, we no longer refuse to read
compatible state files written by future versions of Terraform. This is
a commitment that any changes to the semantics or format of the state
file after this commit will require a new state file version 5.
The result of this is that users of this Terraform version will be able
to share remote state with users of future versions, and all users will
be able to read and write state. This will be true until the next major
state file version is required.
This does not affect users of previous versions of Terraform, which will
continue to refuse to read state written by later versions.
The short description of our commands (as shown in the main help output
from "terraform") was previously very inconsistent, using different
tense/mood for different commands. Some of the commands were also using
some terminology choices inconsistent with how we currently talk about
the related ideas in our documentation.
Here I've tried to add some consistency by first rewriting them all in
the imperative mood (except the ones that are just subcommand
groupings), and tweaking some of the terminology to hopefully gel better
with how we present similar ideas in our recently-updated docs.
While working on this I inevitably spotted some similar inconsistencies
in the longer-form help output of some of the commands. I've not reviewed
all of these for consistency, but I did update some where the wording
was either left inconsistent with the short form changes I'd made or
where the prose stood out to me as particularly inconsistent with our
current usual documentation language style.
All of this is subjective, so I expect we'll continue to tweak these over
time as we continue to develop our documentation writing style based on
user questions and feedback.
Now that hclog can independently set levels on related loggers, we can
separate the log levels for different subsystems in Terraform.
This adds two new environment variables, `TF_LOG_CORE` and
`TF_LOG_PROVIDER`, which each take the same set of log level arguments
and apply only to logs from that subsystem. This means that setting
`TF_LOG_CORE=level` will not show logs from providers, and
`TF_LOG_PROVIDER=level` will not show logs from core. The behavior of
`TF_LOG` alone does not change.
While it is not necessarily needed since the default is to disable logs,
there is also a new level argument of `off`, which reflects the
associated level in hclog.
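For example (using the usual level names, such as trace and debug):

    # Core logs only, at trace level; provider logs are not shown
    TF_LOG_CORE=trace terraform plan

    # Provider logs only, at debug level; core logs are not shown
    TF_LOG_PROVIDER=debug terraform apply

    # Unchanged behavior: logs from everything
    TF_LOG=trace terraform plan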
Use a separate log sink to always capture trace logs for the panicwrap
handler to write out in a crash log.
This requires creating a log file in the outer process and passing that
path to the child process to log to.
If you run the e2etests locally and use a configured plugin_cache_dir,
the test will leave a bad directory behind in your cache dir that causes
later `init`s to fail. To circumvent this, pass an explicitly empty CLI
config file.
This is a nicety for local developers and not necessarily required, but
it happens to me often enough that I'd like to fix it. It's probably not
a *bad* idea to pass an explicit cli config to all e2etests, honestly,
but this is the only one that causes active problems so I limited this
PR to that one test.
Here's the error which occurs on subsequent `init` if this test is run on a
machine that uses a plugin cache dir:
2020/10/13 10:41:05 [TRACE] providercache.fillMetaCache: error while scanning directory /Users/mildwonkey/.terraform.d/plugin-cache: failed to read metadata about /Users/mildwonkey/.terraform.d/plugin-cache/example.com/awesomecorp/happycloud/1.2.0/darwin_amd64: stat /Users/mildwonkey/.terraform.d/plugin-cache/example.com/awesomecorp/happycloud/1.2.0/darwin_amd64: no such file or directory
If a configuration requires a partial provider version (with some parts
unspecified), Terraform treats the unspecified parts as zero.
For example, a version constraint of 1.2 will result in an attempt to
install version 1.2.0, even if 1.2.1 is available.
When writing the dependency locks file, we previously would write 1.2.*,
as this is the in-memory representation of 1.2. This would then cause an
error on re-reading the locks file, as this is not a valid constraint
format.
Instead, we now explicitly convert the constraint to its zero-filled
representation before writing the locks file. This ensures that it
correctly round-trips.
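For instance, with a configuration constraint of 1.2, the lock entry is now
written in its zero-filled form (the provider address below is illustrative):

    # dependency lock file entry (excerpt)
    provider "registry.terraform.io/hashicorp/null" {
      version     = "1.2.0"
      constraints = "1.2.0"   # previously serialized as "1.2.*", which fails to re-parse
    }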
Because this change is made in getproviders.VersionConstraintsString, it
also affects the output of the providers sub-command.
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.