* Add CLI option position for force-unlock command
* Update force-unlock.html.markdown
Made a change to also include the missing [DIR]
Co-authored-by: Petros Kolyvas <petros@hashicorp.com>
If a change exists for a resource instance,
the After value is returned; however, this value
will not have its marks, as it has been encoded.
This marks the return value so that the marks follow
that resource reference.
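Roughly, using go-cty (the attribute and mark names here are illustrative, not the actual implementation):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// An After value as decoded from a plan: its original marks are gone.
	after := cty.ObjectVal(map[string]cty.Value{
		"password": cty.StringVal("hunter2"),
	})

	// Marks recorded separately before the value was encoded.
	savedMarks := []cty.PathValueMarks{
		{Path: cty.GetAttrPath("password"), Marks: cty.NewValueMarks("sensitive")},
	}

	// Re-apply the marks so anything referencing this resource instance
	// sees the marked value.
	after = after.MarkWithPaths(savedMarks)
	fmt.Println(after.GetAttr("password").IsMarked()) // true
}
```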
These were initially introduced as functions with "encode" and "decode"
prefixes, but that doesn't match our existing convention of putting
the encoding format first so that the encode and decode functions will
group together in an alphabetically-ordered function list.
"text" is not really a defined serialization format, but it's a short word
that hopefully represents well enough what these functions are aiming to
encode and decode, while being consistent with existing functions like
jsonencode/jsondecode, yamlencode/yamldecode, etc.
The "base64" at the end here is less convincing because there is precedent
for that modifier to appear both at the beginning and the end in our
existing function names. I chose to put it at the end here because that
seems to be our emergent convention for situations where the base64
encoding is a sort of secondary modifier alongside the primary purpose
of the function, as we see with "filebase64". (base64gzip is an exception
here, but it seems outvoted by the others.)
Previously this codepath was generating a confusing message in the absence
of any symlinks, because filepath.EvalSymlinks returns a successful result
if the target isn't a symlink.
Now we'll emit the log line only if filepath.EvalSymlinks returns a
result that's different in a way that isn't purely syntactic (which
filepath.Clean would "fix").
The new message is a little more generic because technically we've not
actually ensured that a difference here was caused by a symlink and so
we shouldn't over-promise and generate something potentially misleading.
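A sketch of the check described above (not the exact code; the log message is illustrative):

```go
package main

import (
	"log"
	"path/filepath"
)

// logIfResolvedDiffers emits the trace line only when symlink resolution
// changes the path in a way that filepath.Clean alone would not explain.
func logIfResolvedDiffers(dir string) {
	resolved, err := filepath.EvalSymlinks(dir)
	if err != nil {
		return // not relevant to this illustration
	}
	if resolved != filepath.Clean(dir) {
		log.Printf("[TRACE] effective path for %q is %q", dir, resolved)
	}
}

func main() {
	logIfResolvedDiffers("./terraform.d/./plugins")
}
```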
If you run the e2etests locally and use a configured plugin_cache_dir,
the test will leave a bad directory behind in your cache dir that causes
later `init`s to fail. To circumvent this, pass an explicitly-empty CLI
config file.
This is a nicety for local developers and not necessarily required, but
it happens to me often enough that I'd like to fix it. It's probably not
a *bad* idea to pass an explicit cli config to all e2etests, honestly,
but this is the only one that causes active problems so I limited this
PR to that one test.
Here's the error which occurs on subsequent `init` if this test is run on a
machine that uses a plugin cache dir:
2020/10/13 10:41:05 [TRACE] providercache.fillMetaCache: error while scanning directory /Users/mildwonkey/.terraform.d/plugin-cache: failed to read metadata about /Users/mildwonkey/.terraform.d/plugin-cache/example.com/awesomecorp/happycloud/1.2.0/darwin_amd64: stat /Users/mildwonkey/.terraform.d/plugin-cache/example.com/awesomecorp/happycloud/1.2.0/darwin_amd64: no such file or directory
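A standalone sketch of the workaround (the real change lives in the e2etest harness; this version uses the documented TF_CLI_CONFIG_FILE environment variable):

```go
package main

import (
	"io/ioutil"
	"os"
	"os/exec"
)

func main() {
	// An explicitly-empty CLI config file, so `terraform init` cannot pick
	// up a plugin_cache_dir configured on the developer's machine.
	emptyConfig, err := ioutil.TempFile("", "terraform-config-*.tfrc")
	if err != nil {
		panic(err)
	}
	defer os.Remove(emptyConfig.Name())

	cmd := exec.Command("terraform", "init")
	cmd.Env = append(os.Environ(), "TF_CLI_CONFIG_FILE="+emptyConfig.Name())
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run()
}
```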
ioutil.TempFile has a special case where an empty string for its dir
argument is interpreted as a request to automatically look up the system
temporary directory, which is commonly /tmp.
We don't want that behavior here because we're specifically trying to
create the temporary file in the same directory as the file we're hoping
to replace. If the file gets created in /tmp then it might be on a
different device and thus the later atomic rename won't work.
Instead, we'll add our own special case to explicitly use "." when the
given filename is in the current working directory. That overrides the
special automatic behavior of ioutil.TempFile and thus forces the
behavior we need.
This hadn't previously mattered for earlier callers of this code because
they were creating files in subdirectories, but this codepath was failing
for the dependency lock file due to it always being created directly
in the current working directory.
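A minimal sketch of the special case (names are illustrative; the real code lives in the file-replacement helper):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// tempFileNextTo creates a temporary file in the same directory as the
// target so that a later atomic rename stays on the same device.
func tempFileNextTo(target string) (string, error) {
	dir, file := filepath.Split(target)
	if dir == "" {
		// filepath.Split returns an empty dir for a bare filename, and
		// ioutil.TempFile would interpret "" as the system temp directory
		// (commonly /tmp). Force the current working directory instead.
		dir = "."
	}
	f, err := ioutil.TempFile(dir, file+".tmp-")
	if err != nil {
		return "", err
	}
	f.Close()
	return f.Name(), nil
}

func main() {
	name, err := tempFileNextTo(".terraform.lock.hcl")
	fmt.Println(name, err) // temp file is created alongside the target, not in /tmp
}
```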
Unfortunately, since this is a picky implementation detail, I couldn't find
a good way to write a unit test for it without considerable refactoring.
Instead, I verified manually that the temporary filename wasn't in /tmp on
my Linux system, and hope that the inline comment will explain this
situation well enough to avoid an accidental regression in future
maintenance.
* Add note to upgrade guide about provider sensitivity
Now that sensitivity follows attributes that providers mark
as sensitive, add a note about this to the upgrade guide.
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
Because ignore_changes configuration can refer to resource arguments
which are assigned sensitive values, we need to unmark the resource
object before processing.
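A sketch of that unmark/re-mark round trip with go-cty (the object contents are illustrative):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A resource object whose "password" argument carries a sensitive mark.
	obj := cty.ObjectVal(map[string]cty.Value{
		"name":     cty.StringVal("example"),
		"password": cty.StringVal("hunter2").Mark("sensitive"),
	})

	// Unmark before processing ignore_changes, remembering where the
	// marks were...
	unmarked, pathMarks := obj.UnmarkDeepWithPaths()

	// ...process ignore_changes against the unmarked value here...

	// ...then restore the marks on the result.
	result := unmarked.MarkWithPaths(pathMarks)
	fmt.Println(result.GetAttr("password").IsMarked()) // true
}
```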
If a configuration requires a partial provider version (with some parts
unspecified), Terraform considers this as a constrained-to-zero version.
For example, a version constraint of 1.2 will result in an attempt to
install version 1.2.0, even if 1.2.1 is available.
When writing the dependency locks file, we previously would write 1.2.*,
as this is the in-memory representation of 1.2. This would then cause an
error on re-reading the locks file, as this is not a valid constraint
format.
Instead, we now explicitly convert the constraint to its zero-filled
representation before writing the locks file. This ensures that it
correctly round-trips.
Because this change is made in getproviders.VersionConstraintsString, it
also affects the output of the providers sub-command.
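The conversion is roughly the following (an illustrative string-level sketch; the real change is inside getproviders.VersionConstraintsString and works on Terraform's version types):

```go
package main

import (
	"fmt"
	"strings"
)

// zeroFill pads a partial version like "1.2" out to "1.2.0" so that the
// string written to the locks file is a valid, parseable constraint.
func zeroFill(version string) string {
	parts := strings.Split(version, ".")
	for len(parts) < 3 {
		parts = append(parts, "0")
	}
	return strings.Join(parts, ".")
}

func main() {
	fmt.Println(zeroFill("1.2")) // "1.2.0" rather than the in-memory "1.2.*"
}
```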
The Legacy SDK cannot handle missing strings from objects in sets, and
will insert an empty string when planning the missing value. This
subverts the `couldHaveUnknownBlockPlaceholder` check, and causes
errors when `dynamic` is used with NestingSet blocks.
We don't have a separate codepath to handle the internals of
AssertObjectCompatible differently for the legacy SDK, but we can treat
empty strings as null strings within set objects to avoid the failed
assertions.
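A sketch of the normalization applied to set elements (names are illustrative; the real check lives alongside AssertObjectCompatible):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// normalizeLegacyString treats an empty string planned by the legacy SDK
// as equivalent to a null string, so a placeholder inserted for a missing
// attribute doesn't fail the compatibility assertion.
func normalizeLegacyString(v cty.Value) cty.Value {
	if v.Type().Equals(cty.String) && v.IsKnown() && !v.IsNull() && v.AsString() == "" {
		return cty.NullVal(cty.String)
	}
	return v
}

func main() {
	fmt.Println(normalizeLegacyString(cty.StringVal("")).IsNull())        // true
	fmt.Println(normalizeLegacyString(cty.StringVal("keep me")).IsNull()) // false
}
```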
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
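Roughly, assuming an hclog-based logger like the one the logging package wraps (the exact helper names in that package may differ):

```go
package main

import (
	"log"

	hclog "github.com/hashicorp/go-hclog"
)

func main() {
	logger := hclog.New(&hclog.LoggerOptions{Name: "terraform"})

	// One shared writer for everything still using the std "log" package,
	// rather than each test package configuring its own.
	writer := logger.StandardWriter(&hclog.StandardLoggerOptions{InferLevels: true})
	log.SetOutput(writer)
	log.SetFlags(0)

	log.Println("[DEBUG] routed through the single shared writer")
}
```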
Core was previously ignoring JSON-encoded dynamic values, but these are
technically supported, so we must either error or accept the value.
Since we already have the decoder for JSON state, it's minimal effort to
support this on all plugin methods too.
This change also gives providers an easy way to implement the
UpgradeResourceState method. The obvious implementation of returning the
same JSON-encoded value has already tripped up a few providers not using
the legacy SDK, and we should at least have indicated that the value was
being lost.
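A sketch of accepting either encoding when decoding a dynamic value received over the plugin protocol (the stand-in struct and helper below are illustrative, not the actual plugin code):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	ctyjson "github.com/zclconf/go-cty/cty/json"
	ctymsgpack "github.com/zclconf/go-cty/cty/msgpack"
)

// dynamicValue stands in for the protocol's DynamicValue message, which
// carries exactly one of a msgpack or a JSON encoding.
type dynamicValue struct {
	Msgpack []byte
	JSON    []byte
}

// decode accepts either encoding rather than silently ignoring the JSON
// case, mirroring the behaviour described above.
func decode(v dynamicValue, ty cty.Type) (cty.Value, error) {
	switch {
	case len(v.Msgpack) > 0:
		return ctymsgpack.Unmarshal(v.Msgpack, ty)
	case len(v.JSON) > 0:
		return ctyjson.Unmarshal(v.JSON, ty)
	default:
		return cty.NullVal(ty), nil
	}
}

func main() {
	ty := cty.Object(map[string]cty.Type{"id": cty.String})
	val, err := decode(dynamicValue{JSON: []byte(`{"id":"example"}`)}, ty)
	if err != nil {
		panic(err)
	}
	fmt.Println(val.GetAttr("id").AsString()) // "example"
}
```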
A cost estimation error does not actually stop a run, so the run would continue in the background after the CLI exited, causing confusion. This change matches the UI behavior.
For normal provider installation we want to associate each provider with
a selected version number and find a suitable package for that version
that conforms to the official hashes for that release.
Those requirements are very onerous for a provider developer currently
testing a not-yet-released build, though. To allow for that case, this new
CLI configuration feature allows overriding specific providers to refer
to given local filesystem directories.
Any provider overridden in this way is not subject to the usual
restrictions about selected versions or checksum conformance, and
activating an override won't cause any changes to the selections recorded
in the lock file because it's intended to be a temporary setting for one
developer only.
This is, in a sense, a spiritual successor of an old capability we had to
override specific plugins in the CLI configuration file. There were
some vestiges of that left in the main package and CLI config package,
but nothing has actually honored them for several versions now, and
so this commit removes them to avoid confusion with the new mechanism.
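An illustrative sketch only of how such an override can short-circuit the normal installation path; the names below are hypothetical, not the actual installer API:

```go
package main

import "fmt"

// devOverrides maps a provider source address to a local package directory.
type devOverrides map[string]string

// packageDirFor returns the overridden directory for a provider, if any.
// Overridden providers skip version selection, checksum verification, and
// the dependency lock file entirely.
func (o devOverrides) packageDirFor(addr string) (string, bool) {
	dir, ok := o[addr]
	return dir, ok
}

func main() {
	overrides := devOverrides{
		"example.com/awesomecorp/happycloud": "/home/developer/go/bin",
	}
	if dir, ok := overrides.packageDirFor("example.com/awesomecorp/happycloud"); ok {
		fmt.Println("using local development build from", dir)
		return
	}
	fmt.Println("falling back to normal installation")
}
```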
If the provisioner configuration includes sensitive values, it's a
reasonable assumption that we should suppress its log output. Obvious
examples where this makes sense include echoing a secret to a file using
local-exec or remote-exec.
This commit adds tests for both logging output from provisioners with
non-sensitive configuration, and suppressing logs for provisioners with
sensitive values in configuration.
Note that we do not suppress logs if connection info contains sensitive
information, as provisioners should not be logging connection
information under any circumstances.
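A sketch of the suppression decision using go-cty's mark queries (the configuration contents and message text are illustrative):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A local-exec style configuration that interpolates a sensitive value.
	config := cty.ObjectVal(map[string]cty.Value{
		"command": cty.StringVal("echo hunter2 > /etc/app/creds").Mark("sensitive"),
	})

	// If anything in the configuration carries a mark, suppress the
	// provisioner's output rather than risk echoing a secret to the logs.
	if config.ContainsMarked() {
		fmt.Println("(output suppressed due to sensitive value in configuration)")
	}
}
```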
If provisioner configuration or connection info includes sensitive
values, we need to unmark them before calling the provisioner. Failing
to do so causes serialization to error.
Unlike resources, we do not need to capture marked paths here, so we
just discard the marks.
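A sketch of that unmark step using go-cty, with the returned marks discarded as described (the connection values are illustrative):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	connInfo := cty.ObjectVal(map[string]cty.Value{
		"host":     cty.StringVal("10.0.0.1"),
		"password": cty.StringVal("hunter2").Mark("sensitive"),
	})

	// Unlike resources, there is no need to restore these marks later, so
	// the second return value is simply discarded.
	unmarked, _ := connInfo.UnmarkDeep()
	fmt.Println(unmarked.ContainsMarked()) // false; now safe to serialize
}
```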