The "acceptable hashes" for a package is a set of hashes that the upstream
source considers to be good hashes for checking whether future installs
of the same provider version are considered to match this one.
Because the acceptable hashes are a package authentication concern and
they already need to be known (at least in part) to implement the
authenticators, here we add AcceptableHashes as an optional extra method
that an authenticator can implement.
Because these are hashes chosen by the upstream system, the caller must
make its own determination about their trustworthiness. The result of
authentication is likely to be an input to that, for example by
distrusting hashes produced by an authenticator that succeeds but doesn't
report having validated anything.
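As a rough illustration of this optional-extra-method pattern (the
interface and function names below are invented for the example, not the
real ones in the package):

    package getproviders

    // Authenticator stands in for the package authenticator interface used
    // in this sketch; the real interface differs.
    type Authenticator interface {
        AuthenticatePackage(localLocation string) error
    }

    // HashReporter is the optional extension: an authenticator that also
    // knows the upstream's acceptable hashes can implement it in addition
    // to the base interface.
    type HashReporter interface {
        Authenticator

        // AcceptableHashes returns the hashes that the upstream source
        // considers valid for future installs of the same package version.
        AcceptableHashes() []string
    }

    // acceptableHashes collects hashes from any authenticators that offer
    // them, leaving the trust decision to the caller.
    func acceptableHashes(auths []Authenticator) []string {
        var ret []string
        for _, a := range auths {
            if hr, ok := a.(HashReporter); ok {
                ret = append(ret, hr.AcceptableHashes()...)
            }
        }
        return ret
    }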
This is the pre-existing hashing scheme that was initially built for
releases.hashicorp.com and then later reused for the provider registry
protocol, which takes a SHA256 hash of the official distribution .zip file
and formats it as lowercase hex.
This is a non-ideal hash scheme because it works only for
PackageLocalArchive locations, and so we can't verify package directories
on local disk against such hashes. However, the registry protocol is now
a compatibility constraint and so we're going to need to support this
hashing scheme for the foreseeable future.
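As a concrete sketch of that scheme (the function name and the archive
file name in main are invented for the example):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // legacyZipHash returns the SHA-256 of a provider distribution .zip
    // file, formatted as lowercase hex, matching the releases.hashicorp.com
    // and registry protocol convention described above.
    func legacyZipHash(zipPath string) (string, error) {
        f, err := os.Open(zipPath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        // hex.EncodeToString already produces lowercase hex.
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        sum, err := legacyZipHash("terraform-provider-example_1.0.0_linux_amd64.zip")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(sum)
    }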
* Add creation test and simplify in-place test
* Add deletion test
* Start adding marking from state
Start storing paths that should be marked
when pulled out of state. Implements deep
copy for attr paths. This commit also includes some
comment noise from investigations, and a fix for the diff test.
* Fix apply stripping marks
* Expand diff tests
* Basic apply test
* Update comments on equality checks to clarify current understanding
* Add JSON serialization for sensitive paths
We need to serialize a slice of cty.Path values to be used to re-mark
the sensitive values of a resource instance when loading the state file.
Paths consist of a list of steps, each of which may be either getting an
attribute value by name, or indexing into a collection by string or
number.
To serialize these without building a complex parser for a compact
string form, we render a nested array of small objects, like so:
    [
      [
        { "type": "get_attr", "value": "foo" },
        { "type": "index", "value": { "type": "number", "value": 2 } }
      ]
    ]
The above example is equivalent to the path `foo[2]`. A rough Go sketch
of this encoding appears after this list of changes.
* Format diffs with map types
Comparisons need unmarked values to operate on,
so create unmarked values for those operations. Additionally,
change the diff to cover map types.
* Remove debugging printing
* Fix bug with marking non-sensitive values
When pulling a sensitive value from state,
we were previously using those marks to re-mark
the planned new value, but that new value
might *not* be sensitive, so let's not do that.
* Fix apply test
Apply was not passing the second state
through to the third pass at apply
* Consistency in checking for length of paths vs inspecting into value
* In apply, don't mark with before paths
* AttrPaths test coverage for DeepCopy
* Revert format changes
Reverts format changes in format/diff for this
branch so those changes can be discussed on a separate PR
* Refactor name of AttrPaths to AttrSensitivePaths
* Rename AttributePaths/attributePaths for naming consistency
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
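The path encoding described under "Add JSON serialization for sensitive
paths" above could be sketched roughly like this; the helper names are
invented and the real implementation may differ in detail:

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/zclconf/go-cty/cty"
        ctyjson "github.com/zclconf/go-cty/cty/json"
    )

    // pathStep is one element of the serialized form shown in the example.
    type pathStep struct {
        Type  string          `json:"type"`
        Value json.RawMessage `json:"value"`
    }

    // encodePaths renders a slice of cty.Path values as a nested array of
    // small step objects.
    func encodePaths(paths []cty.Path) ([]byte, error) {
        out := make([][]pathStep, 0, len(paths))
        for _, path := range paths {
            steps := make([]pathStep, 0, len(path))
            for _, rawStep := range path {
                switch step := rawStep.(type) {
                case cty.GetAttrStep:
                    name, err := json.Marshal(step.Name)
                    if err != nil {
                        return nil, err
                    }
                    steps = append(steps, pathStep{Type: "get_attr", Value: name})
                case cty.IndexStep:
                    // Index keys carry their own type (string or number),
                    // so encode the key's type alongside its value.
                    keyType, err := ctyjson.MarshalType(step.Key.Type())
                    if err != nil {
                        return nil, err
                    }
                    keyVal, err := ctyjson.Marshal(step.Key, step.Key.Type())
                    if err != nil {
                        return nil, err
                    }
                    val, err := json.Marshal(map[string]json.RawMessage{
                        "type":  keyType,
                        "value": keyVal,
                    })
                    if err != nil {
                        return nil, err
                    }
                    steps = append(steps, pathStep{Type: "index", Value: val})
                }
            }
            out = append(out, steps)
        }
        return json.Marshal(out)
    }

    func main() {
        // Equivalent to the path foo[2] from the example above.
        paths := []cty.Path{
            cty.GetAttrPath("foo").Index(cty.NumberIntVal(2)),
        }
        buf, err := encodePaths(paths)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(buf))
    }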
In order to save any changes to lifecycle options, we need to record
those changes during refresh; otherwise they would only be updated when
there is a change in the resource to be applied.
Go modules are well understood and supported now, and since our build
pipeline no longer uses the vendored packages, we can remove the extra
overhead of maintaining these files.
This evaluation was required when refresh ran in a separate walk and
managed resources were only partly handled by configuration. Now that we
have the correct dependency information available when refreshing
configured resources, we can update their state accordingly. Since
orphaned resources are not refreshed, they can retain their stored
dependencies for correct ordering.
This also prevents users from introducing cycles with nodes they can't
"see", since only orphaned nodes will retain their stored dependencies,
and the remaining nodes will be updated according to the configuration.
The bot currently seems to be running into some operational problems that are
creating noise for provider development teams by potentially migrating issues
multiple times.
This is just a tactical change to stop the annoying symptoms right now, to
give some time to figure out what's actually going on here.
Despite not requiring the configuration for any other reason, the taint
subcommand should not execute if the required_version constraints cannot
be met. Doing so can result in an undesirable state file upgrade.
This adds a test for GetInputVariable, and includes
a variable with a "sensitive" attribute in the configuration,
to verify that the value is marked as sensitive.
Previous deprecations only included direct assignment of template-only
expressions to arguments. That is, this was not deprecated:
    locals {
      foo = ["${var.foo}"]
    }
This commit uses hclsyntax.VisitAll to detect and show deprecations for
all template-only expressions, no matter how deep they are in a given
expression.
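As a rough sketch of that approach, assuming that interpolation-only
templates parse as *hclsyntax.TemplateWrapExpr (the function name and
warning text below are invented for the example):

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
    )

    // warnDeprecatedInterpolations walks the whole expression tree and emits
    // a warning for every expression that is just a single wrapped
    // interpolation, such as "${var.foo}", no matter how deeply nested.
    func warnDeprecatedInterpolations(expr hclsyntax.Expression) hcl.Diagnostics {
        return hclsyntax.VisitAll(expr, func(node hclsyntax.Node) hcl.Diagnostics {
            wrap, ok := node.(*hclsyntax.TemplateWrapExpr)
            if !ok {
                return nil
            }
            return hcl.Diagnostics{
                {
                    Severity: hcl.DiagWarning,
                    Summary:  "Interpolation-only expression is deprecated",
                    Detail:   "Remove the wrapping \"${ ... }\" and write the inner expression directly.",
                    Subject:  wrap.Range().Ptr(),
                },
            }
        })
    }

    func main() {
        src := []byte("foo = [\"${var.foo}\"]\n")
        f, parseDiags := hclsyntax.ParseConfig(src, "example.tf", hcl.InitialPos)
        if parseDiags.HasErrors() {
            panic(parseDiags.Error())
        }
        body := f.Body.(*hclsyntax.Body)
        for _, attr := range body.Attributes {
            for _, diag := range warnDeprecatedInterpolations(attr.Expr) {
                fmt.Println(diag.Error())
            }
        }
    }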
The providers schema command is using the Config.ProviderTypes method,
which had not been kept up to date with the changes to provider
requirements detection made in Config.ProviderRequirements. This
resulted in any currently-unused providers being omitted from the
output.
This commit changes the ProviderTypes method to use the same underlying
logic as ProviderRequirements, which ensures that `required_providers`
blocks are taken into account.
Includes an integration test case to verify that this fixes the providers
schema command bug.
Now that we don't have to handle data sources that may or may not have
been updated during a refresh phase, and the plan phase can save the
data source to the refreshed state, we can remove a lot of the logic
involved in detecting whether the data source needs to be planned or
not.
When there is no separate refresh phase, we must always attempt to read
the data source during planning; the only conditions are that the
configuration is known and that we are not waiting on any of its
dependencies. If the data source is read during plan, we can now save
that directly to the refreshed state, and don't need to smuggle the
value as a change to be saved during apply.
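A simplified sketch of that plan-time decision (the function and
parameter names are invented; the real logic in the data-resource graph
node is more involved):

    package plan

    // shouldReadDuringPlan reports whether a data source can be read during
    // planning: its configuration must be wholly known, and none of its
    // dependencies may still be waiting on changes. Otherwise the read is
    // deferred rather than done during plan.
    func shouldReadDuringPlan(configIsKnown bool, pendingDependencies []string) bool {
        if !configIsKnown {
            return false
        }
        return len(pendingDependencies) == 0
    }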