random_shuffle takes a list of strings and returns a new list with the
same items in a random permutation.
Optionally the result list may be a different length than the input
list: a result shorter than the input excludes some items, while a
longer result repeats some items, but never more often than the number
of input items.
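Roughly, the resizing rule could look like this in standalone Go
(shuffleStrings and its arguments are illustrative, not the provider's
actual code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffleStrings returns the input in random order, resized to resultCount:
// a shorter result drops the tail of one permutation; a longer result
// appends further whole permutations, so items repeat only once per pass.
func shuffleStrings(rnd *rand.Rand, input []string, resultCount int) []string {
	if len(input) == 0 {
		return nil
	}
	result := make([]string, 0, resultCount)
	for len(result) < resultCount {
		for _, i := range rnd.Perm(len(input)) {
			if len(result) == resultCount {
				break
			}
			result = append(result, input[i])
		}
	}
	return result
}

func main() {
	rnd := rand.New(rand.NewSource(42)) // fixed seed for a repeatable demo
	fmt.Println(shuffleStrings(rnd, []string{"a", "b", "c"}, 5))
}
```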
This resource generates a cryptographically-strong sequence of random
bytes and provides them as base64, hexadecimal and decimal string
representations.
It is intended to be used for generating unique ids for resources
elsewhere in the configuration, and thus the "keepers" would be set to
any ForceNew attributes of the target resources, so that a new id is
generated each time a new resource is created.
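The generation and the three representations, sketched in standalone Go
(the byte length and base64 variant used here are assumptions):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"math/big"
)

func main() {
	// Cryptographically-strong random bytes, as the resource generates.
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}

	// The three string representations mentioned above.
	fmt.Println("base64:", base64.RawURLEncoding.EncodeToString(b))
	fmt.Println("hex:   ", hex.EncodeToString(b))
	fmt.Println("dec:   ", new(big.Int).SetBytes(b).String())
}
```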
This provider will have logical resources that allow Terraform to "manage"
randomness as a resource, producing random numbers on create and then
retaining the outcome in the state so that it will remain consistent
until something explicitly triggers generating new values.
Managing randomness in this way allows configurations to use random
values for things like distribution and unique ids without causing
"perma-diffs".
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.
As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.
If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
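Schematically, the special case amounts to this (a toy sketch; the real
walk and error types in core are more involved):

```go
package main

import (
	"errors"
	"fmt"
)

// errMissingResource stands in for the "unknown resource referenced"
// failure that interpolation normally treats as fatal.
var errMissingResource = errors.New("resource not yet in state")

func interpolateProviderConfig() (map[string]string, error) {
	// Pretend the config references an as-yet-uncreated resource.
	return nil, errMissingResource
}

// inputWalkProvider sketches the tolerance described above: on the input
// walk only, a missing resource variable skips the provider rather than
// aborting; later walks (refresh, plan) surface the error if it remains
// unresolved.
func inputWalkProvider() error {
	cfg, err := interpolateProviderConfig()
	if errors.Is(err, errMissingResource) {
		fmt.Println("skipping provider input: config not yet resolvable")
		return nil
	}
	if err != nil {
		return err
	}
	_ = cfg // ...ask the user for any missing input using cfg...
	return nil
}

func main() {
	if err := inputWalkProvider(); err != nil {
		fmt.Println("error:", err)
	}
}
```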
A companion to the null_resource resource, this is here primarily to
enable quick manual testing of data source workflows without depending
on any external services.
The "inputs" map gets copied to the computed "outputs" map on read,
"rand" gives a random number to exercise cases with constantly-changing
values (an anti-pattern!), and "has_computed_default" is settable in
config but computed if not set.
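In helper/schema terms the behavior reads roughly like this (a sketch;
the exact schema and defaults in the real null_data_source may differ):

```go
package nullsource

import (
	"fmt"
	"math/rand"

	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceNull() *schema.Resource {
	return &schema.Resource{
		Read: dataSourceNullRead,

		Schema: map[string]*schema.Schema{
			"inputs": {
				Type:     schema.TypeMap,
				Optional: true,
			},
			"outputs": {
				Type:     schema.TypeMap,
				Computed: true,
			},
			"rand": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"has_computed_default": {
				Type:     schema.TypeString,
				Optional: true,
				Computed: true, // settable in config, computed if not set
			},
		},
	}
}

func dataSourceNullRead(d *schema.ResourceData, meta interface{}) error {
	// Copy the "inputs" map to the computed "outputs" map.
	if err := d.Set("outputs", d.Get("inputs")); err != nil {
		return err
	}

	// A constantly-changing value, deliberately an anti-pattern.
	d.Set("rand", fmt.Sprintf("%d", rand.Int63()))

	if d.Get("has_computed_default") == "" {
		d.Set("has_computed_default", "default")
	}

	d.SetId("static")
	return nil
}
```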
Internally a data source read is represented as a creation diff for the
resource, but in the UI we'll show it as a distinct icon and color so that
the user can more easily understand that these operations won't affect
any real infrastructure.
Unfortunately by the time we get to formatting the plan in the UI we
only have the resource names to work with, and can't get at the original
resource mode. Thus we're forced to infer the resource mode by exploiting
knowledge of the naming scheme.
New resources logically don't have "old values" for their attributes, so
showing them as updates from the empty string is misleading and confusing.
Instead, we'll skip showing the old value in a creation diff.
Data resources don't have ids when they refresh, so we'll skip showing the
"(ID: ...)" indicator for these. Showing it with no id makes it look
like something is broken.
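Taken together, the display rules sketch out like this (function names
and the exact symbols are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// isDataRead infers from the name alone, as described above, whether a
// plan entry is a data source read.
func isDataRead(name string) bool {
	return strings.HasPrefix(name, "data.")
}

// formatHeader sketches the header rules: data reads get their own symbol
// and no "(ID: ...)" indicator, since there is no id to show.
func formatHeader(name, id string) string {
	switch {
	case isDataRead(name):
		return "<= " + name
	case id != "":
		return fmt.Sprintf("+ %s (ID: %s)", name, id)
	default:
		return "+ " + name
	}
}

// formatAttr sketches the creation-diff rule: show only the new value,
// since a new resource has no old value to compare against.
func formatAttr(creating bool, k, oldV, newV string) string {
	if creating {
		return fmt.Sprintf("    %s: %q", k, newV)
	}
	return fmt.Sprintf("    %s: %q => %q", k, oldV, newV)
}

func main() {
	fmt.Println(formatHeader("data.terraform_remote_state.vpc", ""))
	fmt.Println(formatHeader("aws_instance.web", ""))
	fmt.Println(formatAttr(true, "ami", "", "ami-12345"))
}
```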
Since the data resource lifecycle contains no steps to deal with tainted
instances, we must make sure that they never get created.
Doing this in the command layer is not ideal, but this is currently
the only layer that has enough information to make this decision and so
this simple solution was preferred over a more disruptive refactoring,
under the assumption that this taint functionality eventually gets
reworked in terms of StateFilter anyway.
The ResourceAddress struct grows a new "Mode" field to match
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.
Allows -target to be applied to data sources, although that is arguably
not a very useful thing to do. Other future applications of resource
addressing, like the state plumbing commands, may make better use of this.
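A sketch of the struct and the parser change (the real ResourceAddress
also carries index and module path information):

```go
package main

import (
	"fmt"
	"strings"
)

type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// ResourceAddress sketches the struct with its new Mode field.
type ResourceAddress struct {
	Mode ResourceMode
	Type string
	Name string
}

// parseResourceAddress shows the parser change described above: a "data."
// prefix sets the Mode field and is stripped before the rest is parsed.
func parseResourceAddress(s string) (*ResourceAddress, error) {
	addr := &ResourceAddress{Mode: ManagedResourceMode}
	if strings.HasPrefix(s, "data.") {
		addr.Mode = DataResourceMode
		s = strings.TrimPrefix(s, "data.")
	}
	parts := strings.SplitN(s, ".", 2)
	if len(parts) != 2 {
		return nil, fmt.Errorf("malformed resource address %q", s)
	}
	addr.Type, addr.Name = parts[0], parts[1]
	return addr, nil
}

func main() {
	addr, _ := parseResourceAddress("data.terraform_remote_state.vpc")
	fmt.Printf("%+v\n", addr) // &{Mode:1 Type:terraform_remote_state Name:vpc}
}
```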
Previously data resources would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
The handling of data "orphans" is simpler than for managed resources
because the only thing we need to deal with is our own state, and the
validation pass guarantees that by the time we get to refresh or apply
the instance state is no longer needed by any other resources and so
we can safely drop it with no fanfare.
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.
For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
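Schematically the dispatch looks like this (a toy model; the real
EvalTree builds graph evaluation nodes, not step names):

```go
package main

import "fmt"

type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// evalTree is a toy model of the dispatch: each mode gets its own
// lifecycle, shown here as step names instead of real graph nodes.
func evalTree(mode ResourceMode, configComputed bool) []string {
	switch mode {
	case ManagedResourceMode:
		return []string{"refresh", "diff", "apply"}
	case DataResourceMode:
		if configComputed {
			// Full config unknown until apply: plan a deferred read.
			return []string{"plan read", "read at apply time"}
		}
		// Config fully known: read early.
		return []string{"read during refresh"}
	}
	panic("unsupported resource mode")
}

func main() {
	fmt.Println(evalTree(DataResourceMode, false))
	fmt.Println(evalTree(DataResourceMode, true))
}
```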
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
what mode of resource we're visiting.
Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
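The dispatch amounts to this (a sketch with a stub provider; the real
interface is terraform.ResourceProvider):

```go
package main

import "fmt"

type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// provider captures just the two validation methods discussed above.
type provider interface {
	ValidateResource(t string, cfg map[string]interface{}) ([]string, []error)
	ValidateDataSource(t string, cfg map[string]interface{}) ([]string, []error)
}

// validate dispatches on the mode of the resource being visited.
func validate(p provider, mode ResourceMode, t string, cfg map[string]interface{}) ([]string, []error) {
	if mode == DataResourceMode {
		return p.ValidateDataSource(t, cfg)
	}
	return p.ValidateResource(t, cfg)
}

// stubProvider just records which namespace was consulted.
type stubProvider struct{}

func (stubProvider) ValidateResource(t string, _ map[string]interface{}) ([]string, []error) {
	return []string{"managed: " + t}, nil
}

func (stubProvider) ValidateDataSource(t string, _ map[string]interface{}) ([]string, []error) {
	return []string{"data: " + t}, nil
}

func main() {
	warns, _ := validate(stubProvider{}, DataResourceMode, "terraform_remote_state", nil)
	fmt.Println(warns) // [data: terraform_remote_state]
}
```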
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
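A sketch of the parse with the mode discriminator (illustrative; the real
state key parser has more cases to handle):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// parseStateKey sketches the parse described above: an optional "data."
// prefix picks the namespace, then "TYPE.NAME" follows, with an optional
// ".N" index for counted resources.
func parseStateKey(key string) (mode ResourceMode, typ, name string, index int, err error) {
	mode, index = ManagedResourceMode, -1
	if strings.HasPrefix(key, "data.") {
		mode = DataResourceMode
		key = strings.TrimPrefix(key, "data.")
	}
	parts := strings.Split(key, ".")
	if len(parts) == 3 {
		if index, err = strconv.Atoi(parts[2]); err != nil {
			return 0, "", "", 0, fmt.Errorf("malformed index in %q", key)
		}
		parts = parts[:2]
	}
	if len(parts) != 2 {
		return 0, "", "", 0, fmt.Errorf("malformed state key %q", key)
	}
	return mode, parts[0], parts[1], index, nil
}

func main() {
	fmt.Println(parseStateKey("data.terraform_remote_state.vpc"))
	fmt.Println(parseStateKey("aws_instance.web.2"))
}
```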
For backward compatibility we will, for the moment, continue to support
using the data sources that were formerly logical resources as resources,
but we want to warn the user about it since this support is likely to be
removed in the future.
This is done by adding a new "deprecation message" feature to
schema.Resource, but for the moment this is done as an internal feature
(not usable directly by plugins) so that we can collect additional
use-cases and design a more general interface before creating a
compatibility constraint.
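Schematically (field and function names here are illustrative of the
internal feature, not the exact helper/schema code):

```go
package main

import "fmt"

type Resource struct {
	// DeprecationMessage, if set, marks the whole resource as deprecated.
	// For now this is internal-only, not settable by plugins directly.
	DeprecationMessage string
}

// validate surfaces the message as a warning when the resource is used.
func (r *Resource) validate(name string) (warns []string) {
	if r.DeprecationMessage != "" {
		warns = append(warns, fmt.Sprintf("%s: %s", name, r.DeprecationMessage))
	}
	return warns
}

func main() {
	r := &Resource{DeprecationMessage: "please use the data source form instead"}
	fmt.Println(r.validate("terraform_remote_state"))
}
```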
As a first example of a real-world data source, the pre-existing
terraform_remote_state resource is adapted to be a data source. The
original resource is shimmed to wrap the data source for backward
compatibility.
Historically we've had some "read-only" and "logical" resources. With the
addition of the data source concept these will gradually become data
sources, but we need to retain backward compatibility with existing
configurations that use the now-deprecated resources.
This shim is intended to allow us to easily create a resource from a
data source implementation. It adjusts the schema as needed and adds
stub Create and Delete implementations.
This would ideally also produce a deprecation warning whenever such a
shimmed resource is used, but the schema system doesn't currently have
a mechanism for resource-specific validation, so that remains just a TODO
for the moment.
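The shape of the shim, sketched with a minimal stand-in for
helper/schema's Resource (the real shim also deep-copies and adjusts the
schema):

```go
package main

import "fmt"

// Resource stands in for helper/schema's Resource, reduced to the CRUD
// function fields that matter for the shim.
type Resource struct {
	Create func() error
	Read   func() error
	Delete func() error
}

// dataSourceResourceShim wraps a data source implementation as a resource.
func dataSourceResourceShim(ds *Resource) *Resource {
	r := &Resource{Read: ds.Read}

	// Stub Create: "creating" the shimmed resource is just a read.
	r.Create = func() error { return r.Read() }

	// Stub Delete: nothing real to destroy; dropping state suffices.
	r.Delete = func() error { return nil }

	return r
}

func main() {
	ds := &Resource{Read: func() error { fmt.Println("read!"); return nil }}
	shim := dataSourceResourceShim(ds)
	shim.Create() // read!
	shim.Delete()
}
```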
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, ValidateResource, Diff and Apply
(respectively) for managed resources.
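As a sketch, with stand-ins for the real terraform package types (treat
the signatures as approximate rather than authoritative):

```go
package sketch

// Stand-ins for the real terraform package types; the actual structs
// carry much more detail.
type (
	ResourceConfig struct{} // raw + interpolated resource configuration
	InstanceInfo   struct{} // identifies the instance being operated on
	InstanceDiff   struct{} // planned attribute changes
	InstanceState  struct{} // attribute values recorded in state
)

// DataSource describes one data source a provider supports, as Resources()
// does for managed resources.
type DataSource struct {
	Name string
}

// ResourceProvider shows only the additions described above, modeled on
// the existing managed-resource methods.
type ResourceProvider interface {
	// ...existing methods (Resources, ValidateResource, Diff, Apply,
	// Refresh, ...) remain unchanged...

	// DataSources lists the data sources this provider supports.
	DataSources() []DataSource

	// ValidateDataSource is the data source analog of ValidateResource.
	ValidateDataSource(t string, c *ResourceConfig) ([]string, []error)

	// ReadDataDiff produces the "creation diff" representing a pending
	// read, so it can flow through the plan.
	ReadDataDiff(info *InstanceInfo, c *ResourceConfig) (*InstanceDiff, error)

	// ReadDataApply performs the actual read at apply time.
	ReadDataApply(info *InstanceInfo, d *InstanceDiff) (*InstanceState, error)
}
```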
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which covers all of the callers
involved when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
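Schematically, the routing the implementer never sees (a toy model with
invented types):

```go
package main

import "fmt"

// ResourceData stands in for schema.ResourceData: a bag of attributes.
type ResourceData struct{ attrs map[string]string }

// Resource carries only the Read function a data source implementer writes.
type Resource struct {
	Read func(d *ResourceData) error
}

// readDataApply sketches the routing described above: the "apply" of a
// data resource, as seen by core, is implemented by calling Read.
func (r *Resource) readDataApply(d *ResourceData) (*ResourceData, error) {
	if err := r.Read(d); err != nil {
		return nil, err
	}
	return d, nil
}

func main() {
	r := &Resource{Read: func(d *ResourceData) error {
		d.attrs["content"] = "hello" // pretend we read something remote
		return nil
	}}
	out, _ := r.readDataApply(&ResourceData{attrs: map[string]string{}})
	fmt.Println(out.attrs)
}
```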
This allows ${data.TYPE.NAME.FIELD} interpolation syntax at the
configuration level, though since there is no special handling of them
in the core package this currently just acts as an alias for
${TYPE.NAME.FIELD}.
This allows the config loader to read "data" blocks from the config and
turn them into DataSource objects.
This just reads the data from the config file. It doesn't validate the
data nor do anything useful with it.
Previously resources were assumed to always support the full set of
create, read, update and delete operations, and to participate fully in
Terraform's resource management lifecycle.
Data sources introduce a new kind of resource that only supports the
"read" operation. To support this, a new "Mode" field is added to
the Resource concept within the config layer, which can be set to
ManagedResourceMode (to indicate the only mode previously possible) or
DataResourceMode (to indicate that only "read" is supported).
To support both managed and data resources in the tests, the
stringification of resources in config_string.go is adjusted slightly
to use the Id() method rather than the unusual type[name] serialization
from before, causing a simple mechanical adjustment to the loader tests'
expected result strings.
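A sketch of the Mode field and the Id() stringification (the real
config.Resource carries more fields):

```go
package main

import "fmt"

// ResourceMode distinguishes the two kinds of resource described above.
type ResourceMode int

const (
	ManagedResourceMode ResourceMode = iota
	DataResourceMode
)

// Resource sketches the relevant parts of the config-layer struct.
type Resource struct {
	Mode ResourceMode
	Type string
	Name string
}

// Id returns the unique identifier, namespaced by mode; this is the
// stringification the loader tests now rely on.
func (r *Resource) Id() string {
	switch r.Mode {
	case DataResourceMode:
		return fmt.Sprintf("data.%s.%s", r.Type, r.Name)
	default:
		return fmt.Sprintf("%s.%s", r.Type, r.Name)
	}
}

func main() {
	fmt.Println((&Resource{Mode: ManagedResourceMode, Type: "aws_instance", Name: "web"}).Id())
	fmt.Println((&Resource{Mode: DataResourceMode, Type: "terraform_remote_state", Name: "vpc"}).Id())
}
```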
As requested in #4822, add support for a KMS Key ID (ARN) for DB
instances.
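The provider-side schema change presumably looks something like this (a
sketch; the exact flags on the attribute are assumptions):

```go
package aws

import "github.com/hashicorp/terraform/helper/schema"

// Sketch of the new attribute as one entry in the aws_db_instance schema.
var dbInstanceKMSKey = map[string]*schema.Schema{
	"kms_key_id": {
		Type:     schema.TypeString,
		Optional: true,
		Computed: true, // AWS picks a default key for encrypted storage
		ForceNew: true, // the key cannot be changed in place
	},
}
```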
```
make testacc TEST=./builtin/providers/aws \
TESTARGS='-run=TestAccAWSDBInstance_kmsKey' 2>~/tf.log
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /vendor/)
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSDBInstance_kmsKey -timeout 120m
=== RUN TestAccAWSDBInstance_basic
--- PASS: TestAccAWSDBInstance_basic (587.37s)
=== RUN TestAccAWSDBInstance_kmsKey
--- PASS: TestAccAWSDBInstance_kmsKey (625.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 1212.684s
```