Commit Graph

10631 Commits

Author SHA1 Message Date
Mitchell Hashimoto a992860b8d providers/aws: key_pair import 2016-05-16 10:13:20 -07:00
Mitchell Hashimoto 4e3488afb8 providers/aws: customer gateway import 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 2a30178413 providers/aws: flow log import 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto f6b77a6c02 providers/aws: import network ACLs 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 2d5745328b providers/aws: import main route table association 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto ab7b5dab2d providers/aws: route tables import associations 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto a1035804d4 providers/aws: route table import should ignore default rule 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 08b7f67227 providers/aws: route table import 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto a4e48b19c0 providers/aws: ENI import 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 9cdbed11ff providers/aws: ebs volume and autoscaling group 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 884980da1a providers/aws: instance, nat, internet gateway 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 830708a882 providers/aws: elb 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 91938cf55f providers/aws: resource aws_subnet import 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto b75d5bb46d providers/aws: vpc dhcp options 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto da353c3637 aws/aws_vpc: import 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto 420e13d2f2 providers/aws: eip uses passthrough importstate 2016-05-16 10:03:57 -07:00
Mitchell Hashimoto b7d4767dd6 helper/schema: pass through import state func 2016-05-16 10:03:57 -07:00
clint shryock b9d0e14d2a provider/aws: Update Lambda tests for more random names 2016-05-16 10:31:46 -05:00
Aki Hänninen fce7aa483d Add version_id attribute for aws_s3_bucket_object (#6677) 2016-05-16 08:49:59 -05:00
James Nugent ffcf6cf6f7 Merge pull request #6675 from atward/build_check
Don't assume dev platform was built
2016-05-15 23:26:12 -05:00
Adam Ward 4a29be7b50 Don't assume dev platform was built 2016-05-16 10:00:44 +10:00
Joe Topjian 76e1a8ffb4 Update CHANGELOG.md 2016-05-14 21:56:18 -05:00
Joe Topjian f69d95d01f Merge pull request #6579 from Fodoj/reassociate-fip-on-update
provider/openstack: Reassociate FIP on network changes
2016-05-14 21:55:16 -05:00
Joe Topjian 94d27ba3ea Update CHANGELOG.md 2016-05-14 20:46:01 -05:00
Joe Topjian b53c74a9ae Merge pull request #6279 from ZZelle/support-client-cert
provider/openstack: Support client certificates
2016-05-14 20:44:57 -05:00
Martin Atkins feafc94dde website: docs for the "random" provider 2016-05-14 16:49:10 -07:00
Martin Atkins eec6c88fd2 provider/random: random_shuffle resource
random_shuffle takes a list of strings and returns a new list with the
same items in a random permutation.

Optionally the result list can be a different length than the input
list: a shorter result excludes some of the input items, while a longer
result repeats some items, though never more often than the number of
input items.
2016-05-14 16:48:45 -07:00
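A minimal usage sketch for random_shuffle as described above; the argument
and attribute names (input, result_count, result) and the example values
are assumptions based on the resource's schema rather than on this commit
message itself:

    resource "random_shuffle" "az" {
      # List to permute; the values here are placeholders.
      input        = ["us-west-1a", "us-west-1b", "us-west-1c"]

      # Request a shorter result than the input, so one item is excluded.
      result_count = 2
    }

    # The shuffled list is exposed as "${random_shuffle.az.result}" and
    # can be interpolated anywhere a list of strings is expected.
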
Martin Atkins b1de477691 provider/random: random_id resource
This resource generates a cryptographically-strong set of bytes and
provides them as base64, hexadecimal and decimal string representations.
It is intended to be used for generating unique ids for resources
elsewhere in the configuration, and thus the "keepers" would be set to
any ForceNew attributes of the target resources, so that a new id is
generated each time the target resource is replaced.
2016-05-14 15:26:42 -07:00
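A sketch of the "keepers" pattern described above; the byte_length and hex
names, and the var.ami_id variable, are illustrative assumptions rather
than details taken from the commit message:

    resource "random_id" "server" {
      # Regenerate the id whenever the referenced AMI changes, which in
      # turn forces resources built from this id to be replaced.
      keepers = {
        ami_id = "${var.ami_id}"
      }

      byte_length = 8
    }

    # The hexadecimal representation, "${random_id.server.hex}", can then
    # be used to build a unique name for another resource.
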
Martin Atkins 3e34ddbf38 New "random" provider, representing randomness
This provider will have logical resources that allow Terraform to "manage"
randomness as a resource, producing random numbers on create and then
retaining the outcome in the state so that it will remain consistent
until something explicitly triggers generating new values.

Managing randomness in this way allows configurations to do things like
random distributions and ids without causing "perma-diffs".
2016-05-14 15:26:38 -07:00
Martin Atkins 453fc505f4 core: Tolerate missing resource variables during input walk
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.

As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.

If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
2016-05-14 09:25:03 -07:00
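The situation being tolerated looks roughly like this in configuration;
the provider and resource names are placeholders, not taken from the
commit:

    # During the input walk this interpolation may refer to a resource
    # that is entirely absent from the state; instead of failing, input
    # is now skipped for this provider and the reference is resolved
    # later, during refresh or plan.
    provider "mycloud" {
      endpoint = "${aws_instance.api_host.public_ip}"
    }
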
Martin Atkins f95dccf1b3 provider/null: null_data_source data source
A companion to the null_resource resource, this is here primarily to
enable quick manual testing of data source workflows without depending
on any external services.

The "inputs" map gets copied to the computed "outputs" map on read,
"rand" gives a random number to exercise cases with constantly-changing
values (an anti-pattern!), and "has_computed_default" is settable in
config but computed if not set.
2016-05-14 08:26:37 -07:00
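A sketch exercising the attributes named above (inputs, outputs, rand);
the map index syntax in the comment reflects the interpolation language
of this era and is an assumption:

    data "null_data_source" "values" {
      inputs = {
        greeting = "hello"
      }
    }

    # After the read, "outputs" mirrors "inputs", so the value is
    # available as "${data.null_data_source.values.outputs["greeting"]}",
    # while "rand" yields a constantly-changing number for testing diffs.
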
Martin Atkins 2ca10ad962 command: Show data source reads differently in plans
Internally a data source read is represented as a creation diff for the
resource, but in the UI we'll show it as a distinct icon and color so that
the user can more easily understand that these operations won't affect
any real infrastructure.

Unfortunately by the time we get to formatting the plan in the UI we
only have the resource names to work with, and can't get at the original
resource mode. Thus we're forced to infer the resource mode by exploiting
knowledge of the naming scheme.
2016-05-14 08:26:37 -07:00
Martin Atkins bfee4b0295 command: don't show old values for create diffs in plan
New resources logically don't have "old values" for their attributes, so
showing them as updates from the empty string is misleading and confusing.

Instead, we'll skip showing the old value in a creation diff.
2016-05-14 08:26:37 -07:00
Martin Atkins 5d27a5b3e2 command: Show id only when refreshing managed resources
Data resources don't have ids when they refresh, so we'll skip showing the
"(ID: ...)"  indicator for these. Showing it with no id makes it look
like something is broken.
2016-05-14 08:26:37 -07:00
Martin Atkins 60c24e3319 command: Prevent data resources from being tainted
Since the data resource lifecycle contains no steps to deal with tainted
instances, we must make sure that they never get created.

Doing this out in the command layer is not the best, but this is currently
the only layer that has enough information to make this decision and so
this simple solution was preferred over a more disruptive refactoring,
under the assumption that this taint functionality eventually gets
reworked in terms of StateFilter anyway.
2016-05-14 08:26:37 -07:00
Martin Atkins 61ab8bf39a core: ResourceAddress supports data resources
The ResourceAddress struct grows a new "Mode" field to match with
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.

This allows -target to be applied to data sources, although that is
arguably not a very useful thing to do. Other future uses of resource
addressing, such as the state plumbing commands, may make better use of
this.
2016-05-14 08:26:36 -07:00
Martin Atkins afc7ec5ac0 core: Destroy data resources with "terraform destroy"
Previously they would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
2016-05-14 08:26:36 -07:00
Martin Atkins 4d50f22a23 core: separate lifecycle for data resource "orphans"
The handling of data "orphans" is simpler than for managed resources
because the only thing we need to deal with is our own state, and the
validation pass guarantees that by the time we get to refresh or apply
the instance state is no longer needed by any other resources and so
we can safely drop it with no fanfare.
2016-05-14 08:26:36 -07:00
Martin Atkins 36054470e4 core: lifecycle for data resources
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
2016-05-14 08:26:36 -07:00
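To illustrate the two read paths described above, a sketch reusing the
null_data_source from elsewhere in this series (the null_resource
reference is a placeholder):

    # All arguments are literal, so this read can happen early, during
    # the refresh/plan phase.
    data "null_data_source" "early" {
      inputs = {
        fixed = "known at plan time"
      }
    }

    # This one interpolates a computed attribute of a managed resource,
    # so the read is planned first and only performed at apply time.
    data "null_data_source" "deferred" {
      inputs = {
        id = "${null_resource.example.id}"
      }
    }
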
Martin Atkins 1da560b653 core: Separate resource lifecycle for data vs. managed resources
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.

For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
2016-05-14 08:26:36 -07:00
Martin Atkins c1315b3f09 core: EvalValidate calls appropriate validator for resource mode
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
which mode of resource we're visiting.

Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
2016-05-14 08:26:36 -07:00
Martin Atkins 844e1abdd3 core: ResourceStateKey understands resource modes
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
2016-05-14 08:26:36 -07:00
Martin Atkins 64f2651204 website: Initial documentation about data sources
This will undoubtedly evolve as implementation continues, but this is some
initial documentation based on the design doc.
2016-05-14 08:26:36 -07:00
Martin Atkins 6cd22a4c9a helper/schema: emit warning when using data source resource shim
For backward compatibility we will continue to support using the data
sources that were formerly logical resources as resources for the moment,
but we want to warn the user about it since this support is likely to
be removed in future.

This is done by adding a new "deprecation message" feature to
schema.Resource, but for the moment this is done as an internal feature
(not usable directly by plugins) so that we can collect additional
use-cases and design a more general interface before creating a
compatibility constraint.
2016-05-14 08:26:36 -07:00
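Concretely, the deprecated usage that now triggers the warning is the old
resource form of a former logical resource; the remote state example
below is a sketch with placeholder backend settings:

    # Still accepted via the shim, but emits a deprecation warning
    # suggesting the equivalent "data" source form.
    resource "terraform_remote_state" "legacy" {
      backend = "s3"
      config {
        bucket = "example-terraform-state"
        key    = "network/terraform.tfstate"
        region = "us-east-1"
      }
    }
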
Martin Atkins 3eb4a89104 provider/terraform: remote state resource becomes a data source
As a first example of a real-world data source, the pre-existing
terraform_remote_state resource is adapted to be a data source. The
original resource is shimmed to wrap the data source for backward
compatibility.
2016-05-14 08:26:36 -07:00
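The new data source form described above would look roughly like this;
the backend settings are illustrative placeholders:

    data "terraform_remote_state" "network" {
      backend = "s3"
      config {
        bucket = "example-terraform-state"
        key    = "network/terraform.tfstate"
        region = "us-east-1"
      }
    }

    # Root outputs of the referenced state can then be interpolated
    # elsewhere in this configuration; the original resource form keeps
    # working through the backward-compatibility shim.
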
Martin Atkins fb262d0dbe helper/schema: shim for making data sources act like resources
Historically we've had some "read-only" and "logical" resources. With the
addition of the data source concept these will gradually become data
sources, but we need to retain backward compatibility with existing
configurations that use the now-deprecated resources.

This shim is intended to allow us to easily create a resource from a
data source implementation. It adjusts the schema as needed and adds
stub Create and Delete implementations.

This would ideally also produce a deprecation warning whenever such a
shimmed resource is used, but the schema system doesn't currently have
a mechanism for resource-specific validation, so that remains just a TODO
for the moment.
2016-05-14 08:26:36 -07:00
Martin Atkins 6a468dcd83 helper/schema: Resource can be writable or not
In the "schema" layer a Resource is just any "thing" that has a schema
and supports some or all of the CRUD operations. Data sources introduce
a new use of Resource to represent read-only resources, which require
some different InternalValidate logic.
2016-05-14 08:26:36 -07:00
Martin Atkins 0e0e3d73af core: New ResourceProvider methods for data resources
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.

DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.

The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.

Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.

At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
2016-05-14 08:26:36 -07:00
Martin Atkins 718cdda77b config: Parsing of data.TYPE.NAME.FIELD variables
This allows ${data.TYPE.NAME.FIELD} interpolation syntax at the
configuration level, though since there is no special handling of them
in the core package this currently just acts as an alias for
${TYPE.NAME.FIELD}.
2016-05-14 08:26:35 -07:00
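For example, the rand attribute of the null_data_source sketched earlier
can be referenced in the new form (a sketch; per the note above the two
forms are currently equivalent):

    output "example_rand" {
      # New ${data.TYPE.NAME.FIELD} syntax; for now just an alias for
      # ${TYPE.NAME.FIELD}.
      value = "${data.null_data_source.values.rand}"
    }
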
Martin Atkins 860140074f config: Data source loading
This allows the config loader to read "data" blocks from the config and
turn them into DataSource objects.

This just reads the data from the config file. It doesn't validate the
data or do anything else useful with it.
2016-05-14 08:26:35 -07:00
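The minimal shape of a "data" block that this loader change parses; the
type, name and argument are placeholders:

    data "example_type" "example_name" {
      # Arguments are read verbatim here; validation against the data
      # source's schema happens in later passes, not in the loader.
      some_argument = "value"
    }
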