A "Layer" is a particular service that forms part of the infrastructure for
a set of applications. Some layers are application servers and others are
pure infrastructure, like MySQL servers or load balancers.
Although the AWS API only has one type called "Layer", it actually has
a number of different "soft" types that each have slightly different
validation rules and extra properties that are packed into the Attributes
map.
To make the validation rule differences explicit in Terraform, and to make
the Terraform structure more closely resemble the OpsWorks UI than its
API, we use a separate resource type per layer type, with the common code
factored out into a shared struct type.
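As a sketch of the resulting user-facing shape (the two layer types and their type-specific arguments shown here are illustrative, not an exhaustive reference):

```
# Each layer type gets its own resource, so type-specific arguments can be
# validated directly rather than packed into a generic attributes map.
resource "aws_opsworks_java_app_layer" "app" {
  stack_id = "${aws_opsworks_stack.main.id}"
  jvm_type = "openjdk7" # type-specific argument
}

resource "aws_opsworks_mysql_layer" "db" {
  stack_id      = "${aws_opsworks_stack.main.id}"
  root_password = "${var.mysql_root_password}" # type-specific argument
}
```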
"Stack" is the root concept in OpsWorks, and acts as a container for a number
of different "layers" that each provide some service for an application.
A stack isn't very interesting on its own, but it must exist before any
layers can be created.
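For illustration, a minimal stack configuration might look something like this (argument values are placeholders):

```
resource "aws_opsworks_stack" "main" {
  name                         = "example-stack"
  region                       = "us-west-2"
  service_role_arn             = "${aws_iam_role.opsworks_service.arn}"
  default_instance_profile_arn = "${aws_iam_instance_profile.opsworks.arn}"
}
```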
Here we add an OpsWorks client instance to the central client bundle and
establish a new documentation section, both of which will be fleshed out in
subsequent commits that add some OpsWorks resources.
For those accustomed to running commands via a shell, it may not be clear
why this argument is a list or what its elements should be.
Hopefully giving an example will help people understand what is expected.
This is in response to the misunderstanding discovered in #3011.
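A hypothetical snippet in that spirit (the surrounding resource is omitted and the program name is made up; the point is that each list element is passed as a separate argument):

```
# No shell is involved: each element is one argument, so there is no
# word-splitting, quoting, or variable expansion.
command = ["/usr/local/bin/my-program", "--config", "/etc/my-program.conf"]
```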
There isn't any precedent for abbreviating words in the interpolation
function names, and it may not be clear to all users what "enc" and "dec"
are short for, so instead we'll prefer to spell out the whole words for
improved readability.
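For example, with the full words the intent is immediately clear:

```
output "encoded" {
  value = "${base64encode("Hello world")}"
}

output "decoded" {
  value = "${base64decode("SGVsbG8gd29ybGQ=")}"
}
```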
An earlier version of the provider implementation accepted
key_material_file instead of key_material. This was updated in the
resource-specific docs but not in this provider-wide example.
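A hypothetical sketch of the corrected form (the attribute name follows this change's description; file() is the usual interpolation for loading content from disk):

```
# key_material takes the key content itself rather than a file path.
key_material = "${file("path/to/key.pem")}"
```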
The existing 404 page didn't quite fit the style of the rest of the site.
Fix this by adding a layout value for use during rendering, and add some nicer markup.
* 'master' of github.com:hashicorp/terraform:
Update CHANGELOG.md
Changing the ElastiCache Cluster configuration_endpoint to be on the cluster, not on the cache nodes
Adding configuration endpoint to the elasticache cluster nodes
When launching a new RDS instance in a VPC-default AWS account, it is not apparent from the available parameters how to control which VPC the new RDS instance lands in.
The following works:
```
resource "aws_db_subnet_group" "foo" {
  name        = "foo"
  description = "DB Subnet for foo"
  subnet_ids  = ["${aws_subnet.foo_1a.id}", "${aws_subnet.foo_1b.id}"]
}

resource "aws_db_instance" "bar" {
  ...
  db_subnet_group_name = "${aws_db_subnet_group.foo.name}"
  ...
}
```
Hopefully this doc update will help others.
AWS provides three different ways to create AMIs that each have different
inputs, but once they are complete the same management operations apply.
Thus these three resources each have a different "Create" implementation
but then share the same "Read", "Update" and "Delete" implementations.
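For example, two of the creation paths look like this (assuming the aws_ami_copy and aws_ami_from_instance resource types; values are placeholders):

```
# Copy an existing AMI from another region.
resource "aws_ami_copy" "example" {
  name              = "example-copy"
  source_ami_id     = "ami-12345678"
  source_ami_region = "us-west-2"
}

# Capture an AMI from a running instance.
resource "aws_ami_from_instance" "example" {
  name               = "example-capture"
  source_instance_id = "${aws_instance.web.id}"
}
```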
The ElastiCache API accepts a mixed-case subnet name on create, but
normalizes it to lowercase before storing it. When retrieving a subnet,
the name is treated as case-sensitive, so the lowercase version must be
used.
Given that case within subnet names is not significant, the new StateFunc
on the name attribute causes the state to reflect the lowercase version
that the API uses, and changes in case alone will not show as a diff.
Given that we must look up subnet names in lower case, we set the
instance id to be a lowercase version of the user's provided name. This
then allows a later Refresh call to succeed even if the user provided
a mixed-case name.
Previously users could work around this by just avoiding putting uppercase
letters in the name, but that is often inconvenient if e.g. the name is
being constructed from variables defined elsewhere that may already have
uppercase letters present.
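To illustrate the user-facing effect (values here are placeholders):

```
# The API stores the name as "my-subnet-group"; the StateFunc records the
# lowercase form in state as well, so changing only the case of this value
# produces no diff.
resource "aws_elasticache_subnet_group" "example" {
  name        = "My-Subnet-Group"
  description = "Example subnet group"
  subnet_ids  = ["${aws_subnet.example.id}"]
}
```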
- Common instance metadata is now stored in state (see the sketch after this list)
- Optimistic locking support added to common_metadata
- Revisions to keys in project metadata are now reflected in the project state
- Wrote tests for project metadata (all pass)
- Relaxed test conditions to work on projects with extra keys
- Added documentation for project metadata
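For reference, a minimal sketch of the resource in question (assuming google_compute_project_metadata; keys and values are placeholders):

```
# Manages the project-wide (common instance) metadata as a single resource.
resource "google_compute_project_metadata" "default" {
  metadata {
    environment = "staging"
    owner       = "someone@example.com"
  }
}
```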
- Added a retry loop for attaching disks, as this was sometimes attempted too
  quickly while the VM was still booting
- Fix issue #3033
- Updated the docs for the latest changes and did some minor refactoring
  (styling)