It was a mistake to switch fully to `==` when activating waiting for
capacity on updates in #3947. Users that didn't set `min_elb_capacity ==
desired_capacity` and instead treated it as an actual "minimum" would
see timeouts for every create, since their target numbers would never be
reached exactly.
Here, we fix that regression by restoring the minimum waiting behavior
during creates.
In order to preserve all the stated behavior, I had to split out
different criteria for create and update, criteria which are now
exhaustively unit tested.
The set of fields that affect capacity waiting behavior has become a bit
of a mess. Next major release I'd like to rework all of these into a
more consistently named block of config. For now, just getting the
behavior correct and documented.
(Also removes all the fixed names from the ASG tests as I was hitting
collision issues running them over here.)
Fixes #4792
This function returns -1 for negative numbers, 0 for zero, and 1 for positive numbers.
Useful when you need to set a value for the first resource and a different value for the rest of the resources.
Example: `${element(split(",", var.r53_failover_policy), signum(count.index))}`
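As a rough sketch of the pattern (resource and variable names here are purely illustrative, not from the original change), this lets the first of a counted set of resources take one value while every other instance takes another:
```
variable "roles" {
  default = "leader,follower"
}

resource "aws_instance" "node" {
  count         = 3
  ami           = "ami-12345678"   # placeholder AMI
  instance_type = "t2.micro"

  tags {
    # signum(count.index) is 0 for the first instance and 1 for the rest,
    # so index 0 gets "leader" and indexes 1..n get "follower".
    Role = "${element(split(",", var.roles), signum(count.index))}"
  }
}
```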
This commit adds support for declaring variable types in Terraform
configuration. Historically, the type has been inferred from the default
value, defaulting to string if no default was supplied. This has caused
users to devise workarounds if they wanted to declare a map but provide
values from a .tfvars file (for example).
The new syntax adds the "type" key to variable blocks:
```
variable "i_am_a_string" {
type = "string"
}
variable "i_am_a_map" {
type = "map"
}
```
This commit does _not_ extend the type system to include bools, integers
or floats - the only two types available are maps and strings.
Validation is performed if a default value is provided in order to
ensure that the default value type matches the declared type.
In the case that a type is not declared, the old logic is used for
determining the type. This allows backwards compatibility with previous
Terraform configuration.
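As a sketch of the workflow this enables (variable names and values are illustrative), a map can now be declared without a default and populated from a .tfvars file:
```
# variables.tf
variable "amis" {
  type = "map"
}

# terraform.tfvars
amis = {
  us-east-1 = "ami-12345678"
  us-west-2 = "ami-87654321"
}
```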
Adds the `TF_SKIP_REMOTE_TESTS` env var to be used in cases where the
`http.Get()` smoke test passes but the network is not able to service
the needs of the tests.
Fixes #4421
This means that terraform commands like `plan`, `apply`, `show`, and
`graph` will expand all modules by default.
While modules-as-black-boxes is still very true in the conceptual design
of modules, feedback on this behavior has consistently suggested that
users would prefer to see more verbose output by default.
The `-module-depth` flag and env var are retained to allow output to be
optionally limited / summarized by these commands.
In most cases private keys are used to produce certs and cert requests,
but there are some less-common cases where the PEM-formatted keypair is
used alone. The public_key_pem attribute supports such cases.
This also includes a public_key_openssh attribute, which allows this
resource to be used to generate temporary OpenSSH credentials, so that
e.g. a Terraform configuration could generate its own keypair to use
with the aws_key_pair resource. This has the same caveats as all cases
where we generate private keys in Terraform, but could be useful for
temporary/throwaway environments where the state either doesn't live for
long or is stored securely.
This builds on work started by Simarpreet Singh in #4441.
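A minimal sketch of the throwaway-keypair case described above (resource names and the key name are illustrative):
```
resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "aws_key_pair" "example" {
  key_name   = "throwaway-key"
  public_key = "${tls_private_key.example.public_key_openssh}"
}
```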
This adds acceptance tests for specifying extra hosts on Docker
containers. It also renames the repeating block from `hosts` to `host`,
which reads more naturally in the schema when multiple instances of the
block are declared.
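A minimal sketch of the renamed block (image, hostname, and address are illustrative):
```
resource "docker_image" "nginx" {
  name = "nginx:latest"
}

resource "docker_container" "web" {
  name  = "web"
  image = "${docker_image.nginx.latest}"

  # Adds an entry to the container's /etc/hosts
  host {
    host = "db.internal"
    ip   = "10.0.0.5"
  }
}
```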
This allows the profile for the AWS shared credentials provider to be
specified in Terraform configuration. This is useful when defining
providers with aliases, or if you don't want to set environment
variables. Example:
$ aws configure --profile this_is_dog
... enter keys
$ cat main.tf
provider "aws" {
profile = "this_is_dog"
# Optionally also specify the path to the credentials file
shared_credentials_file = "/tmp/credentials"
}
This is equivalent to specifying AWS_PROFILE or
AWS_SHARED_CREDENTIALS_FILE in the environment.
The comment on the first line of the code example is 82 characters long
and is cut off at the 80th character when viewed online, leaving the
second line with only the two letters "on" and no leading #. Since the
comment is displayed on two lines anyway, it is better to split it into
two lines of fewer than 80 characters each.
When spinning up from a snapshot or a read replica, these fields are
now optional:
* allocated_storage
* engine
* password
* username
Some validation logic is added to make these fields required when
starting a database from scratch.
The documentation is updated accordingly.
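A hedged sketch of restoring from a snapshot with the now-optional fields omitted (identifiers are illustrative):
```
resource "aws_db_instance" "restored" {
  identifier          = "mydb-restored"
  instance_class      = "db.t2.micro"
  snapshot_identifier = "mydb-final-snapshot"

  # allocated_storage, engine, username, and password are taken
  # from the snapshot and no longer need to be set here.
}
```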