Add HCL syntax highlighting for everything but providers

This commit is contained in:
Seth Vargo 2017-04-05 11:29:27 -04:00
parent 34c553a42b
commit 7110d15f19
No known key found for this signature in database
GPG Key ID: C921994F9C27E0FF
46 changed files with 422 additions and 452 deletions


@ -14,7 +14,7 @@ section. After configuring a backend, it has to be
Below, we show a complete example configuring the "consul" backend:
```
```hcl
terraform {
backend "consul" {
address = "demo.consul.io"
@ -78,7 +78,7 @@ basic encryption on disk so that values are at least not plaintext.
When using partial configuration, Terraform requires at a minimum that
an empty backend configuration is in the Terraform files. For example:
```
```hcl
terraform {
backend "consul" {}
}


@ -21,7 +21,7 @@ It will likely end in `/artifactory`.
## Example Configuration
```
```hcl
terraform {
backend "artifactory" {
username = "SheldonCooper"
@ -35,7 +35,7 @@ terraform {
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "artifactory"
config {


@ -19,7 +19,7 @@ and generate new token in the
## Example Configuration
```
```hcl
terraform {
backend "atlas" {
name = "bigbang/example"
@ -33,7 +33,7 @@ Note that for the access token we recommend using a
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "atlas"
config {


@ -30,7 +30,6 @@ Note that for the access credentials we recommend using a
## Example Referencing
```hcl
# setup remote state data source
data "terraform_remote_state" "foo" {
backend = "azure"
config {


@ -16,7 +16,7 @@ This backend supports [state locking](/docs/state/locking.html).
## Example Configuration
```
```hcl
terraform {
backend "consul" {
address = "demo.consul.io"
@ -30,7 +30,7 @@ Note that for the access credentials we recommend using a
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "consul"
config {


@ -14,7 +14,7 @@ Stores the state in [etcd](https://coreos.com/etcd/) at a given path.
## Example Configuration
```
```hcl
terraform {
backend "etcd" {
path = "path/to/terraform.tfstate"
@ -25,7 +25,7 @@ terraform {
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "etcd"
config {


@ -14,7 +14,7 @@ Stores the state as a given key in a given bucket on [Google Cloud Storage](http
## Example Configuration
```
```hcl
terraform {
backend "gcs" {
bucket = "tf-state-prod"
@ -27,7 +27,6 @@ terraform {
## Example Referencing
```hcl
# setup remote state data source
data "terraform_remote_state" "foo" {
backend = "gcs"
config {
@ -37,7 +36,6 @@ data "terraform_remote_state" "foo" {
}
}
# read value from data source
resource "template_file" "bar" {
template = "${greeting}"


@ -16,7 +16,7 @@ State will be fetched via GET, updated via POST, and purged with DELETE.
## Example Usage
```
```hcl
terraform {
backend "http" {
address = "http://myrest.api.com"
@ -26,7 +26,7 @@ terraform {
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "http"
config {


@ -15,7 +15,7 @@ state using system APIs, and performs operations locally.
## Example Configuration
```
```hcl
terraform {
backend "local" {
path = "relative/path/to/terraform.tfstate"
@ -25,7 +25,7 @@ terraform {
## Example Reference
```
```hcl
data "terraform_remote_state" "foo" {
backend = "local"


@ -14,7 +14,7 @@ Stores the state as an artifact in [Manta](https://www.joyent.com/manta).
## Example Configuration
```
```hcl
terraform {
backend "manta" {
path = "random/path"
@ -28,7 +28,7 @@ Note that for the access credentials we recommend using a
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "manta"
config {


@ -21,7 +21,7 @@ on the S3 bucket to allow for state recovery in the case of accidental deletions
## Example Configuration
```
```hcl
terraform {
backend "s3" {
bucket = "mybucket"
@ -43,7 +43,7 @@ To make use of the S3 remote state we can use the
[`terraform_remote_state` data
source](/docs/providers/terraform/d/remote_state.html).
```
```hcl
data "terraform_remote_state" "foo" {
backend = "s3"
config {


@ -14,7 +14,7 @@ Stores the state as an artifact in [Swift](http://docs.openstack.org/developer/s
## Example Configuration
```
```hcl
terraform {
backend "swift" {
path = "terraform-state"
@ -27,7 +27,7 @@ Note that for the access credentials we recommend using a
## Example Referencing
```
```hcl
data "terraform_remote_state" "foo" {
backend = "swift"
config {


@ -45,7 +45,7 @@ final command is outputted unless an error occurs earlier.
An example is shown below:
```
```shell
$ echo "1 + 5" | terraform console
6
```


@ -46,10 +46,9 @@ The output of `terraform graph` is in the DOT format, which can
easily be converted to an image by making use of `dot` provided
by GraphViz:
```
```shell
$ terraform graph | dot -Tpng > graph.png
```
Here is an example graph output:
![Graph Example](graph-example.png)


@ -89,7 +89,7 @@ As a working example, if you're importing AWS resources and you have a
configuration file with the contents below, then Terraform will configure
the AWS provider with this file.
```
```hcl
variable "access_key" {}
variable "secret_key" {}
@ -108,7 +108,7 @@ may not be valid.
This example will import an AWS instance:
```
```shell
$ terraform import aws_instance.foo i-abcd1234
```
@ -116,6 +116,6 @@ $ terraform import aws_instance.foo i-abcd1234
The example below will import an AWS instance into a module:
```
```shell
$ terraform import module.foo.aws_instance.bar i-abcd1234
```


@ -87,7 +87,7 @@ If the value contains an equal sign (`=`), it is parsed as a `key=value` pair.
The format of this flag is identical to the `-var` flag for plan, apply,
etc. but applies to configuration keys for backends. For example:
```
```shell
$ terraform init \
-backend-config 'address=demo.consul.io' \
-backend-config 'path=newpath'


@ -36,7 +36,7 @@ The command-line flags are all optional. The list of available flags are:
These examples assume the following Terraform output snippet.
```ruby
```hcl
output "lb_address" {
value = "${aws_alb.web.public_dns}"
}
@ -48,20 +48,20 @@ output "instance_ips" {
To list all outputs:
```text
```shell
$ terraform output
```
To query for the DNS address of the load balancer:
```text
```shell
$ terraform output lb_address
my-app-alb-1657023003.us-east-1.elb.amazonaws.com
```
To query for all instance IP addresses:
```text
```shell
$ terraform output instance_ips
test = [
54.43.114.12,
@ -74,6 +74,6 @@ To query for a particular value in a list, use `-json` and a JSON
command-line parser such as [jq](https://stedolan.github.io/jq/).
For example, to query for the first instance's IP address:
```text
```shell
$ terraform output -json instance_ips | jq '.value[0]'
```


@ -105,9 +105,8 @@ The variable values can be updated using the `-overwrite` flag or via
the [Atlas website](https://atlas.hashicorp.com). An example of updating
just a single variable `foo` is shown below:
```
```shell
$ terraform push -var 'foo=bar' -overwrite foo
...
```
Both the `-var` and `-overwrite` flag are required. The `-var` flag


@ -26,7 +26,7 @@ already.
Atlas configuration looks like the following:
```
```hcl
atlas {
name = "mitchellh/production-example"
}
@ -51,7 +51,7 @@ set defaults, then use the command-line flags of the
The full syntax is:
```
```text
atlas {
name = VALUE
}


@ -36,17 +36,19 @@ already.
A data source configuration looks like the following:
```
// Find the latest available AMI that is tagged with Component = web
```hcl
# Find the latest available AMI that is tagged with Component = web
data "aws_ami" "web" {
filter {
name = "state"
values = ["available"]
}
filter {
name = "tag:Component"
values = ["web"]
}
most_recent = true
}
```
@ -65,7 +67,7 @@ Each data instance will export one or more attributes, which can be
interpolated into other resources using variables of the form
`data.TYPE.NAME.ATTR`. For example:
```
```hcl
resource "aws_instance" "web" {
ami = "${data.aws_ami.web.id}"
instance_type = "t1.micro"
@ -78,13 +80,12 @@ Similarly to [resources](/docs/configuration/resources.html), the
`provider` meta-parameter can be used where a configuration has
multiple aliased instances of the same provider:
```
```hcl
data "aws_ami" "web" {
provider = "aws.west"
// etc...
# ...
}
```
See the "Multiple Provider Instances" documentation for resources


@ -12,13 +12,13 @@ description: |-
If set to any value, enables detailed logs to appear on stderr which is useful for debugging. For example:
```
```shell
export TF_LOG=TRACE
```
To disable, either unset it or set it to empty. When unset, logging will default to stderr. For example:
```
```shell
export TF_LOG=
```
@ -28,7 +28,7 @@ For more on debugging Terraform, check out the section on [Debugging](/docs/inte
This specifies where the log should persist its output to. Note that even when `TF_LOG_PATH` is set, `TF_LOG` must be set in order for any logging to be enabled. For example, to always write the log to the directory you're currently running terraform from:
```
```shell
export TF_LOG_PATH=./terraform.log
```
@ -38,7 +38,7 @@ For more on debugging Terraform, check out the section on [Debugging](/docs/inte
If set to "false" or "0", causes terraform commands to behave as if the `-input=false` flag was specified. This is used when you want to disable prompts for variables that haven't had their values specified. For example:
```
```shell
export TF_INPUT=0
```
@ -46,7 +46,7 @@ export TF_INPUT=0
When given a value, causes terraform commands to behave as if the `-module-depth=VALUE` flag was specified. By setting this to 0, for example, you enable commands such as [plan](/docs/commands/plan.html) and [graph](/docs/commands/graph.html) to display more compressed information.
```
```shell
export TF_MODULE_DEPTH=0
```
@ -56,7 +56,7 @@ For more information regarding modules, check out the section on [Using Modules]
Environment variables can be used to set variables. The environment variables must be in the format `TF_VAR_name` and this will be checked last for a value. For example:
```
```shell
export TF_VAR_region=us-west-1
export TF_VAR_ami=ami-049d8641
export TF_VAR_alist='[1,2,3]'
@ -95,7 +95,7 @@ requiring remote network connectivity. The unit tests make an attempt to
automatically detect when connectivity is unavailable and skip the relevant
tests, but by setting this variable you can force these tests to be skipped.
```
```shell
export TF_SKIP_REMOTE_TESTS=1
make test
```


@ -74,8 +74,6 @@ interpolate the current index in a multi-count resource. For more
information on `count`, see the [resource configuration
page](/docs/configuration/resources.html).
<a id="path-variables"></a>
#### Path information
The syntax is `path.TYPE`. TYPE can be `cwd`, `module`, or `root`.
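As a quick sketch (the resource, file name, and attribute used here are invented for illustration), `path.module` lets a configuration reference a file that lives alongside the module's own `.tf` files:

```hcl
resource "aws_instance" "web" {
  # Read a bootstrap script stored next to this module's configuration files
  user_data = "${file("${path.module}/files/init.sh")}"
}
```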
@ -90,12 +88,11 @@ The syntax is `terraform.FIELD`. This variable type contains metadata about
the currently executing Terraform run. FIELD can currently only be `env` to
reference the currently active [state environment](/docs/state/environments.html).
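For instance, mirroring the environments example later in these docs, the active environment can be folded into a name or tag (illustrative resource):

```hcl
resource "aws_instance" "example" {
  # Tag the instance with the currently active state environment
  tags {
    Name = "web - ${terraform.env}"
  }
}
```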
<a id="conditionals"></a>
## Conditionals
Interpolations may contain conditionals to branch on the final value.
```
```hcl
resource "aws_instance" "web" {
subnet = "${var.env == "production" ? var.prod_subnet : var.dev_subnet}"
}
@ -103,7 +100,9 @@ resource "aws_instance" "web" {
The conditional syntax is the well-known ternary operation:
```text
CONDITION ? TRUEVAL : FALSEVAL
```
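A hypothetical sketch with a function call in the condition (the variable and instance types are invented):

```hcl
resource "aws_instance" "web" {
  # Pick a larger instance type when more than two hostnames are configured
  instance_type = "${length(var.hostnames) > 2 ? "t2.small" : "t2.micro"}"
}
```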
The condition can be any valid interpolation syntax, such as variable
access, a function call, or even another conditional. The true and false
@ -119,7 +118,7 @@ The support operators are:
A common use case for conditionals is to enable/disable a resource by
conditionally setting the count:
```
```hcl
resource "aws_instance" "vpn" {
count = "${var.something ? 1 : 0}"
}
@ -129,7 +128,6 @@ In the example above, the "vpn" resource will only be included if
"var.something" evaluates to true. Otherwise, the VPN resource will
not be created at all.
<a id="functions"></a>
## Built-in Functions
Terraform ships with built-in functions. Functions are called with the
@ -347,7 +345,6 @@ The supported built-in functions are:
of the key used to encrypt their initial password, you might use:
`zipmap(aws_iam_user.users.*.name, aws_iam_user_login_profile.users.*.key_fingerprint)`.
<a id="templates"></a>
## Templates
Long strings can be managed using templates.
@ -358,7 +355,7 @@ computed `rendered` attribute containing the result.
A template data source looks like:
```
```hcl
data "template_file" "example" {
template = "$${hello} $${world}!"
vars {
@ -383,7 +380,7 @@ details on template usage, please see the
Here is an example that combines the capabilities of templates with the interpolation
from `count` to give us a parameterized template, unique to each resource instance:
```
```hcl
variable "count" {
default = 2
}
@ -396,41 +393,42 @@ variable "hostnames" {
}
data "template_file" "web_init" {
// here we expand multiple template_files - the same number as we have instances
# Expand multiple template files - the same number as we have instances
count = "${var.count}"
template = "${file("templates/web_init.tpl")}"
vars {
// that gives us access to use count.index to do the lookup
# that gives us access to use count.index to do the lookup
hostname = "${lookup(var.hostnames, count.index)}"
}
}
resource "aws_instance" "web" {
// ...
# ...
count = "${var.count}"
// here we link each web instance to the proper template_file
# Link each web instance to the proper template_file
user_data = "${element(data.template_file.web_init.*.rendered, count.index)}"
}
```
With this, we will build a list of `template_file.web_init` data sources which we can
use in combination with our list of `aws_instance.web` resources.
With this, we will build a list of `template_file.web_init` data sources which
we can use in combination with our list of `aws_instance.web` resources.
<a id="math"></a>
## Math
Simple math can be performed in interpolations:
```
```hcl
variable "count" {
default = 2
}
resource "aws_instance" "web" {
// ...
# ...
count = "${var.count}"
// tag the instance with a counter starting at 1, ie. web-001
# Tag the instance with a counter starting at 1, i.e. web-001
tags {
Name = "${format("web-%03d", count.index + 1)}"
}
@ -446,7 +444,7 @@ Operator precedences is the standard mathematical order of operations:
*Multiply* (`*`), *Divide* (`/`), and *Modulo* (`%`) have precedence over
*Add* (`+`) and *Subtract* (`-`). Parenthesis can be used to force ordering.
```
```text
"${2 * 4 + 3 * 3}" # computes to 17
"${3 * 3 + 2 * 4}" # computes to 17
"${2 * (4 + 3) * 3}" # computes to 42


@ -21,7 +21,7 @@ already.
Module configuration looks like the following:
```
```hcl
module "consul" {
source = "github.com/hashicorp/consul/terraform/aws"
servers = 5
@ -54,7 +54,7 @@ lists and maps.
The full syntax is:
```
```text
module NAME {
source = SOURCE_URL
@ -64,6 +64,6 @@ module NAME {
where `CONFIG` is:
```
```text
KEY = VALUE
```


@ -27,7 +27,7 @@ already.
A simple output configuration looks like the following:
```ruby
```hcl
output "address" {
value = "${aws_instance.db.public_dns}"
}
@ -38,7 +38,7 @@ DNS address of the Terraform-defined AWS instance named "db". It
is possible to export complex data types like maps and strings as
well:
```ruby
```hcl
output "addresses" {
value = ["${aws_instance.web.*.public_dns}"]
}
@ -54,22 +54,21 @@ the output variable.
Within the block (the `{ }`) is configuration for the output.
These are the parameters that can be set:
* `value` (required) - The value of the output. This can be a string, list,
or map. This usually includes an interpolation since outputs that are
static aren't usually useful.
- `value` (required) - The value of the output. This can be a string, list, or
map. This usually includes an interpolation since outputs that are static
aren't usually useful.
* `depends_on` (list of strings) - Explicit dependencies that this
output has. These dependencies will be created before this
output value is processed. The dependencies are in the format of
`TYPE.NAME`, for example `aws_instance.web`.
- `depends_on` (list of strings) - Explicit dependencies that this output has.
These dependencies will be created before this output value is processed. The
dependencies are in the format of `TYPE.NAME`, for example `aws_instance.web`.
* `sensitive` (optional, boolean) - See below.
- `sensitive` (optional, boolean) - See below.
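Taken together, an output using these parameters might look like the following sketch (the referenced resource matches the earlier "db" example; the explicit dependency is illustrative):

```hcl
output "address" {
  # Exported value, with an explicit dependency and no sensitivity flag
  value      = "${aws_instance.db.public_dns}"
  depends_on = ["aws_instance.db"]
  sensitive  = false
}
```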
## Syntax
The full syntax is:
```ruby
```text
output NAME {
value = VALUE
}
@ -80,7 +79,7 @@ output NAME {
Outputs can be marked as containing sensitive material by setting the
`sensitive` attribute to `true`, like this:
```ruby
```hcl
output "sensitive" {
sensitive = true
value = VALUE
@ -93,8 +92,9 @@ displayed in place of their value.
### Limitations of Sensitive Outputs
* The values of sensitive outputs are still stored in the Terraform
state, and available using the `terraform output` command, so cannot be
relied on as a sole means of protecting values.
* Sensitivity is not tracked internally, so if the output is interpolated in
- The values of sensitive outputs are still stored in the Terraform state, and
available using the `terraform output` command, so cannot be relied on as a
sole means of protecting values.
- Sensitivity is not tracked internally, so if the output is interpolated in
another module into a resource, the value will be displayed.


@ -35,7 +35,7 @@ Terraform configurations.
If you have a Terraform configuration `example.tf` with the contents:
```
```hcl
resource "aws_instance" "web" {
ami = "ami-408c7f28"
}
@ -43,7 +43,7 @@ resource "aws_instance" "web" {
And you created a file `override.tf` with the contents:
```
```hcl
resource "aws_instance" "web" {
ami = "foo"
}


@ -29,7 +29,7 @@ already.
A provider configuration looks like the following:
```
```hcl
provider "aws" {
access_key = "foo"
secret_key = "bar"
@ -45,13 +45,11 @@ Multiple provider blocks can be used to configure multiple providers.
Terraform matches providers to resources by matching two criteria.
Both criteria must be matched for a provider to manage a resource:
* They must share a common prefix. Longest matching prefixes are
tried first. For example, `aws_instance` would choose the
`aws` provider.
- They must share a common prefix. Longest matching prefixes are tried first.
For example, `aws_instance` would choose the `aws` provider.
* The provider must report that it supports the given resource
type. Providers internally tell Terraform the list of resources
they support.
- The provider must report that it supports the given resource type. Providers
internally tell Terraform the list of resources they support.
Within the block (the `{ }`) is configuration for the resource.
The configuration is dependent on the type, and is documented
@ -68,7 +66,7 @@ To define multiple provider instances, repeat the provider configuration
multiple times, but set the `alias` field and name the provider. For
example:
```
```hcl
# The default provider
provider "aws" {
# ...
@ -77,7 +75,6 @@ provider "aws" {
# West coast region
provider "aws" {
alias = "west"
region = "us-west-2"
}
```
@ -85,7 +82,7 @@ provider "aws" {
After naming a provider, you reference it in resources with the `provider`
field:
```
```hcl
resource "aws_instance" "foo" {
provider = "aws.west"
@ -101,7 +98,7 @@ is used (the provider configuration with no `alias` set). The value of the
The full syntax is:
```
```text
provider NAME {
CONFIG ...
[alias = ALIAS]
@ -110,7 +107,7 @@ provider NAME {
where `CONFIG` is:
```
```text
KEY = VALUE
KEY {


@ -23,7 +23,7 @@ already.
A resource configuration looks like the following:
```
```hcl
resource "aws_instance" "web" {
ami = "ami-408c7f28"
instance_type = "t1.micro"
@ -41,64 +41,56 @@ configuration is dependent on the type, and is documented for each
resource type in the
[providers section](/docs/providers/index.html).
<a id="meta-parameters"></a>
### Meta-parameters
There are **meta-parameters** available to all resources:
* `count` (int) - The number of identical resources to create.
This doesn't apply to all resources. For details on using variables in
conjunction with count, see [Using Variables with
`count`](#using-variables-with-count) below.
- `count` (int) - The number of identical resources to create. This doesn't
apply to all resources. For details on using variables in conjunction with
count, see [Using Variables with `count`](#using-variables-with-count) below.
~> **NOTE:** Modules don't currently support the `count` parameter.
-> Modules don't currently support the `count` parameter.
* `depends_on` (list of strings) - Explicit dependencies that this
resource has. These dependencies will be created before this
resource. For syntax and other details, see the section below on
[explicit dependencies](#explicit-dependencies).
- `depends_on` (list of strings) - Explicit dependencies that this resource has.
These dependencies will be created before this resource. For syntax and other
details, see the section below on [explicit
dependencies](#explicit-dependencies).
* `provider` (string) - The name of a specific provider to use for
this resource. The name is in the format of `TYPE.ALIAS`, for example,
`aws.west`. Where `west` is set using the `alias` attribute in a
provider. See [multiple provider instances](#multi-provider-instances).
- `provider` (string) - The name of a specific provider to use for this
resource. The name is in the format of `TYPE.ALIAS`, for example, `aws.west`.
Where `west` is set using the `alias` attribute in a provider. See [multiple
provider instances](#multi-provider-instances).
* `lifecycle` (configuration block) - Customizes the lifecycle
behavior of the resource. The specific options are documented
below.
- `lifecycle` (configuration block) - Customizes the lifecycle behavior of the
resource. The specific options are documented below.
The `lifecycle` block allows the following keys to be set:
* `create_before_destroy` (bool) - This flag is used to ensure
the replacement of a resource is created before the original
instance is destroyed. As an example, this can be used to
create a new DNS record before removing an old record.
- `create_before_destroy` (bool) - This flag is used to ensure the replacement
of a resource is created before the original instance is destroyed. As an
example, this can be used to create a new DNS record before removing an old
record.
* `prevent_destroy` (bool) - This flag provides extra protection against the
destruction of a given resource. When this is set to `true`, any plan
that includes a destroy of this resource will return an error message.
~> Resources that utilize the `create_before_destroy` key can only
depend on other resources that also include `create_before_destroy`.
Referencing a resource that does not include `create_before_destroy`
will result in a dependency graph cycle.
<a id="ignore-changes"></a>
- `prevent_destroy` (bool) - This flag provides extra protection against the
destruction of a given resource. When this is set to `true`, any plan that
includes a destroy of this resource will return an error message.
* `ignore_changes` (list of strings) - Customizes how diffs are evaluated for
resources, allowing individual attributes to be ignored through changes.
As an example, this can be used to ignore dynamic changes to the
resource from external resources. Other meta-parameters cannot be ignored.
- `ignore_changes` (list of strings) - Customizes how diffs are evaluated for
resources, allowing individual attributes to be ignored through changes. As
an example, this can be used to ignore dynamic changes to the resource from
external resources. Other meta-parameters cannot be ignored.
~> **NOTE on create\_before\_destroy and dependencies:** Resources that utilize
the `create_before_destroy` key can only depend on other resources that also
include `create_before_destroy`. Referencing a resource that does not include
`create_before_destroy` will result in a dependency graph cycle.
~> **NOTE on ignore\_changes:** Ignored attribute names can be matched by their
name, not state ID. For example, if an `aws_route_table` has two routes defined
and the `ignore_changes` list contains "route", both routes will be ignored.
~> Ignored attribute names can be matched by their name, not state ID.
For example, if an `aws_route_table` has two routes defined and the
`ignore_changes` list contains "route", both routes will be ignored.
Additionally you can also use a single entry with a wildcard (e.g. `"*"`)
which will match all attribute names. Using a partial string together with a
wildcard (e.g. `"rout*"`) is **not** supported.
<a id="timeouts"></a>
which will match all attribute names. Using a partial string together
with a wildcard (e.g. `"rout*"`) is **not** supported.
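As a sketch of how these keys are written inside a resource (the resource and the ignored attribute are only examples), a `lifecycle` block might look like:

```hcl
resource "aws_instance" "web" {
  # ...

  lifecycle {
    create_before_destroy = true
    ignore_changes        = ["tags"]
  }
}
```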
### Timeouts
@ -113,14 +105,15 @@ them in their configuration.
Example overwriting the `create` and `delete` timeouts:
```
```hcl
resource "aws_db_instance" "timeout_example" {
allocated_storage = 10
engine = "mysql"
engine_version = "5.6.17"
instance_class = "db.t1.micro"
name = "mydb"
[...]
# ...
timeouts {
create = "60m"
@ -134,8 +127,6 @@ attempting to configure the timeout for a Resource that does not support
Timeouts, or overwriting a specific action that the Resource does not specify as
an option, will result in an error. Valid units of time are `s`, `m`, `h`.
<a id="explicit-dependencies"></a>
### Explicit Dependencies
Terraform ensures that dependencies are successfully created before a
@ -158,8 +149,8 @@ be allowed to determine dependencies automatically.
The syntax of `depends_on` is a list of resources and modules:
* Resources are `TYPE.NAME`, such as `aws_instance.web`.
* Modules are `module.NAME`, such as `module.foo`.
- Resources are `TYPE.NAME`, such as `aws_instance.web`.
- Modules are `module.NAME`, such as `module.foo`.
When a resource depends on a module, _everything_ in that module must be
created before the resource is created.
@ -167,7 +158,7 @@ created before the resource is created.
An example of a resource depending on both a module and resource is shown
below. Note that `depends_on` can contain any number of dependencies:
```
```hcl
resource "aws_instance" "web" {
depends_on = ["aws_instance.leader", "module.vpc"]
}
@ -179,8 +170,6 @@ scenario by having your resources depend only on what they explicitly use.
Please think carefully before you use `depends_on` to determine if Terraform
could automatically do this a better way.
<a id="connection-block"></a>
### Connection block
Within a resource, you can optionally have a **connection block**.
@ -196,8 +185,6 @@ but other data must be specified by the user.
The full list of settings that can be specified are listed on
the [provisioner connection page](/docs/provisioners/connection.html).
<a id="provisioners"></a>
### Provisioners
Within a resource, you can specify zero or more **provisioner
@ -213,8 +200,6 @@ provide more specific connection info for a specific provisioner.
An example use case might be to use a different user to log in
for a single provisioner.
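A hedged sketch of that use case (user names and commands are invented): the resource-level `connection` is the default, and a single provisioner supplies its own `connection` to log in as a different user:

```hcl
resource "aws_instance" "web" {
  # ...

  # Default connection used by provisioners on this resource
  connection {
    type = "ssh"
    user = "root"
  }

  provisioner "remote-exec" {
    inline = ["adduser deploy"]
  }

  provisioner "remote-exec" {
    # This provisioner overrides the connection to use the new, limited user
    connection {
      type = "ssh"
      user = "deploy"
    }

    inline = ["whoami"]
  }
}
```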
<a id="using-variables-with-count"></a>
## Using Variables With `count`
When declaring multiple instances of a resource using [`count`](#count), it is
@ -228,7 +213,7 @@ For example, here's how you could create three [AWS
Instances](/docs/providers/aws/r/instance.html) each with their own
static IP address:
```
```hcl
variable "instance_ips" {
default = {
"0" = "10.11.12.100"
@ -244,8 +229,6 @@ resource "aws_instance" "app" {
}
```
<a id="multi-provider-instances"></a>
## Multiple Provider Instances
By default, a resource targets the provider based on its type. For example
@ -257,7 +240,7 @@ a provider that is configured multiple times to support multiple regions, etc.
To target another provider, set the `provider` field:
```
```hcl
resource "aws_instance" "foo" {
provider = "aws.west"
@ -269,7 +252,7 @@ The value of the field should be `TYPE` or `TYPE.ALIAS`. The `ALIAS` value
comes from the `alias` field value when configuring the
[provider](/docs/configuration/providers.html).
```
```hcl
provider "aws" {
alias = "west"
@ -283,7 +266,7 @@ If no `provider` field is specified, the default provider is used.
The full syntax is:
```
```text
resource TYPE NAME {
CONFIG ...
[count = COUNT]
@ -299,7 +282,7 @@ resource TYPE NAME {
where `CONFIG` is:
```
```text
KEY = VALUE
KEY {
@ -309,7 +292,7 @@ KEY {
where `LIFECYCLE` is:
```
```text
lifecycle {
[create_before_destroy = true|false]
[prevent_destroy = true|false]
@ -319,7 +302,7 @@ lifecycle {
where `CONNECTION` is:
```
```text
connection {
KEY = VALUE
...
@ -328,7 +311,7 @@ connection {
where `PROVISIONER` is:
```
```text
provisioner NAME {
CONFIG ...


@ -3,13 +3,15 @@ layout: "docs"
page_title: "Configuration Syntax"
sidebar_current: "docs-config-syntax"
description: |-
The syntax of Terraform configurations is custom. It is meant to strike a balance between human readable and editable as well as being machine-friendly. For machine-friendliness, Terraform can also read JSON configurations. For general Terraform configurations, however, we recommend using the Terraform syntax.
The syntax of Terraform configurations is custom. It is meant to strike a
balance between human readable and editable as well as being machine-friendly.
For machine-friendliness, Terraform can also read JSON configurations. For
general Terraform configurations, however, we recommend using the Terraform
syntax.
---
# Configuration Syntax
<a id="hcl"></a>
The syntax of Terraform configurations is called [HashiCorp Configuration
Language (HCL)](https://github.com/hashicorp/hcl). It is meant to strike a
balance between human readable and editable as well as being machine-friendly.
@ -21,7 +23,7 @@ syntax.
Here is an example of Terraform's HCL syntax:
```
```hcl
# An AMI
variable "ami" {
description = "the AMI to use"
@ -80,13 +82,15 @@ such as the "resource" and "variable" in the example above. These
sections are similar to maps, but visually look better. For example,
these are nearly equivalent:
```
```hcl
variable "ami" {
description = "the AMI to use"
}
```
# is equal to:
is equal to:
```hcl
variable = [{
"ami": {
"description": "the AMI to use",


@ -19,7 +19,7 @@ already.
Terraform configuration looks like the following:
```
```hcl
terraform {
required_version = "> 0.7.0"
}
@ -55,11 +55,14 @@ The value of this configuration is a comma-separated list of constraints.
A constraint is an operator followed by a version, such as `> 0.7.0`.
Constraints support the following operations:
* `=` (or no operator): exact version equality
* `!=`: version not equal
* `>`, `>=`, `<`, `<=`: version comparison, where "greater than" is
a larger version number.
* `~>`: pessimistic constraint operator. Example: for `~> 0.9`, this means
- `=` (or no operator): exact version equality
- `!=`: version not equal
- `>`, `>=`, `<`, `<=`: version comparison, where "greater than" is a larger
version number
- `~>`: pessimistic constraint operator. Example: for `~> 0.9`, this means
`>= 0.9, < 1.0`. Example: for `~> 0.8.4`, this means `>= 0.8.4, < 0.9`
For modules, a minimum version is recommended, such as `> 0.8.0`. This
@ -70,7 +73,7 @@ the consumer flexibility to use newer versions.
The full syntax is:
```
```text
terraform {
required_version = VALUE
}


@ -22,7 +22,7 @@ already.
A variable configuration looks like the following:
```
```hcl
variable "key" {
type = "string"
}
@ -54,23 +54,20 @@ throughout the Terraform configuration.
Within the block (the `{ }`) is configuration for the variable.
These are the parameters that can be set:
* `type` (optional) - If set this defines the type of the variable.
Valid values are `string`, `list`, and `map`. If this field is omitted, the
variable type will be inferred based on the `default`. If no `default` is
provided, the type is assumed to be `string`.
- `type` (optional) - If set this defines the type of the variable. Valid values
are `string`, `list`, and `map`. If this field is omitted, the variable type
will be inferred based on the `default`. If no `default` is provided, the type
is assumed to be `string`.
* `default` (optional) - This sets a default value for the variable.
If no default is provided, the variable is considered required and
Terraform will error if it is not set. The default value can be any of the
data types Terraform supports. This is covered in more detail below.
- `default` (optional) - This sets a default value for the variable. If no
default is provided, the variable is considered required and Terraform will
error if it is not set. The default value can be any of the data types
Terraform supports. This is covered in more detail below.
* `description` (optional) - A human-friendly description for
the variable. This is primarily for documentation for users
using your Terraform configuration. A future version of Terraform
will expose these descriptions as part of some Terraform CLI
command.
------
- `description` (optional) - A human-friendly description for the variable. This
is primarily for documentation for users using your Terraform configuration. A
future version of Terraform will expose these descriptions as part of some
Terraform CLI command.
-> **Note**: Default values can be strings, lists, or maps. If a default is
specified, it must match the declared type of the variable.
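Combining the three parameters, a variable declaration might look like this minimal sketch (the name and values are placeholders):

```hcl
variable "region" {
  type        = "string"
  default     = "us-east-1"
  description = "The region to deploy resources into"
}
```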
@ -80,7 +77,7 @@ specified, it must match the declared type of the variable.
String values are simple and represent a basic key to value
mapping where the key is the variable name. An example is:
```
```hcl
variable "key" {
type = "string"
default = "value"
@ -89,7 +86,7 @@ variable "key" {
A multi-line string value can be provided using heredoc syntax.
```
```hcl
variable "long_key" {
type = "string"
default = <<EOF
@ -106,7 +103,7 @@ for some values that change depending on some external pivot.
A common use case for this is mapping cloud images to regions.
An example:
```
```hcl
variable "images" {
type = "map"
default = {
@ -120,7 +117,7 @@ variable "images" {
A list can also be useful to store certain variables. For example:
```
```hcl
variable "users" {
type = "list"
default = ["admin", "ubuntu"]
@ -135,7 +132,7 @@ page.
The full syntax is:
```
```text
variable NAME {
[type = TYPE]
[default = DEFAULT]
@ -145,7 +142,7 @@ variable NAME {
where `DEFAULT` is:
```
```text
VALUE
[
@ -166,13 +163,13 @@ silently converted to string types. The implications of this are subtle and
should be completely understood if you plan on using boolean values.
It is instead recommended you avoid using boolean values for now and use
explicit strings. A future version of Terraform will properly support
booleans and using the current behavior could result in backwards-incompatibilities
in the future.
explicit strings. A future version of Terraform will properly support booleans
and using the current behavior could result in backwards-incompatibilities in
the future.
For a configuration such as the following:
```
```hcl
variable "active" {
default = false
}
@ -182,15 +179,14 @@ The false is converted to a string `"0"` when running Terraform.
Then, depending on where you specify overrides, the behavior can differ:
* Variables with boolean values in a `tfvars` file will likewise be
converted to "0" and "1" values.
- Variables with boolean values in a `tfvars` file will likewise be converted to
"0" and "1" values.
* Variables specified via the `-var` command line flag will be literal
strings "true" and "false", so care should be taken to explicitly use
"0" or "1".
- Variables specified via the `-var` command line flag will be literal strings
"true" and "false", so care should be taken to explicitly use "0" or "1".
* Variables specified with the `TF_VAR_` environment variables will
be literal string values, just like `-var`.
- Variables specified with the `TF_VAR_` environment variables will be literal
string values, just like `-var`.
A future version of Terraform will fully support first-class boolean
types which will make the behavior of booleans consistent as you would
@ -199,8 +195,8 @@ expect. This may break some of the above behavior.
When passing boolean-like variables as parameters to resource configurations
that expect boolean values, they are converted consistently:
* "1", "true", "t" all become `true`
* "0", "false", "f" all become `false`
- "1", "true", "t" all become `true`
- "0", "false", "f" all become `false`
The behavior of conversion above will likely not change in future
Terraform versions. Therefore, simply using string values rather than
@ -214,13 +210,13 @@ is the value of the variable.
For example, given the configuration below:
```
```hcl
variable "image" {}
```
The variable can be set via an environment variable:
```
```shell
$ TF_VAR_image=foo terraform apply
```
@ -229,7 +225,7 @@ Maps and lists can be specified using environment variables as well using
For a list variable like so:
```
```hcl
variable "somelist" {
type = "list"
}
@ -237,13 +233,13 @@ variable "somelist" {
The variable could be set like so:
```
```shell
$ TF_VAR_somelist='["ami-abc123", "ami-bcd234"]' terraform plan
```
Similarly, for a map declared like:
```
```hcl
variable "somemap" {
type = "map"
}
@ -251,14 +247,12 @@ variable "somemap" {
The value can be set like this:
```
```shell
$ TF_VAR_somemap='{foo = "bar", baz = "qux"}' terraform plan
```
## Variable Files
<a id="variable-files"></a>
Variables can be collected in files and passed all at once using the
`-var-file=foo.tfvars` flag.
@ -271,13 +265,15 @@ Variables files use HCL or JSON to define variable values. Strings, lists or
maps may be set in the same manner as the default value in a `variable` block
in Terraform configuration. For example:
```
```hcl
foo = "bar"
xyz = "abc"
somelist = [
"one",
"two",
]
somemap = {
foo = "bar"
bax = "qux"
@ -286,13 +282,13 @@ somemap = {
The `-var-file` flag can be used multiple times per command invocation:
```
terraform apply -var-file=foo.tfvars -var-file=bar.tfvars
```shell
$ terraform apply -var-file=foo.tfvars -var-file=bar.tfvars
```
-> **Note**: Variable files are evaluated in the order in which they are specified
on the command line. If a variable is defined in more than one variable file,
the last value specified is effective.
-> **Note**: Variable files are evaluated in the order in which they are
specified on the command line. If a variable is defined in more than one
variable file, the last value specified is effective.
### Variable Merging
@ -301,21 +297,21 @@ overridden. Map values are always merged.
For example, if you set a variable twice on the command line:
```
terraform apply -var foo=bar -var foo=baz
```shell
$ terraform apply -var foo=bar -var foo=baz
```
Then the value of `foo` will be `baz` since it was the last value seen.
However, for maps, the values are merged:
```
terraform apply -var 'foo={foo="bar"}' -var 'foo={bar="baz"}'
```shell
$ terraform apply -var 'foo={foo="bar"}' -var 'foo={bar="baz"}'
```
The resulting value of `foo` will be:
```
```shell
{
foo = "bar"
bar = "baz"
@ -332,20 +328,20 @@ Both these files have the variable `baz` defined:
_foo.tfvars_
```
```hcl
baz = "foo"
```
_bar.tfvars_
```
```hcl
baz = "bar"
```
When they are passed in the following order:
```
terraform apply -var-file=foo.tfvars -var-file=bar.tfvars
```shell
$ terraform apply -var-file=foo.tfvars -var-file=bar.tfvars
```
The result will be that `baz` will contain the value `bar` because `bar.tfvars`


@ -17,9 +17,8 @@ be able to do this.
Using `terraform import` is simple. An example is shown below:
```
```shell
$ terraform import aws_instance.bar i-abcd1234
...
```
The above command imports an AWS instance with the given ID to the


@ -16,8 +16,6 @@ To persist logged output you can set `TF_LOG_PATH` in order to force the log to
If you find a bug with Terraform, please include the detailed log by using a service such as gist.
<a id="interpreting-a-crash-log"></a>
## Interpreting a Crash Log
If Terraform ever crashes (a "panic" in the Go runtime), it saves a log file
@ -35,7 +33,7 @@ backtrace immediately following. So the first thing to do is to search the file
for `panic: `, which should jump you right to this message. It will look
something like this:
```
```text
panic: runtime error: invalid memory address or nil pointer dereference
goroutine 123 [running]:
@ -61,7 +59,7 @@ created by net/rpc.(*Server).ServeCodec
The key part of this message is the first two lines that involve `hashicorp/terraform`. In this example:
```
```text
github.com/hashicorp/terraform/builtin/providers/aws.resourceAwsSomeResourceCreate(...)
/opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/aws/resource_aws_some_resource.go:123 +0x123
```


@ -20,7 +20,7 @@ However, when you upgrade you will need to manually delete old plugins from disk
If you don't do this you will see an error message like the following:
```
```text
[WARN] /usr/local/bin/terraform-provisioner-file overrides an internal plugin for file-provisioner.
If you did not expect to see this message you will need to remove the old plugin.
See https://www.terraform.io/docs/internals/plugins.html


@ -18,7 +18,7 @@ Therefore, you can enter the source of any module, satisfy any required variable
Within a folder containing Terraform configurations, create a subfolder called `child`. In this subfolder, make one empty `main.tf` file. Then, back in the root folder containing the `child` folder, add this to one of your Terraform configuration files:
```
```hcl
module "child" {
source = "./child"
}
@ -37,7 +37,7 @@ Inputs of a module are [variables](/docs/configuration/variables.html) and outpu
Let's add a variable and an output to our `child` module.
```
```hcl
variable "memory" {}
output "received" {
@ -49,7 +49,7 @@ This will create a required variable, `memory`, and then an output, `received`,
You can then configure the module and use the output like so:
```
```hcl
module "child" {
source = "./child"
@ -69,7 +69,7 @@ It is sometimes useful to embed files within the module that aren't Terraform co
In these cases, you can't use a relative path, since paths in Terraform are generally relative to the working directory from which Terraform was executed. Instead, you want to use a module-relative path. To do this, you should use the [path interpolated variables](/docs/configuration/interpolation.html).
```
```hcl
resource "aws_instance" "server" {
# ...


@ -31,7 +31,7 @@ Each is documented further below.
The easiest source is the local file path. For maximum portability, this should be a relative file path into a subdirectory. This allows you to organize your Terraform configuration into modules within one repository, for example:
```
```hcl
module "consul" {
source = "./consul"
}
@ -43,7 +43,7 @@ Updates for file paths are automatic: when "downloading" the module using the [g
Terraform will automatically recognize GitHub URLs and turn them into a link to the specific Git repository. The syntax is simple:
```
```hcl
module "consul" {
source = "github.com/hashicorp/example"
}
@ -51,7 +51,7 @@ module "consul" {
Subdirectories within the repository can also be referenced:
```
```hcl
module "consul" {
source = "github.com/hashicorp/example//subdir"
}
@ -59,7 +59,7 @@ module "consul" {
These will fetch the modules using HTTPS. If you want to use SSH instead:
```
```hcl
module "consul" {
source = "git@github.com:hashicorp/example.git//subdir"
}
@ -77,7 +77,7 @@ If you need Terraform to be able to fetch modules from private GitHub repos on a
First, create a [machine user](https://developer.github.com/guides/managing-deploy-keys/#machine-users) on GitHub with read access to the private repo in question, then embed this user's credentials into the `source` parameter:
```
```hcl
module "private-infra" {
source = "git::https://MACHINE-USER:MACHINE-PASS@github.com/org/privatemodules//modules/foo"
}
@ -89,7 +89,7 @@ module "private-infra" {
Terraform will automatically recognize BitBucket URLs and turn them into a link to the specific Git or Mercurial repository, for example:
```
```hcl
module "consul" {
source = "bitbucket.org/hashicorp/consul"
}
@ -97,7 +97,7 @@ module "consul" {
Subdirectories within the repository can also be referenced:
```
```hcl
module "consul" {
source = "bitbucket.org/hashicorp/consul//subdir"
}
@ -111,7 +111,7 @@ BitBucket URLs will require that Git or Mercurial is installed on your system, d
Generic Git repositories are also supported. The value of `source` in this case should be a complete Git-compatible URL. Using generic Git repositories requires that Git is installed on your system.
```
```hcl
module "consul" {
source = "git://hashicorp.com/consul.git"
}
@ -119,7 +119,7 @@ module "consul" {
You can also use protocols such as HTTP or SSH to reference a module, but you'll have specify to Terraform that it is a Git module, by prefixing the URL with `git::` like so:
```
```hcl
module "consul" {
source = "git::https://hashicorp.com/consul.git"
}
@ -135,7 +135,7 @@ The URLs for Git repositories support the following query parameters:
* `ref` - The ref to checkout. This can be a branch, tag, commit, etc.
```
```hcl
module "consul" {
source = "git::https://hashicorp.com/consul.git?ref=master"
}
@ -145,7 +145,7 @@ module "consul" {
Generic Mercurial repositories are supported. The value of `source` in this case should be a complete Mercurial-compatible URL. Using generic Mercurial repositories requires that Mercurial is installed on your system. You must tell Terraform that your `source` is a Mercurial repository by prefixing it with `hg::`.
```
```hcl
module "consul" {
source = "hg::http://hashicorp.com/consul.hg"
}
@ -155,7 +155,7 @@ URLs for Mercurial repositories support the following query parameters:
* `rev` - The rev to checkout. This can be a branch, tag, commit, etc.
```
```hcl
module "consul" {
source = "hg::http://hashicorp.com/consul.hg?rev=default"
}
@ -172,8 +172,8 @@ Terraform then looks for the resulting module URL in the following order:
2. Terraform will look for a `<meta>` tag with the name of `terraform-get`, for example:
```
<meta name=“terraform-get” content="github.com/hashicorp/example" />
```html
<meta name="terraform-get" content="github.com/hashicorp/example" />
```
### S3 Bucket
@ -189,7 +189,7 @@ Here are a couple of examples.
Using the `s3` protocol.
```
```hcl
module "consul" {
source = "s3::https://s3-eu-west-1.amazonaws.com/consulbucket/consul.zip"
}
@ -197,7 +197,7 @@ module "consul" {
Or directly using the bucket's URL.
```
```hcl
module "consul" {
source = "consulbucket.s3-eu-west-1.amazonaws.com/consul.zip"
}
@ -215,4 +215,3 @@ archive formats:
* zip
* gz
* bz2


@ -9,7 +9,7 @@ description: Using modules in Terraform is very similar to defining resources.
Using modules in Terraform is very similar to defining resources:
```
```hcl
module "consul" {
source = "github.com/hashicorp/consul/terraform/aws"
servers = 3
@ -26,8 +26,9 @@ The existence of the above configuration will tell Terraform to create the resou
You can instantiate a module multiple times.
```
```hcl
# my_buckets.tf
module "assets_bucket" {
source = "./publish_bucket"
name = "assets"
@ -38,7 +39,8 @@ module "media_bucket" {
name = "media"
}
```
```
```hcl
# publish_bucket/bucket-and-cloudfront.tf
variable "name" {} # this is the input parameter of the module
@ -65,9 +67,8 @@ are documented in the [Module sources documentation](/docs/modules/sources.html)
Prior to running any Terraform command with a configuration that uses modules, you'll have to [get](/docs/commands/get.html) the modules. This is done using the [get command](/docs/commands/get.html).
```
```shell
$ terraform get
...
```
This command will download the modules if they haven't been already.
@ -85,7 +86,7 @@ Additionally, because these map directly to variables, module configuration can
Modules can also specify their own [outputs](/docs/configuration/outputs.html). These outputs can be referenced in other places in your configuration, for example:
```
```hcl
resource "aws_instance" "client" {
ami = "ami-408c7f28"
instance_type = "t1.micro"
@ -99,8 +100,8 @@ Just like resources, this will create a dependency from the `aws_instance.client
To use module outputs via command line you have to specify the module name before the variable, for example:
```
terraform output -module=consul server_availability_zone
```shell
$ terraform output -module=consul server_availability_zone
```
## Plans and Graphs
@ -109,15 +110,11 @@ Commands such as the [plan command](/docs/commands/plan.html) and [graph command
For example, with a configuration similar to what we've built above, here is what the graph output looks like by default:
<div class="center">
![Terraform Expanded Module Graph](docs/module_graph_expand.png)
</div>
If instead we set `-module-depth=0`, the graph will look like this:
<div class="center">
![Terraform Module Graph](docs/module_graph.png)
</div>
Other commands work similarly with modules. Note that the `-module-depth` flag is purely a formatting flag; it doesn't affect what modules are created or not.
@ -125,8 +122,8 @@ Other commands work similarly with modules. Note that the `-module-depth` flag i
The [taint command](/docs/commands/taint.html) can be used to _taint_ specific resources within a module:
```
terraform taint -module=salt_master aws_instance.salt_master
```shell
$ terraform taint -module=salt_master aws_instance.salt_master
```
It is currently not possible to taint an entire module.


@ -41,7 +41,7 @@ are defined is `~/.terraformrc` for Unix-like systems and
An example that configures a new provider is shown below:
```
```hcl
providers {
privatecloud = "/path/to/privatecloud"
}
@ -74,7 +74,7 @@ the road.
With the directory made, create a `main.go` file. This project will
be a binary so the package is "main":
```
```golang
package main
import (


@ -70,7 +70,7 @@ This structure implements the `ResourceProvider` interface. We
recommend creating this structure in a function to make testing easier
later. Example:
```
```golang
func Provider() *schema.Provider {
return &schema.Provider{
...
@ -100,7 +100,7 @@ As part of the unit tests, you should call `InternalValidate`. This is used
to verify the structure of the provider and all of the resources, and reports
an error if it is invalid. An example test is shown below:
```
```golang
func TestProvider(t *testing.T) {
if err := Provider().(*schema.Provider).InternalValidate(); err != nil {
t.Fatalf("err: %s", err)
@ -118,7 +118,7 @@ These resources are put into the `ResourcesMap` field of the provider
structure. Again, we recommend creating functions to instantiate these.
An example is shown below.
```
```golang
func resourceComputeAddress() *schema.Resource {
return &schema.Resource {
...
@ -211,7 +211,7 @@ subsequent `terraform apply` fixes this resource.
Most of the time, partial state is not required. When it is, it must be
specifically enabled. An example is shown below:
```
```golang
func resourceUpdate(d *schema.ResourceData, meta interface{}) error {
// Enable partial state mode
d.Partial(true)


@ -8,24 +8,25 @@ description: |-
# Chef Provisioner
The `chef` provisioner installs, configures and runs the Chef Client on a remote resource. The `chef` provisioner supports both `ssh`
and `winrm` type [connections](/docs/provisioners/connection.html).
The `chef` provisioner installs, configures and runs the Chef Client on a remote
resource. The `chef` provisioner supports both `ssh` and `winrm` type
[connections](/docs/provisioners/connection.html).
## Requirements
The `chef` provisioner has some prerequisites for specific connection types:
* For `ssh` type connections, `cURL` must be available on the remote host.
* For `winrm` connections, `PowerShell 2.0` must be available on the remote host.
- For `ssh` type connections, `cURL` must be available on the remote host.
- For `winrm` connections, `PowerShell 2.0` must be available on the remote host.
Without these prerequisites, your provisioning execution will fail.
## Example usage
```
# Start an initial chef run on a resource
```hcl
resource "aws_instance" "web" {
# ...
provisioner "chef" {
attributes_json = <<-EOF
{


@ -11,16 +11,17 @@ description: |-
Many provisioners require access to the remote resource. For example,
a provisioner may need to use SSH or WinRM to connect to the resource.
Terraform uses a number of defaults when connecting to a resource, but these
can be overridden using a `connection` block in either a `resource` or `provisioner`.
Any `connection` information provided in a `resource` will apply to all the
provisioners, but it can be scoped to a single provisioner as well. One use case
is to have an initial provisioner connect as the `root` user to set up user accounts, and have
subsequent provisioners connect as a user with more limited permissions.
Terraform uses a number of defaults when connecting to a resource, but these can
be overridden using a `connection` block in either a `resource` or
`provisioner`. Any `connection` information provided in a `resource` will apply
to all the provisioners, but it can be scoped to a single provisioner as well.
One use case is to have an initial provisioner connect as the `root` user to
set up user accounts, and have subsequent provisioners connect as a user with
more limited permissions.
## Example usage
```
```hcl
# Copies the file as the root user using SSH
provisioner "file" {
source = "conf/myapp.conf"


@ -14,7 +14,7 @@ supports both `ssh` and `winrm` type [connections](/docs/provisioners/connection
## Example usage
```
```hcl
resource "aws_instance" "web" {
# ...


@ -14,7 +14,7 @@ bootstrap a resource, cleanup before destroy, run configuration management, etc.
Provisioners are added directly to any resource:
```
```hcl
resource "aws_instance" "web" {
# ...
@ -82,7 +82,7 @@ file.
Example of multiple provisioners:
```
```hcl
resource "aws_instance" "web" {
# ...
@ -102,14 +102,14 @@ By default, provisioners that fail will also cause the Terraform apply
itself to error. The `on_failure` setting can be used to change this. The
allowed values are:
* `"continue"` - Ignore the error and continue with creation or destruction.
- `"continue"` - Ignore the error and continue with creation or destruction.
* `"fail"` - Error (the default behavior). If this is a creation provisioner,
- `"fail"` - Error (the default behavior). If this is a creation provisioner,
taint the resource.
Example:
```
```hcl
resource "aws_instance" "web" {
# ...


@ -8,21 +8,22 @@ description: |-
# local-exec Provisioner
The `local-exec` provisioner invokes a local executable after a resource
is created. This invokes a process on the machine running Terraform, not on
the resource. See the `remote-exec` [provisioner](/docs/provisioners/remote-exec.html)
to run commands on the resource.
The `local-exec` provisioner invokes a local executable after a resource is
created. This invokes a process on the machine running Terraform, not on the
resource. See the `remote-exec`
[provisioner](/docs/provisioners/remote-exec.html) to run commands on the
resource.
Note that even though the resource will be fully created when the provisioner is run,
there is no guarantee that it will be in an operable state - for example system services
such as `sshd` may not be started yet on compute resources.
Note that even though the resource will be fully created when the provisioner is
run, there is no guarantee that it will be in an operable state - for example
system services such as `sshd` may not be started yet on compute resources.
## Example usage
```
# Join the newly created machine to our Consul cluster
```hcl
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${aws_instance.web.private_ip} >> private_ips.txt"
}
@ -37,4 +38,3 @@ The following arguments are supported:
as a relative path to the current working directory or as an absolute path.
It is evaluated in a shell, and can use environment variables or Terraform
variables.


@ -22,12 +22,11 @@ graph.
## Example usage
```
# Bootstrap a cluster after all its instances are up
```hcl
resource "aws_instance" "cluster" {
count = 3
// ...
# ...
}
resource "null_resource" "cluster" {
@ -58,4 +57,3 @@ In addition to all the resource configuration available, `null_resource` support
* `triggers` - A mapping of values which should trigger a rerun of this set of
provisioners. Values are meant to be interpolated references to variables or
attributes of other resources.

View File

@ -17,10 +17,10 @@ provisioner supports both `ssh` and `winrm` type [connections](/docs/provisioner
## Example usage
```
# Run puppet and join our Consul cluster
```hcl
resource "aws_instance" "web" {
# ...
provisioner "remote-exec" {
inline = [
"puppet apply",
@ -53,7 +53,7 @@ upload the script with the
[file provisioner](/docs/provisioners/file.html)
and then use `inline` to call it. Example:
```
```hcl
resource "aws_instance" "web" {
# ...


@ -38,7 +38,7 @@ to switch environments you can use `terraform env select`, etc.
For example, creating an environment:
```
```text
$ terraform env new bar
Created and switched to environment "bar"!
@ -62,7 +62,7 @@ Referencing the current environment is useful for changing behavior based
on the environment. For example, for non-default environments, it may be useful
to spin up smaller cluster sizes. You can do this:
```
```hcl
resource "aws_instance" "example" {
count = "${terraform.env == "default" ? 5 : 1}"
@ -73,7 +73,7 @@ resource "aws_instance" "example" {
Another popular use case is using the environment as part of naming or
tagging behavior:
```
```hcl
resource "aws_instance" "example" {
tags { Name = "web - ${terraform.env}" }