Merge pull request #1605 from hashicorp/terraform-remote-docs

website: update getting started with TF remote & how TF fits into the HashiCorp ecosystem
Mitchell Hashimoto 2015-04-22 08:04:02 +02:00
commit 54945f4c81
5 changed files with 111 additions and 6 deletions

Binary file not shown (new image, 143 KiB).


@@ -60,10 +60,7 @@ resources, Terraform will destroy in the proper order.
 ## Next

-You now know how to create, modify, and destroy infrastructure.
-With these building blocks, you can effectively experiment with
-any part of Terraform.
+You now know how to create, modify, and destroy infrastructure
+from a local machine.

-Next, we move on to features that make Terraform configurations
-slightly more useful: [variables, resource dependencies, provisioning,
-and more](/intro/getting-started/dependencies.html).
+Next, we learn how to [use Terraform remotely and the associated benefits](/intro/getting-started/remote.html).


@@ -0,0 +1,70 @@
---
layout: "intro"
page_title: "Terraform Remote"
sidebar_current: "gettingstarted-remote"
description: |-
We've now seen how to build, change, and destroy infrastructure from a local machine. However, you can use Atlas by HashiCorp to run Terraform remotely to version and audit the history of your infrastructure.
---
# Why Use Terraform Remotely?
We've now seen how to build, change, and destroy infrastructure
from a local machine. This is great for testing and development;
however, in production environments it is more responsible to run
Terraform remotely and store the master Terraform state remotely.
[Atlas](https://atlas.hashicorp.com/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform)
is HashiCorp's solution for Terraform remote runs and
infrastructure version control. Running Terraform
in Atlas allows teams to easily version, audit, and collaborate
on infrastructure changes. Each proposed change generates
a Terraform plan which can be reviewed and collaborated on as a team.
When a proposed change is accepted, the Terraform logs are stored
in Atlas, resulting in a linear history of infrastructure states to
help with auditing and policy enforcement. Additional benefits
of running Terraform remotely include moving access credentials
off of developer machines and freeing local machines from
long-running Terraform processes.
# How to Use Terraform Remotely
You can learn how to use Terraform remotely with our [interactive tutorial](https://atlas.hashicorp.com/tutorial/terraform/?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform),
or you can follow the steps outlined below.
First, configure [Terraform remote state storage](/docs/commands/remote.html)
with the command:
```
$ terraform remote config -backend-config="name=ATLAS_USERNAME/getting-started"
```
Replace `ATLAS_USERNAME` with your Atlas username. If you don't have one, you can
[create an account here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=terraform).
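Atlas also needs an access token to authenticate the remote state requests. As a rough sketch only (it assumes the token is read from an `ATLAS_TOKEN` environment variable; the token value shown is a placeholder), the full setup might look like:
```
# Sketch: supply an Atlas access token via ATLAS_TOKEN (assumed), then
# configure remote state storage; replace both placeholders with your values.
$ export ATLAS_TOKEN="YOUR_ATLAS_TOKEN"
$ terraform remote config -backend-config="name=ATLAS_USERNAME/getting-started"
```
Keeping the token in an environment variable rather than in your configuration files helps keep credentials out of version control.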
Next, [push](/docs/commands/push.html) your Terraform configuration to Atlas with:
```
$ terraform push -name="ATLAS_USERNAME/getting-started"
```
This will automatically trigger a `terraform plan`, which you can
review in the [Environments tab in Atlas](https://atlas.hashicorp.com/environments).
If the plan looks correct, hit "Confirm & Apply" to execute the
infrastructure changes.
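If your configuration declares input variables, their values also need to be available to the remote run. The sketch below is an assumption rather than part of this guide: the `-var` flag on `terraform push` and the `region` variable are illustrative only.
```
# Illustrative sketch: assumes `terraform push` accepts -var and that your
# configuration declares a variable named "region".
$ terraform push -name="ATLAS_USERNAME/getting-started" -var "region=us-east-1"
```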
# Version Control for Infrastructure
Running Terraform in Atlas creates a complete history of
infrastructure changes, a sort of version control
for infrastructure. Similar to application version control
systems such as Git or Subversion, this makes changes to
infrastructure an auditable, repeatable,
and collaborative process. With so much relying on the
stability of your infrastructure, version control is a
responsible choice for minimizing downtime.
## Next
You now know how to create, modify, destroy, version, and
collaborate on infrastructure. With these building blocks,
you can effectively experiment with any part of Terraform.
Next, we move on to features that make Terraform configurations
slightly more useful: [variables, resource dependencies, provisioning,
and more](/intro/getting-started/dependencies.html).


@@ -0,0 +1,30 @@
---
layout: "intro"
page_title: "Terraform and the HashiCorp Ecosystem"
sidebar_current: "hashicorp-ecosystem"
description: |-
Learn how Terraform fits in with the rest of the HashiCorp ecosystem of tools
---
# Terraform and the HashiCorp Ecosystem
HashiCorp is the creator of the open source projects Vagrant, Packer, Terraform, Serf, and Consul, and the commercial product Atlas. Terraform is just one piece of the ecosystem HashiCorp has built to make application delivery a versioned, auditable, repeatable, and collaborative process. To learn more about our beliefs on the qualities of the modern datacenter and responsible application delivery, read [The Atlas Mindset: Version Control for Infrastructure](https://hashicorp.com/blog/atlas-mindset.html/?utm_source=terraform&utm_campaign=HashicorpEcosystem).
If you are using Terraform to create, combine, and modify infrastructure, it's likely that you are using base images to configure that infrastructure. Packer is our tool for building those base images, such as AMIs, OpenStack images, Docker containers, and more.
Below are summaries of HashiCorp's open source projects and a graphic showing how Atlas connects them to create a full application delivery workflow.
# HashiCorp Ecosystem
![Atlas Workflow](docs/atlas-workflow.png)
[Atlas](https://atlas.hashicorp.com/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is HashiCorp's only commercial product. It unites Packer, Terraform, and Consul to make application delivery a versioned, auditable, repeatable, and collaborative process.
[Packer](https://packer.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating machine images and deployable artifacts such as AMIs, OpenStack images, Docker containers, etc.
[Terraform](https://terraform.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating, combining, and modifying infrastructure. In the Atlas workflow Terraform reads from the artifact registry and provisions infrastructure.
[Consul](https://consul.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for service discovery, service registry, and health checks. In the Atlas workflow Consul is configured at the Packer build stage and identifies the service(s) contained in each artifact. Since Consul is configured at the build stage with Packer, when the artifact is deployed with Terraform, it is fully configured with dependencies and service discovery pre-baked. This greatly reduces the risk of an unhealthy node in production due to configuration failure at runtime.
[Serf](https://serfdom.io/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for cluster membership and failure detection. Consul uses Serf's gossip protocol as the foundation for service discovery.
[Vagrant](https://www.vagrantup.com/?utm_source=terraform&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for managing development environments that mirror production. Vagrant environments reduce the friction of developing a project and reduce the risk of unexpected behavior appearing after deployment. Vagrant boxes can be built in parallel with production artifacts with Packer to maintain parity between development and production.


@@ -50,6 +50,10 @@
<a href="/intro/getting-started/destroy.html">Destroy Infrastructure</a>
</li>
<li<%= sidebar_current("gettingstarted-remote") %>>
<a href="/intro/getting-started/remote.html">Terraform Remote</a>
</li>
<li<%= sidebar_current("gettingstarted-deps") %>>
<a href="/intro/getting-started/dependencies.html">Resource Dependencies</a>
</li>
@@ -96,6 +100,10 @@
</li>
</ul>
</li>
<li<%= sidebar_current("hashicorp-ecosystem") %>>
<a href="/intro/hashicorp-ecosystem.html">Terraform and the HashiCorp Ecosystem</a>
</li>
</ul>
</div>
<% end %>