This change allows the Habitat supervisor service name to be
configurable. Currently it is hard-coded to `hab-supervisor`.
Signed-off-by: Nolan Davidson <ndavidson@chef.io>
Since an early version of Terraform, the `destroy` command has always
had the `-force` flag to allow an auto approval of the interactive
prompt. 0.11 introduced `-auto-approve`, defaulting to `false`, for the
`apply` command.
The `-auto-approve` flag was introduced to reduce ambiguity about its
function, but the `-force` flag was never updated for `destroy`.
People often use wrappers when automating commands in Terraform, and the
inconsistency between `apply` and `destroy` means that additional logic
must be added to the wrappers to do similar functions. Both commands are
more or less able to run with similar syntax, and also heavily share
their code.
This commit updates the command in `destroy` to use the `-auto-approve` flag
making working with the Terraform CLI a more consistent experience.
We leave in `-force` in `destroy` for the time-being and flag it as
deprecated to ensure a safe switchover period.
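With this change the two commands can be driven the same way from wrappers, e.g.:
```
terraform apply -auto-approve
terraform destroy -auto-approve
```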
When writing an example for a submodule, the example should be placed in
`examples/{example name}` instead of
`modules/{module name}/examples/{example name}`.
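For instance, for a hypothetical submodule example named `cluster` in a hypothetical `consul-server` module:
```
examples/cluster/main.tf                        # correct location
modules/consul-server/examples/cluster/main.tf  # incorrect location
```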
We have outgrown the single flat list presentation of providers due to the sheer number now present, so we'll move here to a model where the providers are split into a number of categories that each contain a smaller list.
The full list is still included in the body of the main index page for quick access via search, but the categories make for a more accessible navbar for those who are just browsing.
Triton Manta allows an account other than the main triton account to be used via RBAC.
Here we expose the SDC_USER / TRITON_USER options to the backend so that a user can be specified.
Our prevailing writing style is to place punctuation outside of quotes, since in many contexts Terraform itself treats punctuation within quotes as significant and so it can be confusing to use punctuation in quotes in our prose.
* add category files
* try new source path
* cleaning up formatting
* fixing
* add all providers to providers index page
* add descriptions
* add link to form and first two providers
* small edits
* small edits
* small changes
* add community providers and description edit from marketing
* add some lines to improve design
* fix typos
First successful run with private origin and HAB_AUTH_TOKEN set
Update struct, schema, and decodeConfig names to more sensible versions
Cleaned up formatting
Update habitat provisioner docs
Remove unused unitstring
Users commonly ask how the S3 backend can be used in an organization that
splits its infrastructure across many AWS accounts.
We've traditionally shied away from making specific recommendations here
because we can't possibly anticipate the different standards and
regulations that constrain each user. This new section attempts to
describe one possible approach that works well with Terraform's workflow,
with the goal that users make adjustments to it taking into account their
unique needs.
Since we are intentionally not being prescriptive here -- instead
considering this just one of many approaches -- it deviates from our usual
active writing style in several places to avoid giving the impression that
these are instructions to be followed exactly, which in some cases
requires the use of passive voice even though that is contrary to our
documentation style guide. For similar reasons, this section is also
light on specific code examples, since we do not wish to encourage users
to just copy-paste the examples without thinking through the consequences.
* Verify discovery works without trailing slash on discovery URL
* Update registry API docs with browse and search endpoints
* Add sample request/responses
* Add comment to test to indicate expectations
* Fix typo
* Remove trailing slash weirdness
It's important to match the version of local Terraform with the remote Terraform version in Terraform Enterprise when using the "terraform push" command, or else the uploaded configuration package may not be compatible.
This is a genre of invalid output expression that we've seen quite
commonly while testing with 0.11.0-rc1, so we'll call it out specifically
in the upgrade guide and suggest how to fix it.
The differences between the implicit and explicit modes of passing
provider configurations between modules are significant enough to warrant
giving these approaches different names and describing them separately.
This also includes documentation of the current limitation discussed in
#16612, where explicit passing requires a proxy configuration block even
for a _default_ provider configuration, because that limitation is being
accepted for the 0.11.0 release to limit scope.
We are recommending that as of 0.11 all provider configurations be placed
in the root module and, where necessary, be explicitly passed down via
a providers map to customize which configurations are seen by each
child module.
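A minimal sketch of that recommended pattern, using hypothetical names and the 0.11 `providers` map syntax:
```
provider "aws" {
  alias  = "east"
  region = "us-east-1"
}

module "network" {
  source = "./network"

  providers = {
    "aws" = "aws.east"
  }
}
```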
This new section attempts to guide users through such refactoring in the
common case where a child module defines its own provider configuration
based on a value passed in an input variable, and then uses that as
some context to link to the more detailed docs to help those who have
more complex configurations.
The initial pass of this section had some remaining ambiguities, so this
is a second revision that attempts to use terminology more consistently
and to note some additional behaviors that were not described in the
initial version.
We've historically been somewhat inconsistent in how we refer to the
type of object defined by "variable" blocks in configuration. Parts of
our documentation refer to them as "input variables" or just "variables",
while our implementation refers to them as "user variables".
Since Terraform Registry is now also referring to these as "Inputs", here
we standardize on "Input Variable" as the fully-qualified name for this
concept, with "variable" being a shorthand for this where context is
obvious. Outside of this context, anything that can be referred to in
an interpolation expression is generically known as a "variable", with
Input Variables being just one kind, specified by the "var." prefix.
While this terminology shift is not critical yet, it will become more
important as we start to document the new version of the configuration
language so we can use the generic meaning of "variable" there.
The bulk of the text on this page hasn't been revised for some time and
so parts of it were using non-idiomatic terminology or not defining terms
at all.
The main goal of this revision is to standardize on the following terms:
- "provider configuration" refers to a specific provider block in config,
as a distinct idea from the provider _itself_, which is a singleton.
- "Default" vs. "additional" provider configurations, distinguishing
those without and with "alias" arguments respectively. These are named
here so that we can use this terminology to describe the different
behaviors of each for the purposes of provider inheritance between
modules.
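A sketch of the two kinds of configuration (region values hypothetical):
```
# Default provider configuration: no "alias" argument
provider "aws" {
  region = "us-west-2"
}

# Additional provider configuration: distinguished by "alias"
provider "aws" {
  alias  = "east"
  region = "us-east-1"
}
```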
The "terraform" provider was previously split out into its own repository,
but that turned out to be a mistake due to how tightly it depends on
aspects of Terraform Core.
Here we prepare to bring it back into the core repository by reorganizing
the directory layout to conform with what's expected there.
This allows the user to customize the location where Terraform stores
the files normally placed in the ".terraform" subdirectory, if e.g. the
current working directory is not writable.
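A usage sketch, assuming this surfaces as the TF_DATA_DIR environment variable (an assumption; the commit does not name the setting):
```
# keep Terraform's working files outside the read-only checkout
TF_DATA_DIR=/tmp/tf-data terraform init
```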
This is a significant rework of the Modules getting started guide to be
in terms of the Consul module available via the Terraform Registry. This
allows us to introduce the registry as part of the tutorial, and also
gives us some auto-generated documentation to link to as context for
the tutorial.
This new module is designed pretty differently than the one we formerly
used, and in particular it doesn't expose any server addresses for the
created Consul cluster -- due to using an auto-scaling group -- and thus
we're forced to use the (arguably-less-compelling) auto-scaling group name
for our output example.
Now includes more complete information on usage of private registries and
updates the module registry API documentation to include the new
version-enumeration endpoint along with describing Terraform's discovery
protocol.
The modules mechanism has changed quite a bit for version 0.11 and so
although simple usage remains broadly compatible there are some
significant changes in the behavior of more complex modules.
Since large parts of this were rewritten anyway, I also took the
opportunity to do some copy-editing to make the prose on this page more
consistent with our usual editorial voice and to wrap the long
lines.
In the 0.10 release we added an opt-in mode where Terraform would prompt
interactively for confirmation during apply. We made this opt-in to give
those who wrap Terraform in automation some time to update their scripts
to explicitly opt out of this behavior where appropriate.
Here we switch the default so that a "terraform apply" with no arguments
will -- if it computes a non-empty diff -- display the diff and wait for
the user to type "yes" in similar vein to the "terraform destroy" command.
This makes the commonly-used "terraform apply" a safe workflow for
interactive use, so "terraform plan" is now mainly for use in automation
where a separate planning step is used. The apply command remains
non-interactive when given an explicit plan file.
The previous behavior -- though not recommended -- can be obtained by
explicitly setting the -auto-approve option on the apply command line,
and indeed that is how all of the tests are updated here so that they can
continue to run non-interactively.
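For example, the non-interactive form the tests now use:
```
terraform apply -auto-approve
```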
This PR changes Manta from being a legacy remote state client to a new backend type. This also includes creating a simple lock within Manta.
This PR also unifies the way the Triton client is configured (the schema) and uses the same environment variables to set the backend up.
It is important to note that if the remote state path does not exist, the backend will create that path. This means the user doesn't fall into a chicken-and-egg situation of having to create the directory in advance before interacting with it.
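A minimal sketch of a Manta backend block; the argument names here are assumptions and may not match the final schema:
```
terraform {
  backend "manta" {
    path       = "tf-state"           # created automatically if missing
    objectName = "terraform.tfstate"  # assumed argument name
  }
}
```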
This resurrects the previously documented but unused "project" option.
This option is required to create buckets (so they are associated with the
right cloud project) but not to access the buckets later on (because their
names are globally unique).
* Remove the (unused) "project" option.
* Mark the "credentials" option as optional; document behavior when
unset.
* Mark the "path" option as deprecated (was: legacy) to match
Terraform's terminology.
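A sketch of the documented surface after these changes (values hypothetical):
```
terraform {
  backend "gcs" {
    bucket      = "my-tf-state"
    path        = "terraform.tfstate"  # deprecated, per the note above
    credentials = "account.json"       # optional; ambient credentials used when unset
  }
}
```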
Since we don't currently auto-install provisioner plugins this is
currently placed on the providers documentation page and referred to as
the "Provider Plugin Cache". In future this mechanism may also apply to
provisioners, in which case we'll figure out at that point where better
to place this information so it can be referenced from both the provider
and provisioner documentation pages.
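The setting described above is, to my understanding, `plugin_cache_dir` in the CLI configuration file (the path below is just an example):
```
# in ~/.terraformrc
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
```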
This mechanism for configuring plugins is now deprecated, since it's not
capable of declaring plugin versions. Instead, we recommend just placing
plugins into a particular directory, which is now documented on the
main providers documentation page and linked from the more detailed docs
on plugins in general.
Previously we described inline here where to put the .terraformrc file,
but now we have a separate page all about this file that gives us more
room to describe in more detail where the file is placed and what else it
can do.
This function takes a map of lists of strings and inverts it so that
the string values become keys and the keys become items within the
corresponding lists.
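This matches the `transpose` interpolation function, assuming that is the function being added; a sketch:
```
# {"a" = ["x", "y"], "b" = ["y"]}  =>  {"x" = ["a"], "y" = ["a", "b"]}
output "inverted" {
  value = "${transpose(var.input_map)}"
}
```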
In #15884 we adjusted the plan output to give an explicit command to run
to apply a plan, whereas before this command was just alluded to in the
prose.
Since releasing that, we've got good feedback that it's confusing to
include such instructions when Terraform is running in a workflow
automation tool, because such tools usually abstract away exactly what
commands are run and require users to take different actions to
proceed through the workflow.
To accommodate such environments while retaining helpful messages for
normal CLI usage, here we introduce a new environment variable
TF_IN_AUTOMATION which, when set to a non-empty value, is a hint to
Terraform that it isn't being run in an interactive command shell and
it should thus tone down the "next steps" messaging.
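A usage sketch for wrapper scripts:
```
# hint to Terraform that no human is watching the output
export TF_IN_AUTOMATION=1
terraform init -input=false
terraform plan -out=tfplan -input=false
```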
The documentation for this setting is included as part of the "...in
automation" guide since it's not generally useful in other cases. We also
intentionally disclaim comprehensive support for this since we want to
avoid creating an extreme number of "if running in automation..."
codepaths that would increase the testing matrix and hurt maintainability.
The focus is specifically on the output of the three commands we give in
the automation guide, which at present means the following two situations:
* "terraform init" does not include the final paragraphs that suggest
running "terraform plan" and tell you in what situations you might need
to re-run "terraform init".
* "terraform plan" does not include the final paragraphs that either
warn about not specifying "-out=..." or instruct to run
"terraform apply" with the generated plan file.
Previously we just assumed the reader was familiar with the idea of a
graph but didn't explain it.
Since graphs are an implementation detail of Terraform, rather than
essential information needed for new users, this revises the introduction
text to talk only about _dependencies_, which we assume the user is
familiar with as a more practical concept.
Additionally, Paul Hinze did a great talk on how Terraform uses graphs
at HashiConf 2016 which is good additional content for our existing
"Graph Internals" page, which includes a concise explanation of the
basics of graph theory.
In #15870 we got good feedback that it'd be more useful to have the
various filename-accepting arguments on this provisioner instead accept
strings that represent the contents of such files, so that they can be
generated from elsewhere in the Terraform config.
This change does not achieve that, but it does make room for doing this
later by renaming "minion_config" to "minion_config_file" so that we
can later add a "minion_config" option alongside that takes the file
content, and deprecate "minion_config_file".
Ideally we'd just implement the requested change immediately, but
unfortunately the release schedule doesn't have time for this so this is
a pragmatic change to allow us to make the full requested change at a
later date without backward incompatibilities.
This change is safe because the salt-masterless provisioner has not yet
been included in a release at the time of this commit.
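A sketch of the renamed argument in use; `local_state_tree` and the values are illustrative only:
```
provisioner "salt-masterless" {
  local_state_tree   = "/srv/salt"
  minion_config_file = "conf/minion"  # renamed from minion_config
}
```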
Previously the -upgrade option was covered only on the "terraform init" usage page. It seems also worth mentioning in the main docs on provider versioning, since we're already explaining here other mechanics of the versioning/constraints system.
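For example:
```
# re-check constraints and download any newer acceptable provider versions
terraform init -upgrade
```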
Terraform modules encapsulate their resources, and dependencies can only
be expressed through outputs, which wasn't clear to me in the existing
documentation. I'm hoping a small change will make that more explicit.
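A sketch of expressing a dependency through an output (all names hypothetical):
```
# inside the module
output "instance_id" {
  value = "${aws_instance.app.id}"
}

# in the calling configuration; depends on the module via its output
resource "aws_eip" "app" {
  instance = "${module.app.instance_id}"
}
```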
This escapes all characters that might have a special interpretation when embedded into a portion of a URL, including slashes, equals signs and ampersands.
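Assuming this is the `urlencode` interpolation function, a sketch:
```
# "foo/bar?x=y" => "foo%2Fbar%3Fx%3Dy"
output "encoded" {
  value = "${urlencode("foo/bar?x=y")}"
}
```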
Since Terraform's internals are not 8-bit clean (it assumes UTF-8
strings), we can't implement raw gzip directly. We're going to add
support where it makes sense for passing data to attributes as
base64 so that the result of this function can be used.
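The intended usage pattern would be along these lines (the attribute name is hypothetical):
```
# gzip then base64-encode, for attributes that accept base64 input
some_base64_attribute = "${base64gzip(file("large-config.json"))}"
```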
* update plugin/provider to make clear this section isn't needed for regular use
* add some links and notes about getting started
* remove the mention of binaries... I'm not sure it's needed yet
* 'Installing Terraform Providers' section
* sometimes I can't words good
* move the 'installing providers' block
* cleanup of terms
* copy that update to plugins/provider too
The backend has been renamed. Using the old name in the config will
trigger a deprecation warning, but the implementation and the
documentation is the same.
Added locking support via blob leasing (requires that an empty state is
created before any lock can be acquired).
Added support for "environments" in much the same way as the S3 backend.
Fix the -state and -state-out wording to be consistent with other
commands. Remove the erroneous reference to remote state in the website
version of the flag description.
The docs did not mention that it is possible to provide overrides for specific
plugins by placing them into a `terraform.d/plugins/os_arch/` directory inside
the working dir.
Closes #15727.
This restores the earlier behavior of the first positional argument to
terraform init in 0.9, but as a command line option.
The positional argument was removed to improve consistency with other
commands that take a working directory as their first positional argument.
It was originally intended that this functionality would return in a
later release along with some other general improvements to Terraform's
module handling, but we're introducing here an interim solution that
uses the existing module source concept, to allow for easier porting of
workflows that previously depended on the automatic copy behavior.
In a future release this feature may change again as the module
improvements design firms up, but we expect it to be broadly compatible
with this temporary state.
The "terraform init" command has a lot of different functionality now,
making it hard to follow all of the options in the previous presentation.
Instead, here we describe each of the steps and its associated options
separately, hopefully making it easier to understand what each option
relates to.
In addition, much of the detail around backend partial configuration is
factored out into the backend configuration page, where it seems more
"at home"; previously it felt hard to follow exactly how partial
configuration would be used, due to the information on it being split over
two different pages.
This is documented for all other Hashicorp products using this service but
was missed for Terraform. This serves as a disclosure of the fact that
Terraform reaches out to a Hashicorp service, an explanation of the
purpose of that request, and instructions on how to disable it in
environments where it is inappropriate or cannot be supported due to a
firewall or other connectivity restrictions.
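A sketch of the opt-out, assuming the standard `disable_checkpoint` CLI config setting used by other HashiCorp tools:
```
# in the CLI configuration file (.terraformrc)
disable_checkpoint = true
```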
Based on feedback from #15569 that the previous example was too abstract
and did not give enough context about what each of the different arguments
mean and how they generalize to other resource types.
The intent here is just to introduce some initial docs on our recommended
way to develop plugins in the same GOPATH as Terraform itself. The
documentation in this area needs some more fundamental rework as it is
rather outdated and mis-organized, but that's outside the scope of what
this change is trying to achieve.
This changed close to the release of beta1 to use underscores as the
separator and to use a lower-case "v" to avoid any issues on
case-insensitive filesystems.
A common reason to want to use `terraform plan` is to have a chance to
review and confirm a plan before running it. If in fact that is the
only reason you are running plan, this new `terraform apply -auto-approve=false`
flag provides an easier alternative to
```
P=$(mktemp -t plan)
terraform refresh
terraform plan -refresh=false -out=$P
terraform apply $P
rm $P
```
The flag defaults to true for now, but in a future version of Terraform it will
default to false.
```
Error loading Terraform: Error downloading modules: error downloading 'ssh://git@bitbucket.org/acme/foo.git?bar': /usr/bin/git exited with 128: Cloning into '.terraform/modules/yadayada'...
invalid command syntax.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
This guide covers assorted best practices and caveats for running
Terraform within orchestration tools and other automation. It provides
general examples and guidance, with the intent that this advice can be
adapted by the reader to a concrete implementation within a selected
orchestration tool.
This guide is based both on our in-house experience with Terraform
Enterprise and on in-house solutions we are aware of in certain
organizations.
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.
This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.
We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.
Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded to separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in mode where indices are
ignored, with the work then completed by the DynamicExpand method which
re-applies the TargetsTransformer in index-sensitive mode.
This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
We are replacing this terminology. The old command continues to work for
compatibility, but is deprecated. The docs should reflect the
currently-recommended form.
We are moving away from using the term "environment" to describe separate
named states for a single config, using "workspace" instead. The old
attribute name remains supported for backward compatibility, but is
marked as deprecated.
"environment" is a very overloaded term, so here we prefer to use the
term "working directory" to talk about a local directory where operations
are executed on a given Terraform configuration.
This form of "terraform init" is vestigial at this point and being phased
out in 0.10. Something similar may return in a later version for
installing modules from a more formal module library, but for now we are
advising to use git manually to simplify the UX for "terraform init".
Previously we encouraged users to import a resource and _then_ write the
configuration block for it. This ordering creates lots of risk, since
for various reasons users can end up subsequently running Terraform
without any configuration in place, which then causes Terraform to want
to destroy the resource that was imported.
Now we invert this and require a minimal configuration block be written
first. This helps ensure that the user ends up with a correlated resource
config and state, protecting against any inconsistency caused by typos.
This addresses #11835.
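A sketch of the now-recommended ordering (names and ID hypothetical):
```
# 1. Write a minimal configuration block first:
resource "aws_instance" "example" {
  # arguments to be filled in after inspecting the imported state
}

# 2. Then import the existing object into that address:
#    terraform import aws_instance.example i-0123456789abcdef0
```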
This will be fleshed out later as part of more holistic documentation for
the new provider plugin separation, but this is some minimal documentation
for just this subcommand.
* Data Source support for Resource Group
* Better message for mismatching locations.
* Reuse existing read code
* Adds documentation
* Adds test
* Adds a function for composing ID strings
* Change location to computed.
* Move to v2 client in vendor directory
* Move to v2 api and project IDs for environments
* add host label support to registration command
* Update go-rancher/catalog
* Allow go-rancher to handle URL versioning
This is a separate resource that serves a similar purpose to the
propagating_vgws argument on aws_route_table, but allows route
propagations to be created independently of the route table, which in
turn allows the VPN gateway to be created after the route table it will
contribute to, possibly in a separate Terraform module.
To make this work, propagating_vgws on aws_route_table is now marked
as Computed, meaning that it won't try to delete any existing propagation
edges if there is no setting for it in configuration at all. This allows
the user to choose whether to use the argument or the separate resource,
though using both together will not work, as explained in the docs.
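A sketch of the separate resource in use (names hypothetical):
```
resource "aws_vpn_gateway_route_propagation" "example" {
  vpn_gateway_id = "${aws_vpn_gateway.example.id}"
  route_table_id = "${aws_route_table.example.id}"
}
```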
* Update overview/API links for storage_bucket_objects, and acls for both buckets and objects.
* Minor formatting changes to google_storage_bucket and acl docs.
* Updated outdated custom ACL information and fixed grammar.
* Added support for public IP data source. Tested manually.
* WIP: Update to implementation, basic test added.
* WIP: Updates to implementation, basic test added.
* WIP: Added support for idle timeout
* Completed implementation and basic test
* Added documentation.
* Updated the example so it makes a little more sense.
* Add task_parameters support to aws_ssm_maintenance_window_task
task_parameters weren't supported yet. This adds support for them. It
also corrects a documentation typo in the maintenance_window resource.
* Respond to internal feedback
* New SSM Parameter resource
Can be used for creating parameters in AWS' SSM Parameter Store that can then be used by other applications that have access to AWS and the necessary IAM permissions (see the sketch after this list).
* Add docs for new SSM Parameter resource
* Code Review and Bug Hunt and KMS Key
- Addressed all issues in #14043
- Added ForceNew directive to type
- Added the ability to specify a KMS key for encryption and decryption
* Add SSM Parameter Data Source
* Fix bad merge
* Fix SSM Parameter Integration Tests
* docs/aws: Fix typo in SSM sidebar link
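The sketch promised above (values hypothetical):
```
resource "aws_ssm_parameter" "db_password" {
  name   = "prod-db-password"
  type   = "SecureString"
  value  = "${var.db_password}"
  key_id = "${aws_kms_key.params.key_id}"  # optional KMS key, per the bullets above
}
```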
* Make os_profile optional #11147
* Test for optional os_profile and fix resourceArmVirtualMachineRead
* Updating to match other optionally-required fields
Datadog does not explicitly document which graph types are available, but when you use the GUI to generate the graph and select the JSON tab to inspect said graph, you will see that the available timeboard graph type names are singular, not plural.
* provider/aws: Update Lightsail supported regions
This commit complements [#14621](https://github.com/hashicorp/terraform/pull/14621) and [#14685](https://github.com/hashicorp/terraform/pull/14685).
* Revert "provider/aws: Update Lightsail supported regions"
This reverts commit 545c3d6e6e7a9b665542ecc3b5e4d857faac749b.
* This commit complements #14621 and #14685.
* Link to AWS docs instead of listing regions
Instead of explicitly listing supported Lightsail regions in the docs,
we now link to the Lightsail docs.
* Updating the Sku field to be optional
* Making the Sku optional
* Ensuring we check for a 404 to mark a successful deletion
* Upping the size of the internal data disk
* Randomizing the Local Network Gateway tests
* Fixing a bug in Local Network Gateway's where the deletion wouldn't be detected
This fixes the missing `id` attribute on the documentation. The attribute exists if called via `"${aws_elastic_beanstalk_environment.myapp-environment.id}"`, but is just not documented.
Should not be cherry-picked to the `stable-website` branch. The next Terraform deploy will include the latest changes to the OPC provider, and this updated documentation for the next point release.
* vendor: Add gophercloud/routerinsertion package and update
gophercloud/firewall to support router insertion
* provider/openstack: Add support for associating
`openstack_fw_firewall_v1` resources with router(s).
Added `associated_routers` and `no_routers` arguments.
* website: Add documentation for `associated_routers` and `no_routers` arguments on `openstack_fw_firewall_v1` resource.
* provider/openstack: Add `AddValueSpecs` function and refactor existing
uses.
* fix gitlab naming
seems like some github stuff was not renamed
* gitlab is using group or user instead of organisations
* add namespace_id to gitlab_project documentation
* it's not possible to manage group members
* Fix doc bug. Spell `collation` like `lc_collate`.
* Whitespace nit in error message
* Use %q as the format verb for error messages in postgresql_database resource messages.
* REVOKE the `GRANT` given to the connection user when creating a database.
For `ROLE`s who have been delegated `CREATEDB` privileges and are not a
superuser, in order for them to `CREATE DATABASE` they need to be a member
of the `ROLE` who will be `OWNER` for the new database. Once the
`CREATE DATABASE` is complete, `REVOKE` the `GRANT` that was given to role
so that the user who ran the `CREATE DATABASE` loses all privileges to the
target database (unless of course they're a superuser).
Fixes a regression introduced in #11452
* Delegated DBA ROLEs can now fix OWNER drift for PostgreSQL databases.
Uses the helper functions introduced in #11452
* provider/aws: Add data source for aws_elasticache_cluster
Fixes: #11445
* provider/aws: Add acceptance tests for aws_elasticache_cluster data source
* provider/aws: Add documentation for the aws_elasticache_cluster data source (see the sketch below)
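The sketch just referenced (cluster name hypothetical):
```
data "aws_elasticache_cluster" "example" {
  cluster_id = "my-redis-cluster"
}
```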
Add dynamodb_table and a deprecation notice on lock_table. Add missing
parameters for the S3 backend: assume_role_policy, external_id,
and session_name.
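A sketch of a backend block using the new parameter (values hypothetical):
```
terraform {
  backend "s3" {
    bucket         = "my-tf-state"
    key            = "global/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"  # replaces the deprecated lock_table
  }
}
```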
* provider/openstack: Add support for provider networks
* revert vendor file changes
* vendor: Updating Gophercloud for OpenStack Provider
* create provider network if parameter has segments
* segments is not a computed resource
* extract to generate []provider.Segment
* change segmentation id type
* provider/gitlab: Fix documentation copypasta
The original provider and docs were copied from the github provider; one
bit of copy-paste slipped through unnoticed.
* provider/gitlab: Document `gitlab_project#id`
* provider/gitlab: Document `gitlab_project#namespace_id`
* provider/gitlab: Add fuller demonstration to the provider page
Following in the style of other provider pages, add a worked example
showing off all of the available resources offered by the gitlab
provider.
* provider/gitlab: Correct sample for gitlab_project
* The resource name should be consistent.
The HCL declares the terraform_remote_state with a resource name of foo, but the example invocation uses network, which is incorrect.
* Foo > Network so this is a proper example.
A while back `atlas_artifact` was switched from being a `resource` to a `data` provider. When you use the examples suggested in the Terraform Enterprise docs, the Terraform cli shows a deprecation warning and provides an old url to the new data provider docs.
There are some complementary doc updates in the Terraform Enterprise/Atlas repo.
* vendor: Updating Gophercloud for OpenStack Provider
* provider/openstack: Enable Security Group Updates
This commit enables security group names and descriptions to
be updated without causing a recreate.
* Update news section with April 4 webinar video
* Use YAML data file for news; add webinar registration CTA
* Update news section with Google Cloud webinar post-event info
* Exposing moid value from vm resource
The moid value is needed by NSX resources, such as security tags, when we attach security tags to VMs, so it is needed before we commit the NSX provider.
* fixing gofmt issue
* Updating docs regarding new exported moid attribute.
* vendor: Update go-gitlab to master@e6c11e
Update go-gitlab to master@e6c11e. This brings in UpdateGroup in
addition to fuller management of other attributes.
* provider/gitlab: Add `gitlab_group` resource
This adds a gitlab_group resource.
This combined with #14483 will allow you to create projects in a
group.
* Update sources.html.markdown
Modules not updating was really annoying; this documentation should be added to increase the usability of the feature.
* Update sources.html.markdown
* provider/gitlab: add `gitlab_deploy_key`
Here we extend the gitlab provider further by adding a `gitlab_deploy_key`
resource. This resource allows management of a project's deploy
keys.
* provider/gitlab: Do not test `gitlab_deploy_key` `can_push`
Here we remove the testing of the `can_push` attribute. This makes the
tests less comprehensive, but will allow them to work with the current
release of gitlab-ce.
This change is staged as a distinct commit so it can be easily
dropped/reverted once gitlab MR !11607 has reached a released state.
https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/11607
* provider/gitlab: Update docs for gitlab_deploy_key/can_push
Note that the can_push attribute of gitlab_deploy_key doesn't currently
work. This note can be removed once
https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/11607 is merged
and in general circulation.
System volumes on Scaleway can't easily be modified - instead one has to create
a new image with the desired system volume size. This is way out of scope of
Terraform - see https://community.online.net/t/expanding-lssd/907/2 for steps on
how to build a new image.
The `scaleway_server` `volume` attribute should only be used if you want to
attach additional volumes to a server which will share the lifetime of the
server, e.g. they will be destroyed once the server is shut down.
To have volumes which outlive the attached server, one should use
`scaleway_volume` and `scaleway_volume_attachment` instead.
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.
In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.
In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.
Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.
This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.
This leaves us with two different scenarios:
- For resource arguments, existing normalization code in helper/schema
does its own flattening that preserves compatibility with the common
practice of using bracketed splats. This change proves this with a test
within the "test" provider that exercises the whole Terraform core and
helper/schema stack that assigns bracketed splats to list and set
attributes.
- For arguments in other blocks, such as in module callsites, the
interpolator's own flattening behavior applies to known lists,
preserving compatibility with configurations from before
partially-computed splats were possible, but those wishing to use
partially-computed splats are required to drop the surrounding brackets.
This is less concerning because this scenario was introduced only in
0.9.5, so the scope for breakage is limited to those who adopted this
new feature quickly after upgrading.
As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
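A sketch of the two forms (resource names hypothetical):
```
# legacy bracketed form, still accepted for backward compatibility:
subnet_ids = ["${aws_subnet.example.*.id}"]

# recommended form, required for partially-unknown splats in non-resource blocks:
subnet_ids = "${aws_subnet.example.*.id}"
```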
This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
* Support importing google_sql_user
* Updated documentation to reflect that passwords are not retrieved.
* Added additional documentation detailing use.
* Removed unneeded d.setId() line from GoogleSqlUser Read method.
* Changed an errors.New() call to fmt.Errorf().
* Migrate schemas of existing GoogleSqlUser resources.
* Remove explicitly setting 'id' property
* Added google_sql_user to importability page.
* Changed separator to '/' from '.' and updated tests + debug messages.
* Add Network Alias configuration with network options
* Handle case where there's no network option
* Handle use case where network option is not available
* Handle use case where network option is not available
* Network alias only on user defined network
* Update documentation for docker provider on network aliases
* Remove unused variable
* Update documentation
* add unit test for docker container network
* fix unit test for docker container network
The tests did pass, but that was because they only tested part of the changes. By using the `schema.TestResourceDataRaw` function the schema and config are better tested and so they pointed out a problem with the schema of the Chef provisioner.
The `Elem` fields did not have a `*schema.Schema` but a `schema.Schema` and in an `Elem` schema only the `Type` field may (and must) be set. Any other fields like `Optional` are not allowed here.
Next to fixing that problem I also did a little refactoring and cleaning up. Mainly making the `ProvisionerS` private (`provisioner`) and removing the deprecated fields.
* Document source block for archive_file data source.
* Add example for archive_file source block.
* Capitalize Optional/Required for consistency with majority of provider docs.
* core/providersplit: Split OPC Provider to separate repo
As we march towards Terraform 0.10.0, we are going to start building the
terraform providers as separate binaries - this will allow us to
continually release them. Before we go to 0.10.0, we need to be able to
continue building providers in the same manner, therefore, we have
hardcoded the path of the provider in the generate-plugins.go file
The interim solution will require us to vendor the opc provider and any
child dependencies, but when we get to 0.10.0, we will no longer have to
do this - the core will auto download the plugin binary. The plugin
package will have its own dependencies vendored as well.
* core/providersplit: Removing the builtin version of OPC provider
* core/providersplit: Vendoring the OPC plugin
* core/providersplit: update internal plugin list
* core/providersplit: remove unused govendor item
Correctly sets the attribute `ip_address` in the `opc_compute_ip_address_reservation` resource.
Also updates documentation for the `ip_address_pool` attribute.
```
$ make testacc TEST=./builtin/providers/opc TESTARGS="-run=TestAccOPCIPAddressReservation_Basic"
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/16 10:15:53 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/opc -v -run=TestAccOPCIPAddressReservation_Basic -timeout 120m
=== RUN TestAccOPCIPAddressReservation_Basic
--- PASS: TestAccOPCIPAddressReservation_Basic (22.60s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/opc 22.604s
```
The author made this mistake at the beginning. With the original sample, you can't create an `aws_appautoscaling_policy` properly; it contains no threshold data. The issue is hard to troubleshoot because running the sample with `metric_interval_lower_bound = 0` produces no error.
The existing "tag" field on autoscaling groups is very limited in that it
cannot be used in conjunction with interpolation preventing from adding
dynamic tag entries.
Other AWS resources don't have this restriction on tags because they work
directly on the map type.
AWS autoscaling groups on the other hand have an additional field
"propagate_at_launch" which is not usable with a pure map type.
This fixes it by introducing an additional field called "tags" which
allows specifying a list of maps. This preserves the possibility of
declaring tags as with the "tag" field but additionally allows
constructing lists of maps using interpolation syntax.
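A sketch of the new field (values hypothetical):
```
resource "aws_autoscaling_group" "example" {
  # ... other arguments elided ...

  tags = [
    {
      key                 = "Name"
      value               = "${var.name_prefix}-asg"
      propagate_at_launch = true
    },
  ]
}
```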
* Adds ExpressRoute circuit documentation
* Adds tests and doc improvements
* Code for basic Express Route Circuit support
* Use the built-in validation helper
* Added ignoreCaseDiffSuppressFunc to a few fields
* Added more information to docs
* Touchup
* Moving SKU properties into a set.
* Updates doc
* A bit more tweaks
* Switch to Sprintf for test string
* Updating the acceptance test name for consistency
* Added new evaluation_delay field
Added new evaluation_delay parameter to pass it through the datadog monitor api
* Changed tests for new evaluation_delay field
* changed documentation
* added vmss with managed disk support
* Update vmss docs
* update vmss test
* added vmss managed disk import test
* update vmss tests
* remove unused test resources
* reverting breaking changes on storage_os_disk and storage_image_reference
* updated vmss tests and documentation
* updated vmss flatten osdisk
* updated vmss resource and import test
* update name in vmss osdisk
* update vmss test to include a blank name
* update vmss test to include a blank name
* Add resource
* Add tests
* Add documentation
* Fix invalid comment
* Remove MinItems
* Add newline
* Store expected ID and format
* Add import note
* expiration_time can be computed if dataset has an expiration_time set
* Handle 404 using new check function
Added support for provisioning a native Redis cluster ElastiCache replication group.
A new TypeSet attribute `cluster_mode` has been added. It requires the following
fields:
- `replicas_per_node_group` - The number of replica nodes in each node group
- `num_node_groups` - The number of node groups for this Redis replication group
Notes:
- `automatic_failover_enabled` must be set to true.
- `number_cache_clusters` is now an optional and computed field. If `cluster_mode` is set,
its value will be computed as:
```num_node_groups + num_node_groups * replicas_per_node_group```
Below is a sample config:
```
resource "aws_elasticache_replication_group" "bar" {
  replication_group_id          = "tf-redis-cluser"
  replication_group_description = "test description"
  node_type                     = "cache.t2.micro"
  port                          = 6379
  parameter_group_name          = "default.redis3.2.cluster.on"
  automatic_failover_enabled    = true

  cluster_mode {
    replicas_per_node_group = 1
    num_node_groups         = 2
  }
}
```
Add a data source for listing available versions for Container Engine
clusters or retrieving the latest available version.
This is mostly to support our tests for specifying a version for cluster
creation; the withVersion test has been updated to use the data source,
meaning it will stop failing on us as new versions get released.
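A sketch of the data source, assuming it is named `google_container_engine_versions` (zone hypothetical):
```
data "google_container_engine_versions" "central1a" {
  zone = "us-central1-a"
}
```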
Fixes: #14217
We now check to make sure that we have the correct number of parts when
we have split the resource ID. It isn't an elegant fix but it works as
expected. Also added some more documentation about what is required to
actually construct the Id needed for import
* provider/aws: Add support for aws_ssm_maintenance_window
Fixes: #14027
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSSMMaintenanceWindow_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/29 13:38:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSSMMaintenanceWindow_basic -timeout 120m
=== RUN TestAccAWSSSMMaintenanceWindow_basic
--- PASS: TestAccAWSSSMMaintenanceWindow_basic (51.69s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 51.711s
```
* provider/aws: Add documentation for aws_ssm_maintenance_window
* provider/aws: Add support for aws_ssm_maintenance_window_target
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSSSMMaintenanceWindowTarget'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/29 16:38:22 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSSSMMaintenanceWindowTarget -timeout 120m
=== RUN TestAccAWSSSMMaintenanceWindowTarget_basic
--- PASS: TestAccAWSSSMMaintenanceWindowTarget_basic (34.68s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 34.701s
```
* provider/aws: Adding the documentation for aws_ssm_maintenance_window_target
* provider/aws: Add support for aws_ssm_maintenance_window_task
* provider/aws: Documentation for aws_ssm_maintenance_window_task
This is a fix for PR #11040. The code here lowercases the name and/or prefix before sending it to the AWS API and the terraform state. This means the state will match the actual resource name and be able to converge the diff.
- config.clientCompute.Routers
- peer fields renamed
- more consistent logging
- better handling of SetId for error handling
- function for router locks
- test configs as functions
- simplify exists logic
- use getProject, getRegion logic on acceptance tests
- CheckDestroy for peers and interfaces
- dynamic router name for tunnel test
- extra fields for BgpPeer
- resource documentation
Here we add a new resource type `gitlab_project_hook`. It allows for
management of custom hooks for a gitlab project.
This is a relatively simple resource, as a project hook is a simple
association between a project and a URL to hit when one of the flagged
events occurs on that project.
Hooks (called Webhooks in some user documentation, but simply Hooks
in the api documentation) are covered here for users
https://docs.gitlab.com/ce/user/project/integrations/webhooks.html and
in the API documentation at
https://docs.gitlab.com/ce/api/projects.html#hooks
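A sketch of the new resource (values hypothetical):
```
resource "gitlab_project_hook" "example" {
  project     = "example/hooked"
  url         = "https://example.com/hook-receiver"
  push_events = true
}
```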
The current google_compute_url_map resource already supports backend
buckets out of the box: Just pass the self_link of the backend buckets
as you would pass the self_link of a backend service.
This adds some example code as well.
When trying to run the example code with the most recent terraform
master you get the following errors:
```
2 error(s) occurred:
* google_compute_backend_service.home: "region": [REMOVED] region has been removed as it was never used
* google_compute_backend_service.login: "region": [REMOVED] region has been removed as it was never used
```
Hence remove it from the documentation.
There's a pre-existing Terraform presence on both Server Fault and
Stack Overflow that seem to be generating good questions and answers, so
here we'll try to send users to the right sites to ask questions and
encourage them to be good participants in those communities.
* Document the data source interpolation usage
Also briefly mentions how to use counts and splat syntax as with resources to further document the usage of counts for data sources (see https://github.com/hashicorp/terraform/pull/14143).
* Output -> Attribute
As per feedback.
* added emr security configurations
* gofmt after rebase
* provider/aws: Update EMR Cluster to support Security Configuration
* update test to create key
* update docs
We've supported GOOGLE_APPLICATION_CREDENTIALS as an environment
variable (it comes free with our OAuth2 client) but it has never been
documented. Documenting it now to resolve #12626.
This commit adds an option to skip TLS verification of the Triton
endpoint, which can be useful for private or temporary installations not
using a certificate signed by a trusted root CA.
Fixes #13722.
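A sketch, assuming the option surfaces as `insecure_skip_tls_verify` on the provider block (the commit does not name the argument):
```
provider "triton" {
  account                  = "my-account"
  insecure_skip_tls_verify = true  # only for private/temporary installations
}
```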
With the FQDN specified, it throws error:
```
1 error(s) occurred:
* azurerm_container_service.test: "agent_pool_profile.0.fqdn": this field cannot be set
```
Update our docs for `google_compute_forwarding_rule` to clarify that the
`ip_address` field expects a literal IP address and will not accept the
`self_link` property of a `google_compute_address` resource.
Prompted by #13375
* Make dnsimple_records importable
terraform 0.7 supports importing a resource into the local state, and
this adds that feature to the dnsimple_record resource.
Unfortunately, the DNSimple v1 API requires a domain name and record ID
to fetch a record, so the import command accepts both pieces of data as
a slash-delimited string like so:
terraform import dnsimple_record.test example.com/1234
* add an acceptance test for importing a dnsimple_record
Fixes: #13173
We now tag at instance creation and introduce `volume_tags`, which can be
set so that all devices created at instance creation will receive those
tags.
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSInstance_volumeTags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/04/26 06:30:48 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstance_volumeTags -timeout 120m
=== RUN TestAccAWSInstance_volumeTags
--- PASS: TestAccAWSInstance_volumeTags (214.31s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 214.332s
```
As a follow up to #13844, this pull request sorts the AMIs and snapshots returned from the aws_ami_ids and aws_ebs_snapshot_ids data sources, respectively.
When Terraform is used to configure and deploy infrastructure
applications that require dozens of templated files, such as Kubernetes, it
becomes extremely burdensome to template them individually: each of them
requires a data source block as well as an upload/export (file
provisioner, AWS S3, ...).
Instead, this commit introduces a means to template an entire folder of
files (recursively), which can then be treated as a whole by any provider
or provisioner that supports directory inputs (such as the
file provisioner, the archive provider, ...).
This does not intend to make Terraform a full-fledged templating system,
as the templating grammar and capabilities are left unchanged. This only
aims at improving the user experience of the existing templating
provider by significantly reducing the overhead when several files are
to be generated - without forcing users to rely on external tools
when these templates stay simple and their generation in Terraform
is justified.
This is the minimal amount of work needed to be able to create a list of a subset of subnet IDs in a VPC, allowing people to loop through them easily when creating EC2 instances or to provide a list straight to an ELB.
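A sketch of such a data source, assuming it is `aws_subnet_ids` (values hypothetical):
```
data "aws_subnet_ids" "example" {
  vpc_id = "${var.vpc_id}"
}

resource "aws_elb" "example" {
  subnets = ["${data.aws_subnet_ids.example.ids}"]
  # ... listener configuration elided ...
}
```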