* Importing v7.0.1 of the SDK
* Configuring the Container Service Client
* Scaffolding the Container Service resource
* Scaffolding the Website documentation
* Completing the documentation
* Acceptance Tests for Kubernetes Azure Container Service
* DCOS / Swarm tests
* Parsing values back from the API properly
* Fixing the test
* Service Principal can be optional. Because of course it can.
* Validation for the Container Service Counts
* Updating the docs
* Updating the field required values
* Making the documentation more explicit
* Fixing the build
* Examples for DCOS and Swarm
* Removing storage_uri for now
* Making the SSH Key required as per the docs
* Resolving the merge conflicts
* Removing the unused errors
* Adding hashes to the schemas
* Switching out the provider registration
* Fixing the hash definitions
* Updating keydata to match
* Client Secret is sensitive
* List -> Set
* Using the first item for the diagnostic_profile
* Helps if you actually update the type
* Updating the docs to include the Computed fields
* Fixing comments / removing redundant optional checks
* Removing the FQDN's from the examples
* Moving the Container resources together
* Add aws dms vendoring
* Add aws dms endpoint resource
* Add aws dms replication instance resource
* Add aws dms replication subnet group resource
* Add aws dms replication task resource
* Fix aws dms resource go vet errors
* Review fixes: Add id validators for all resources. Add validator for endpoint engine_name.
* Add aws dms resources to importability list
* Review fixes: Add aws dms iam role dependencies to test cases
* Review fixes: Adjustments for handling input values
* Add aws dms replication subnet group tagging
* Fix aws dms subnet group to use the standard error for resource not found
* Missed update of aws dms vendored version
* Add aws dms certificate resource
* Update aws dms resources to force new for immutable attributes
* Fix tests failing on subnet deletion by adding explicit dependencies. Combine import tests with basic tests to cut down runtime.
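A minimal sketch of how the new DMS resources fit together, assuming the final type names `aws_dms_replication_instance` and `aws_dms_endpoint` and attribute names that mirror the DMS API:
```
resource "aws_dms_replication_instance" "test" {
  replication_instance_id    = "tf-dms-instance"
  replication_instance_class = "dms.t2.micro"
  allocated_storage          = 20
}

resource "aws_dms_endpoint" "source" {
  endpoint_id   = "tf-dms-source"
  endpoint_type = "source"
  engine_name   = "mysql"
  server_name   = "db.example.com"
  port          = 3306
  username      = "dms"
  password      = "insecure-example-password"
}
```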
* Added Step Function Activity & Step Function State Machine
* Added SFN State Machine documentation
* Added aws_sfn_activity & documentation
* Allowed import of sfn resources
* Added more checks on tests, fixed documentation
* Handled the update case of a SFN function (might be already deleting)
* Removed the State Machine import test file
* Fixed the eventual consistency of the read after delete for SFN functions
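A minimal sketch combining the two new Step Functions resources (the `role_arn` and `definition` arguments are assumptions based on the Step Functions API):
```
resource "aws_sfn_activity" "check" {
  name = "check-activity"
}

resource "aws_sfn_state_machine" "workflow" {
  name     = "example-state-machine"
  role_arn = "${aws_iam_role.sfn.arn}"

  definition = <<EOF
{
  "StartAt": "Check",
  "States": {
    "Check": {
      "Type": "Task",
      "Resource": "${aws_sfn_activity.check.id}",
      "End": true
    }
  }
}
EOF
}
```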
Implementing vpc_peering_connection_accept.
Additions from @ewbankkit:
Rename 'aws_vpc_peering_connection_accept' to 'aws_vpc_peering_connection_accepter'.
Get it working reusing functionality from 'aws_vpc_peering_connection' resource.
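A hedged sketch of the renamed resource, assuming it takes the peering connection ID plus an `auto_accept` flag:
```
# Accept a peering request from the accepter's side (provider alias assumed).
resource "aws_vpc_peering_connection_accepter" "peer" {
  provider                  = "aws.accepter"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.main.id}"
  auto_accept               = true
}
```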
* Add a new data provider to decrypt AWS KMS secrets
* Address feedback
* Rename aws_kms_secrets to aws_kms_secret
* Add more examples to the documentation
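A hedged usage sketch of the renamed data source, assuming a `secret` block whose decrypted payload is exposed under the secret's name:
```
data "aws_kms_secret" "db" {
  secret {
    name    = "master_password"
    payload = "<base64 ciphertext from aws kms encrypt>"
  }
}

resource "aws_db_instance" "example" {
  # ...
  password = "${data.aws_kms_secret.db.master_password}"
}
```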
Add support for creating, updating, and deleting projects, as well as
their enabled services and their IAM policies.
Various concessions were made for backwards compatibility, and will be
removed in 0.9 or 0.10.
* vendor: update gopkg.in/ns1/ns1-go.v2
* provider/ns1: Port the ns1 provider to Terraform core
* docs/ns1: Document the ns1 provider
* ns1: rename remaining nsone -> ns1 (#10805)
* Ns1 provider (#11300)
* provider/ns1: Flesh out support for meta structs.
Following the structure outlined by @pashap.
Using reflection to reduce copy/paste.
Putting metas inside single-item lists. This is clunky, but I couldn't
figure out how else to have a nested struct. Maybe the Terraform people
know a better way?
Inside the meta struct, all fields are always written to the state; I
can't figure out how to omit fields that aren't used. This is not just
verbose, it actually causes issues because you can't have both "up" and
"up_feed" set).
Also some other minor changes:
- Add "terraform" import support to records and zones.
- Create helper class StringEnum.
* provider/ns1: Make fmt
* provider/ns1: Remove stubbed out RecordRead (used for testing metadata change).
* provider/ns1: Need to get interface that m contains from Ptr Value with Elem()
* provider/ns1: Use empty string to indicate no feed given.
* provider/ns1: Remove old record.regions fields.
* provider/ns1: Removes redundant testAccCheckRecordState
* provider/ns1: Moves account permissions logic to permissions.go
* provider/ns1: Adds tests for team resource.
* provider/ns1: Move remaining permissions logic to permissions.go
* ns1/provider: Adds datasource.config
* provider/ns1: Small clean up of datafeed resource tests
* provider/ns1: removes testAccCheckZoneState in favor of explicit name check
* provider/ns1: More renaming of nsone -> ns1
* provider/ns1: Comment out metadata for the moment.
* Ns1 provider (#11347)
* Fix the removal of empty containers from a flatmap
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration order. There was also an off-by-one error in
the prefix check which could match the incorrect keys.
* provider/ns1: Adds ns1 go client through govendor.
* provider/ns1: Removes unused debug line
* docs/ns1: Adds docs around apikey/datasource/datafeed/team/user/record.
* provider/ns1: Gets go vet green
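A small hedged example of the ported provider (the `apikey` argument and the zone/record attribute names are assumptions based on the NS1 API):
```
provider "ns1" {
  apikey = "..."
}

resource "ns1_zone" "example" {
  zone = "terraform-example.io"
}

resource "ns1_record" "www" {
  zone   = "${ns1_zone.example.zone}"
  domain = "www.terraform-example.io"
  type   = "CNAME"

  answers {
    answer = "terraform-example.io"
  }
}
```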
* provider/aws: New DataSource: aws_elb_hosted_zone_id
This datasource is a list of all of the ELB DualStack Hosted Zone IDs.
This will allow us to reference the correct hosted zone id when creating
route53 alias records
There are many bugs related to this - this is just the beginning of fixing them (a usage sketch follows the test output below)
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSElbHostedZoneId_basic'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/04 13:04:32 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSElbHostedZoneId_basic -timeout 120m
=== RUN TestAccAWSElbHostedZoneId_basic
--- PASS: TestAccAWSElbHostedZoneId_basic (20.46s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 20.484s
```
* Update elb_hosted_zone_id.html.markdown
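Usage sketch: the data source's `id` feeds the `zone_id` of a Route 53 alias record (the surrounding `aws_elb`/`aws_route53_zone` resources are assumed):
```
data "aws_elb_hosted_zone_id" "main" {}

resource "aws_route53_record" "www" {
  zone_id = "${aws_route53_zone.primary.zone_id}"
  name    = "www.example.com"
  type    = "A"

  alias {
    name                   = "${aws_elb.main.dns_name}"
    zone_id                = "${data.aws_elb_hosted_zone_id.main.id}"
    evaluate_target_health = true
  }
}
```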
* provider/aws: New Resource - aws_codedeploy_deployment_config
* provider/aws: Adding acceptance tests for new
aws_codedeploy_deployment_config resource
* provider/aws: Documentation for the aws_codedeploy_deployment_config resource
* Update codedeploy_deployment_config.html.markdown
* Importing the OpsGenie SDK
* Adding the goreq dependency
* Initial commit of the OpsGenie / User provider
* Refactoring to return a single client
* Adding an import test / fixing a copy/paste error
* Adding support for OpsGenie docs
* Scaffolding the user documentation for OpsGenie
* Adding a TODO
* Adding the User data source
* Documentation for OpsGenie
* Adding OpsGenie to the internal plugin list
* Adding support for Teams
* Documentation for OpsGenie Teams
* Validation for Teams
* Removing Description for now
* Optional fields for a User: Locale/Timezone
* Removing an implemented TODO
* Running makefmt
* Downloading about half the internet
Someone witty might simply sign this commit with "npm install"
* Adding validation to the user object
* Fixing the docs
* Adding a test creating multiple users
* Prompting for the API Key if it's not specified
* Added a test for multiple users / requested changes
* Fixing the linting
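A hedged sketch of the new provider and resources (the `member` block and exact field names are assumptions based on the commits above and the OpsGenie API):
```
provider "opsgenie" {
  api_key = "..."
}

resource "opsgenie_user" "test" {
  username  = "user@example.com"
  full_name = "Example User"
  role      = "User"
  locale    = "en_GB"
  timezone  = "Europe/London"
}

resource "opsgenie_team" "test" {
  name = "example-team"

  member {
    username = "${opsgenie_user.test.username}"
    role     = "admin"
  }
}
```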
* Adding a missing property to the Consumer Groups doc
* Support for Event Hub Authorization Rules
* Documentation for Authorization Rules
* Missed a comment
* Fixing the `no authorisation rule` state
* Making the documentation around the Permissions more explicit / updating the import url
* Fixing up the tests
* Clearing up the docs
* Fixing the linting
* Moving the validation inside of the expand
* Fixing the indentation
* Adding the Container Registry SDK
* Implementing the container registry
* Enabling the provider / registering the Resource Provider
* Acceptance Tests
* Documentation for Container Registry
* Fixing the name validation
* Validation for the Container Registry Name
* Added Import support for Container Registry
* Storage Account is no longer optional
* Updating the docs
* Forcing a re-run in Travis
* Add 'aws_vpc_peering_connection' data source.
* Changes after code review.
* Add 'accepter' and 'requester' blocks to aws_vpc_peering_connection data source output attributes.
* Documentation
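A hedged sketch of the data source feeding a cross-VPC route (the `peer_cidr_block` attribute name is an assumption):
```
data "aws_vpc_peering_connection" "pc" {
  vpc_id      = "${aws_vpc.requester.id}"
  peer_vpc_id = "${aws_vpc.accepter.id}"
}

resource "aws_route" "to_peer" {
  route_table_id            = "${aws_route_table.requester.id}"
  destination_cidr_block    = "${data.aws_vpc_peering_connection.pc.peer_cidr_block}"
  vpc_peering_connection_id = "${data.aws_vpc_peering_connection.pc.id}"
}
```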
* Scaffolding Consumer Groups
* Linting
* Adding a missing </li>
* User MetaData needs to explicitly be optional
* Typo
* Fixing the test syntax
* Removing the eventHubPath field since it appears to be deprecated
* Added a Complete import test
* WIP
* Updating to use SDK 7.0.1
* Calling the correct method
* Removing eventhubPath and updating the docs
* Fixing a typo
* Implementing Redis Cache
* Properties should never be nil
* Updating the SDK to 7.0.1
* Redis Cache updated for SDK 7.0.1
* Fixing the max memory validation tests
* Cleaning up
* Adding tests for Standard with Tags
* Ints -> Strings for the moment
* Making the RedisConfiguration object mandatory
* Only parse out redis configuration values if they're set
* Updating the RedisConfiguration object to be required in the documentation
* Adding Tags to the Standard tests / importing excluding the redisConfiguration
* Removing support for import for Redis Cache for now
* Removed a scaling test
* vendor: update github.com/Ensighten/udnssdk to v1.2.1
* ultradns_tcpool: add
* ultradns.baseurl: set default
* ultradns.record: cleanup test
* ultradns_record: extract common, cleanup
* ultradns: extract common
* ultradns_dirpool: add
* ultradns_dirpool: fix rdata.ip_info.ips to be idempotent
* ultradns_tcpool: add doc
* ultradns_dirpool: fix rdata.geo_codes.codes to be idempotent
* ultradns_dirpool: add doc
* ultradns: cleanup testing
* ultradns_record: rename resource
* ultradns: log username from config, not client
udnssdk.Client is being refactored to use x/oauth2, so don't assume we
can access Username from it
* ultradns_probe_ping: add
* ultradns_probe_http: add
* doc: add ultradns_probe_ping
* doc: add ultradns_probe_http
* ultradns_record: remove duplication from error messages
* doc: cleanup typos in ultradns
* ultradns_probe_ping: add test for pool-level probe
* Clean documentation
* ultradns: pull makeSetFromStrings() up to common.go
* ultradns_dirpool: log hashIPInfoIPs
Log the key and generated hashcode used to index ip_info.ips into a set.
* ultradns: simplify hashLimits()
Limits blocks only have the "name" attribute as their primary key, so
hashLimits() needn't use a buffer to concatenate.
Also changes log level to a more appropriate DEBUG.
* ultradns_tcpool: convert rdata to schema.Set
RData blocks have the "host" attribute as their primary key, so it is
used by hashRdatas() to create the hashcode.
Tests are updated to use the new hashcode indexes instead of natural
numbers.
* ultradns_probe_http: convert agents to schema.Set
Also pull the makeSetFromStrings() helper up to common.go
* ultradns: pull hashRdatas() up to common
* ultradns_dirpool: convert rdata to schema.Set
Fixes TF-66
* ultradns_dirpool.conflict_resolve: fix default from response
UltraDNS REST API User Guide claims that "Directional Pool
Profile Fields" have a "conflictResolve" field which "If not
specified, defaults to GEO."
https://portal.ultradns.com/static/docs/REST-API_User_Guide.pdf
But UltraDNS does not actually return a conflictResolve
attribute when it has been updated to "GEO".
We could fix it in udnssdk, but that would require either:
* hide the response by coercing "" to "GEO" for everyone
* use a pointer to allow checking for nil (requires all
users to change if they fix this)
An ideal solution would be to have the UltraDNS API respond
with this attribute for every dirpool's rdata.
So at the risk of foolish consistency in the sdk, we're
going to solve it where it's visible to the user:
by checking and overriding the parsing. I'm sorry.
* ultradns_record: convert rdata to set
UltraDNS does not store the ordering of rdata elements, so we need a way
to identify if changes have been made even if the order changes.
A perfect job for schema.Set.
* ultradns_record: parse double-encoded answers for TXT records
* ultradns: simplify hashLimits()
Limits blocks only have the "name" attribute as their primary key, so
hashLimits() needn't use a buffer to concatenate.
* ultradns_dirpool.description: validate
* ultradns_dirpool.rdata: doc need for set
* ultradns_dirpool.conflict_resolve: validate
* provider/aws: data source for AWS Hosted Zone
* add caller_reference, resource_record_set_count fields, manage private zone and trailing dot
* fix fmt
* update documentation, use string function in hostedZoneName
* add vpc_id support
* add tags support
* add documentation for hosted zone data source tags support
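A minimal sketch tying the new arguments together (argument names follow the commits above; `zone_id` as the exported identifier is an assumption):
```
data "aws_route53_zone" "selected" {
  name         = "example.com."
  private_zone = true
  vpc_id       = "${aws_vpc.main.id}"

  tags {
    Environment = "dev"
  }
}

output "zone_id" {
  value = "${data.aws_route53_zone.selected.zone_id}"
}
```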
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.
* website: documentation for the "external" provider
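A minimal sketch of the data source: the program receives the `query` map as a JSON object on stdin and must print a flat JSON object of strings on stdout, which is exposed as `result` (the script path and `some_key` are hypothetical):
```
data "external" "example" {
  program = ["python", "${path.module}/example-data-source.py"]

  query = {
    id = "abc123"
  }
}

output "example_value" {
  value = "${data.external.example.result["some_key"]}"
}
```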
* Add new aws_vpc_endpoint_route_table_association resource.
This commit adds a new resource which allows a list of route tables to be
added to and/or removed from an existing VPC Endpoint. This resource is also
complementary to the existing `aws_vpc_endpoint` resource for cases where the
route tables are not specified (not a requirement for a VPC Endpoint to
be created successfully) during creation, especially where the workflow is
such that the route tables are not immediately known.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Additions by Kit Ewbank <Kit_Ewbank@hotmail.com>:
* Add functionality
* Add documentation
* Add acceptance tests
* Set VPC endpoint route_table_ids attribute to "Computed"
* Changes after review - Set resource ID in create function.
* Changes after code review by @kwilczynski:
* Removed error types and simplified the error handling in 'resourceAwsVPCEndpointRouteTableAssociationRead'
* Simplified logging in 'resourceAwsVPCEndpointRouteTableAssociationDelete'
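A minimal usage sketch (the referenced endpoint and route table resources are assumed to exist elsewhere in the config):
```
resource "aws_vpc_endpoint_route_table_association" "private_s3" {
  vpc_endpoint_id = "${aws_vpc_endpoint.s3.id}"
  route_table_id  = "${aws_route_table.private.id}"
}
```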
* provider/aws: Add ability to create aws_ebs_snapshot
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEBSSnapshot_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/10 14:18:36 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSEBSSnapshot_
-timeout 120m
=== RUN TestAccAWSEBSSnapshot_basic
--- PASS: TestAccAWSEBSSnapshot_basic (31.56s)
=== RUN TestAccAWSEBSSnapshot_withDescription
--- PASS: TestAccAWSEBSSnapshot_withDescription (189.35s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 220.928s
```
* docs/aws: Addition of the docs for aws_ebs_snapshot resource
* provider/aws: Creation of shared schema funcs for common AWS data source
patterns
* provider/aws: Create aws_ebs_snapshot datasource
Fixes #8828
This data source will use a number of filters, owner_ids, snapshot_ids
and restorable_by_user_ids in order to find the correct snapshot. The
data source has no real use case for most_recent and will error when no
snapshots are found or when more than one snapshot is found
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEbsSnapshotDataSource_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/10 14:34:33 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSEbsSnapshotDataSource_ -timeout 120m
=== RUN TestAccAWSEbsSnapshotDataSource_basic
--- PASS: TestAccAWSEbsSnapshotDataSource_basic (192.66s)
=== RUN TestAccAWSEbsSnapshotDataSource_multipleFilters
--- PASS: TestAccAWSEbsSnapshotDataSource_multipleFilters (33.84s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 226.522s
```
* docs/aws: Addition of docs for the aws_ebs_snapshot data source
Adds the new resource `aws_ebs_snapshot`
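A combined sketch of the new resource and data source (the filter name is illustrative; `owner_ids` comes from the description above):
```
resource "aws_ebs_snapshot" "example" {
  volume_id   = "${aws_ebs_volume.example.id}"
  description = "Snapshot managed by Terraform"
}

data "aws_ebs_snapshot" "by_filter" {
  owner_ids = ["self"]

  filter {
    name   = "volume-size"
    values = ["40"]
  }
}
```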
* provider/aws: Add aws_alb data source
This adds the aws_alb data source for getting information on an AWS
Application Load Balancer.
The schema is nearly the same as the resource of the same name, with
most of the resource population logic de-coupled into its own function
so that it can be shared between the resource and data source.
* provider/aws: aws_alb data source language revisions
* Multiple/zero result error slightly updated to be a bit more
specific.
* Fixed relic of the copy of the resource docs (resource -> data
source)
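A minimal lookup sketch (looking up by `name`; attributes mirror the `aws_alb` resource):
```
data "aws_alb" "selected" {
  name = "example-alb"
}

output "alb_arn" {
  value = "${data.aws_alb.selected.arn}"
}
```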
This commit adds the openstack_compute_volume_attach_v2 resource. This
resource enables a volume to be attached to an instance by using the
OpenStack Compute (Nova) v2 volumeattach API.
This commit adds the openstack_blockstorage_volume_attach_v2 resource. This
resource enables a volume to be attached to an instance by using the OpenStack
Block Storage (Cinder) v2 API.
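A hedged sketch of the Compute-API variant (the `instance_id`/`volume_id` argument names are assumptions):
```
resource "openstack_compute_volume_attach_v2" "attached" {
  instance_id = "${openstack_compute_instance_v2.example.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.example.id}"
}
```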
* provider/github: add GitHub labels resource
Provides a GitHub issue label resource.
This resource allows easy management of issue labels for an
organisation's repositories. A name, and a color can be set.
These attributes can be updated without creating a new resource.
* provider/github: add documentation for GitHub issue labels resource
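A minimal sketch of the resource described above (name and color, updatable in place):
```
resource "github_issue_label" "bug" {
  repository = "example-repository"
  name       = "bug"
  color      = "ee0701"
}
```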
* provider/aws: Add aws_alb_listener data source
This adds the aws_alb_listener data source to get information on an AWS
Application Load Balancer listener.
The schema is slightly modified (only option-wise, attributes are the
same) and we use the aws_alb_listener resource read function to get the
data.
Note that the HTTPS test here may fail until
hashicorp/terraform#10180 is merged.
* provider/aws: Add aws_alb_listener data source docs
Now documented.
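A hedged lookup sketch, assuming the data source takes the listener `arn` directly:
```
data "aws_alb_listener" "front_end" {
  arn = "${var.listener_arn}"
}

output "listener_port" {
  value = "${data.aws_alb_listener.front_end.port}"
}
```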
Picked up from where #6548 left off
settings and protected_settings take JSON objects as strings to keep the extension
generic (a usage sketch follows the test output below)
TF_ACC=1 go test ./builtin/providers/azurerm -v -run "TestAccAzureRMVirtualMachineExtension" -timeout 120m
=== RUN TestAccAzureRMVirtualMachineExtension_importBasic
--- PASS: TestAccAzureRMVirtualMachineExtension_importBasic (697.55s)
=== RUN TestAccAzureRMVirtualMachineExtension_basic
--- PASS: TestAccAzureRMVirtualMachineExtension_basic (824.17s)
=== RUN TestAccAzureRMVirtualMachineExtension_concurrent
--- PASS: TestAccAzureRMVirtualMachineExtension_concurrent (929.74s)
=== RUN TestAccAzureRMVirtualMachineExtension_linuxDiagnostics
--- PASS: TestAccAzureRMVirtualMachineExtension_linuxDiagnostics (803.19s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/azurerm 3254.663s
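A minimal sketch showing `settings` passed as a raw JSON string (the publisher/type values are just one common example):
```
resource "azurerm_virtual_machine_extension" "test" {
  name                 = "hostname"
  location             = "${azurerm_resource_group.test.location}"
  resource_group_name  = "${azurerm_resource_group.test.name}"
  virtual_machine_name = "${azurerm_virtual_machine.test.name}"
  publisher            = "Microsoft.OSTCExtensions"
  type                 = "CustomScriptForLinux"
  type_handler_version = "1.2"

  settings = <<SETTINGS
{
  "commandToExecute": "hostname"
}
SETTINGS
}
```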
* Implemented EventHubs
* Missing the sidebar link
* Fixing the type
* Fixing the docs for Namespace
* Removing premium tests
* Checking the correct status code on delete
* Added a test case for the import
* Documentation for importing
* Fixing a typo
* GH-8755 - Adding in support to attach ASG to ELB as independent action
* GH-8755 - Adding in docs
* GH-8755 - Adjusting attribute name and responding to other PR feedback
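A hedged sketch, assuming the independent attachment ended up as an `aws_autoscaling_attachment` resource:
```
resource "aws_autoscaling_attachment" "asg_to_elb" {
  autoscaling_group_name = "${aws_autoscaling_group.example.id}"
  elb                    = "${aws_elb.example.id}"
}
```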
This allows us to filter for a specific ebs_volume for attachment to an
aws_instance (a usage sketch follows the test output below)
```
make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSEbsVolumeDataSource_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/11/01 12:39:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSEbsVolumeDataSource_ -timeout 120m
=== RUN TestAccAWSEbsVolumeDataSource_basic
--- PASS: TestAccAWSEbsVolumeDataSource_basic (28.74s)
=== RUN TestAccAWSEbsVolumeDataSource_multipleFilters
--- PASS: TestAccAWSEbsVolumeDataSource_multipleFilters (28.37s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 57.145s
```
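Usage sketch referenced above: filter to a single volume and hand it to an attachment (the `aws_volume_attachment` wiring is illustrative):
```
data "aws_ebs_volume" "ebs_volume" {
  most_recent = true

  filter {
    name   = "volume-type"
    values = ["gp2"]
  }
}

resource "aws_volume_attachment" "example" {
  device_name = "/dev/sdh"
  volume_id   = "${data.aws_ebs_volume.ebs_volume.id}"
  instance_id = "${aws_instance.example.id}"
}
```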
* Adding private gateway and static route resource to cloudstack provider
Testing the private gateway and static route resource requires a ROOT
account in Cloudstack
* changes requested by reviewer
* provider/aws: data source for AWS Security Group
* provider/aws: add documentation for data source for AWS Security Group
* provider/aws: data source for AWS Security Group (improve if condition and syntax)
* fix fmt
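A minimal lookup sketch (the filter name is a standard EC2 filter; lookup by id or tags should also be possible):
```
data "aws_security_group" "selected" {
  vpc_id = "${var.vpc_id}"

  filter {
    name   = "group-name"
    values = ["default"]
  }
}
```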
* Add AWS Prefix List data source.
AWS Prefix List data source acceptance test.
AWS Prefix List data source documentation.
* Improve error message when PL not matched.
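A minimal sketch: resolve the S3 prefix list for a region and reuse its CIDR blocks (the service name format is the standard AWS one):
```
data "aws_prefix_list" "s3" {
  name = "com.amazonaws.us-west-2.s3"
}

output "s3_cidr_blocks" {
  value = "${data.aws_prefix_list.s3.cidr_blocks}"
}
```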
* govendor: update go-cloudstack dependency
* Separate security groups and rules
This commit separates the creation and management of security groups and security group rules.
It extends the `icmp` options so you can supply `icmp_type` and `icmp_code` to enable more specific configs.
And it adds lifecycle management of security group rules, so that security groups do not have to be recreated when rules are added or removed.
This is particularly helpful since the `cloudstack_instance` cannot update a security group without having to recreate the instance.
In CloudStack >= 4.9.0 it is possible to update security groups of existing instances, but as that was only just added in the latest version it seems a bit too soon to start using it (it would cause backwards incompatibility issues for people or service providers running older versions).
* Add and update documentation
* Add acceptance tests
* Converting archive_file to datasource.
* Ratcheting back new dir perms.
* Ratcheting back new dir perms.
* goimports
* Adding output_base64sha256 attribute to archive_file.
Updating docs.
* Dropping CheckDestroy since this is a data source.
* Correcting data source attribute checks.
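A minimal sketch of the converted data source, including the new `output_base64sha256` attribute:
```
data "archive_file" "init" {
  type        = "zip"
  source_file = "${path.module}/init.tpl"
  output_path = "${path.module}/files/init.zip"
}

output "archive_checksum" {
  value = "${data.archive_file.init.output_base64sha256}"
}
```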
use us-west-2 region in tests
update test with working config
provider/aws: Update EMR contribution with passing test, polling for instance in DELETE method
remove defaulted role
document emr_cluster
rename aws_emr -> aws_emr_cluster
update docs for name change
update delete timeout/polling
rename emr taskgroup to emr instance group
default instance group count to 0, down from 60
update to ref emr_cluster, emr_instance_group
more cleanups for instance groups; need to read and update
add read, delete method for instance groups
refactor the read method to separate out the fetching of the specific group
more refactoring for finding instance groups
update emr instance group docs
err check on reading HTTP. Don't return the error, just log it
refactor the create method to catch optionals
additional cleanups, added a read method
update test to be non-master-only
wrap up the READ method for clusters
poll for instance group to be running after a modification
patch up a possible deref
provider/aws: EMR cleanups
fix test naming
remove outdated docs
randomize emr_profile names
The primary purpose of this data source is to ask the question "what is
my current region?", but it can also be used to retrieve the endpoint
hostname for a particular (possibly non-current) region, should that be
useful for some esoteric case.
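A minimal sketch of the two uses described above, assuming the data source selects the current region via a `current` flag:
```
# "What is my current region?"
data "aws_region" "current" {
  current = true
}

# Endpoint hostname for a specific (possibly non-current) region.
data "aws_region" "ireland" {
  name = "eu-west-1"
}

output "ireland_endpoint" {
  value = "${data.aws_region.ireland.endpoint}"
}
```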
This adds a singular data source in addition to the existing plural one.
This allows retrieving data about a specific AZ.
As a helper for writing reusable modules, the AZ letter (without its
usual region name prefix) is exposed so that it can be used in
region-agnostic mappings where a different value is used per AZ, such as
for subnet numbering schemes.
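A minimal sketch of the singular data source and the exposed AZ letter (`name_suffix` is an assumption for the attribute name):
```
data "aws_availability_zone" "target" {
  name = "us-west-2a"
}

# The bare AZ letter ("a") for use in region-agnostic lookup maps.
output "az_letter" {
  value = "${data.aws_availability_zone.target.name_suffix}"
}
```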
In c244e5a6 this resource was converted to a data source, but that was
a mistake since data sources are expected to produce stable results on
each run, and yet certificate requests contain a random nonce as part of
the signature.
Additionally, using the data source as a managed resource through the
provided compatibility shim was not actually working, since "Read" was
trying to parse the private key out of a SHA1 hash of the key, which is
what we place in state due to the StateFunc on that attribute.
By restoring this we restore Terraform's ability to produce all of the
parts of a basic PKI/CA, which is useful for creating dev environments
and bootstrapping PKI for production environments.
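A hedged sketch of the restored managed-resource form (attribute names follow the tls provider's conventions):
```
resource "tls_private_key" "example" {
  algorithm = "RSA"
}

resource "tls_cert_request" "example" {
  key_algorithm   = "${tls_private_key.example.algorithm}"
  private_key_pem = "${tls_private_key.example.private_key_pem}"

  subject {
    common_name  = "example.com"
    organization = "Example, Inc"
  }
}
```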
- add remote state provider backed by Joyent's Manta
- add documentation of Manta remote state provider
- explicitly check for passphrase-protected SSH keys, which are currently
unsupported, and generate a more helpful error (borrowed from Packer's
solution to the same problem):
https://github.com/mitchellh/packer/blob/master/common/ssh/key.go#L27
This is a rework of pull request #6213 submitted by @joshuaspence,
adjusted to work with the remote state data source. We also add
a deprecation warning for people using the unsupported API, and retain
the ability to refer to "_local" as well as "local" for users in a mixed
version environment.
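A minimal sketch of the supported form after this change ("local" rather than the deprecated "_local"):
```
data "terraform_remote_state" "network" {
  backend = "local"

  config {
    path = "${path.module}/../network/terraform.tfstate"
  }
}
```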
This is a requirement for enabling CloudWatch Logging on Kinesis
Firehose (a usage sketch follows the test output below)
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSCloudWatchLogStream_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/09/02 16:19:14 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSCloudWatchLogStream_ -timeout 120m
=== RUN TestAccAWSCloudWatchLogStream_basic
--- PASS: TestAccAWSCloudWatchLogStream_basic (22.31s)
=== RUN TestAccAWSCloudWatchLogStream_disappears
--- PASS: TestAccAWSCloudWatchLogStream_disappears (21.21s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 43.538s
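A minimal sketch of the new resource alongside its log group (the Firehose wiring itself is out of scope here):
```
resource "aws_cloudwatch_log_group" "firehose" {
  name = "example-firehose-logs"
}

resource "aws_cloudwatch_log_stream" "errors" {
  name           = "delivery-errors"
  log_group_name = "${aws_cloudwatch_log_group.firehose.name}"
}
```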
This commit adds a new "attachment" style resource for setting the
policy of an AWS S3 bucket. This is desirable such that the ARN of the
bucket can be referenced in an IAM Policy Document.
In addition, we now suppress diffs on the (now-computed) policy in the
S3 bucket for structurally equivalent policies, which prevents flapping
because of whitespace and map ordering changes made by the S3 endpoint.
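A minimal sketch of the attachment-style resource, with the bucket ARN interpolated into its own policy (the policy statement is illustrative):
```
resource "aws_s3_bucket" "logs" {
  bucket = "example-log-bucket"
}

resource "aws_s3_bucket_policy" "logs" {
  bucket = "${aws_s3_bucket.logs.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "${aws_s3_bucket.logs.arn}/*"
  }]
}
POLICY
}
```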
* provider/aws: Add docs for Default Route Table
* add new default_route_table_id attribute, test to VPC
* stub
* add warning to docs
* rough implementation
* first test
* update test, add swap test
* fix typo
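A minimal sketch of adopting a VPC's default route table via the new attribute:
```
resource "aws_default_route_table" "main" {
  default_route_table_id = "${aws_vpc.main.default_route_table_id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }
}
```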
* provider/consul: first stab at adding prepared query support
* provider/consul: flatten pq resource
* provider/consul: implement updates for PQ's
* provider/consul: implement PQ delete
* provider/consul: add acceptance tests for prepared queries
* provider/consul: add template support to PQ's
* provider/consul: use substructures to express optional related components for PQs
* website: first pass at consul prepared query docs
* provider/consul: PQ's support datacenter option and store_token option
* provider/consul: remove store_token on PQ's for now
* provider/consul: allow specifying a separate stored_token
* website: update consul PQ docs
* website: add link to consul_prepared_query resource
* vendor: update github.com/hashicorp/consul/api
* provider/consul: handle 404's when reading prepared queries
* provider/consul: prepared query failover dcs is a list
* website: update consul PQ example usage
* website: re-order arguments for consul prepared queries
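A hedged sketch pulling together the options mentioned above (failover datacenters as a list, a separate `stored_token`); exact argument names are assumptions:
```
resource "consul_prepared_query" "example" {
  name         = "example"
  datacenter   = "dc1"
  stored_token = "..."
  service      = "example-service"
  only_passing = true

  failover {
    nearest_n   = 3
    datacenters = ["dc2", "dc3"]
  }

  dns {
    ttl = "8m"
  }
}
```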
This commit adds a resource, acceptance tests and documentation for the
Target Groups for Application Load Balancers.
This is the second in a series of commits to fully support the new
resources necessary for Application Load Balancers.
This commit adds a resource, acceptance tests and documentation for the
new Application Load Balancer (aws_alb). We choose to use the name alb
over the package name, elbv2, in order to avoid confusion.
This is the first in a series of commits to fully support the new
resources necessary for Application Load Balancers.
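A minimal sketch of the two new resources together (the security group, subnet and VPC references are assumed):
```
resource "aws_alb" "main" {
  name            = "example-alb"
  internal        = false
  security_groups = ["${aws_security_group.alb.id}"]
  subnets         = ["${aws_subnet.public_a.id}", "${aws_subnet.public_b.id}"]
}

resource "aws_alb_target_group" "app" {
  name     = "example-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.main.id}"
}
```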
When you need to enable monitoring for Redshift, you need to create the
correct policy in the bucket for logging. This needs to have the
Redshift Account ID for a given region. This data source provides a
handy lookup for this (a usage sketch follows the test output below)
http://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-enable-logging
% make testacc TEST=./builtin/providers/aws
% TESTARGS='-run=TestAccAWSRedshiftAccountId_basic'
==> Checking that code complies with gofmt requirements...
/Users/stacko/Code/go/bin/stringer
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/08/16 14:39:35 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v
-run=TestAccAWSRedshiftAccountId_basic -timeout 120m
=== RUN TestAccAWSRedshiftAccountId_basic
--- PASS: TestAccAWSRedshiftAccountId_basic (19.47s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 19.483s
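A hedged sketch, assuming the data source ended up named `aws_redshift_service_account` and exposes the account ID as its `id`:
```
data "aws_redshift_service_account" "main" {}

# Interpolate this account ID into the audit-log bucket policy.
output "redshift_logging_account_id" {
  value = "${data.aws_redshift_service_account.main.id}"
}
```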
This commit adds the `state rm` command for removing an address from
state. It is the result of a rebase from pull-request #5953 which was
lost at some point during the Terraform 0.7 feature branch merges.
This data source provides access during configuration to the ID of the
AWS account for the connection to AWS. It is primarily useful for
interpolating into policy documents, for example when creating the
policy for an ELB or ALB access log bucket.
This will need revisiting and further testing once the work for
AssumeRole is integrated.
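A minimal sketch of the data source and its `account_id` attribute:
```
data "aws_caller_identity" "current" {}

output "account_id" {
  value = "${data.aws_caller_identity.current.account_id}"
}
```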
This commit adds VPN Gateway attachment resource, and also an initial tests and
documentation stubs.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
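A minimal sketch (assuming the attachment takes a VPC ID and a VPN gateway ID):
```
resource "aws_vpn_gateway_attachment" "main" {
  vpc_id         = "${aws_vpc.main.id}"
  vpn_gateway_id = "${aws_vpn_gateway.main.id}"
}
```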
* Improve influxdb provider
- reduce public funcs. We should not make things public that don't need to be public
- improve tests by verifying remote state
- add influxdb_user resource
allows you to manage influxdb users:
```
resource "influxdb_user" "admin" {
name = "administrator"
password = "super-secret"
admin = true
}
```
and also database specific grants:
```
resource "influxdb_user" "ro" {
name = "read-only"
password = "read-only"
grant {
database = "a"
privilege = "read"
}
}
```
* Grant/revoke admin access properly
* Add continuous_query resource
see
https://docs.influxdata.com/influxdb/v0.13/query_language/continuous_queries/
for the details about continuous queries:
```
resource "influxdb_database" "test" {
name = "terraform-test"
}
resource "influxdb_continuous_query" "minnie" {
name = "minnie"
database = "${influxdb_database.test.name}"
query = "SELECT min(mouse) INTO min_mouse FROM zoo GROUP BY time(30m)"
}
```
* provider/mysql: User Resource
This commit introduces a mysql_user resource. It includes basic
functionality of adding a user@host along with a password.
* provider/mysql: Grant Resource
This commit introduces a mysql_grant resource. It can grant a set
of privileges to a user against a whole database.
* provider/mysql: Adding documentation for user and grant resources
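A minimal sketch combining the two resources (the privilege list is illustrative):
```
resource "mysql_user" "jdoe" {
  user     = "jdoe"
  host     = "example.com"
  password = "insecure-example-password"
}

resource "mysql_grant" "jdoe" {
  user       = "${mysql_user.jdoe.user}"
  host       = "${mysql_user.jdoe.host}"
  database   = "app"
  privileges = ["SELECT", "UPDATE"]
}
```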
Previously the consul_keys resource did double-duty as both a reader and
writer of values from the Consul key/value store, but that made its
interface rather confusing and complex, as well as having all of the other
general problems associated with read-only resources.
Here we split the functionality such that reading is done with the
consul_keys data source while writing is done with the consul_keys
resource.
The old read behavior of the resource is still supported, but it's no
longer documented (except as a deprecation note) and will generate
deprecation warnings when used.
In future it should be possible to simplify the consul_keys resource by
removing all of the read support, but that is deferred for now to give
users a chance to gracefully migrate to the new data source.
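A minimal sketch of the read path moving to the data source (the `var.<name>` interpolation mirrors the old resource's read behaviour):
```
data "consul_keys" "app" {
  datacenter = "nyc1"

  key {
    name    = "ami"
    path    = "service/app/launch_ami"
    default = "ami-1234"
  }
}

resource "aws_instance" "app" {
  ami           = "${data.consul_keys.app.var.ami}"
  instance_type = "t2.micro"
}
```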
Sidebar:
- Rename "Azure (Resource Manager)" to "Microsoft Azure" and sort
accordingly
- Rename "Azure (Service Management)" to "Microsoft Azure (Legacy ASM)"
and sort accordingly
ARM provider docs:
- Name changes everywhere to Microsoft Azure Provider
- Mention and link to "legacy Azure Service Management Provider" in opening paragraph
- Sidebar gains link at bottom to Azure Service Management Provider
ASM provider docs:
- Name changes everywhere to Azure Service Management Provider
- Sidebar gains link at bottom to Microsoft Azure Provider
- Every page gets a header with the following
- "NOTE: The Azure Service Management provider is no longer being actively developed by HashiCorp employees. It continues to be supported by the community. We recommend using the Azure Resource Manager based [Microsoft Azure Provider] instead if possible."
* Add scaleway provider
this PR allows the entire scaleway stack to be managed with terraform
example usage looks like this:
```
provider "scaleway" {
api_key = "snap"
organization = "snip"
}
resource "scaleway_ip" "base" {
server = "${scaleway_server.base.id}"
}
resource "scaleway_server" "base" {
name = "test"
# ubuntu 14.04
image = "aecaed73-51a5-4439-a127-6d8229847145"
type = "C2S"
}
resource "scaleway_volume" "test" {
name = "test"
size_in_gb = 20
type = "l_ssd"
}
resource "scaleway_volume_attachment" "test" {
server = "${scaleway_server.base.id}"
volume = "${scaleway_volume.test.id}"
}
resource "scaleway_security_group" "base" {
name = "public"
description = "public gateway"
}
resource "scaleway_security_group_rule" "http-ingress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "inbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
resource "scaleway_security_group_rule" "http-egress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "outbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official scaleway api just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
following @stack72's suggestion, renaming the provider's api_key for more clarity
* Make tests less chatty by using custom logger
The template resources don't actually need to retain any state, so they
are good candidates to be data sources.
This includes a few tweaks to the acceptance tests -- now configured to
run as unit tests -- since it seems that they have been slightly broken
for a while now. In particular, the "update" cases are no longer tested
because updating is not a meaningful operation for a data source.
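A minimal sketch of the data source form (the template path and variable are illustrative):
```
data "template_file" "init" {
  template = "${file("${path.module}/init.tpl")}"

  vars {
    consul_address = "${aws_instance.consul.private_ip}"
  }
}

# Rendered output is available as ${data.template_file.init.rendered}
```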
This resource (unlike the others in this provider) isn't stateful, so it
is a good candidate to be a data source.
The old resource form is preserved via the standard shim in helper/schema,
which will generate a deprecation warning but will still allow the
resource to be used.
* small doc update
* provider/atlas: Add docs for Artifact Data Source
* provider/atlas: Remove a test method that isn't used
* provider/atlas: Add Data Source for Atlas Artifact
* provider/atlas: Show deprecation error on atlas_artifact resource
* Add SES resource
* Detect ReceiptRule deletion outside of Terraform
* Handle order of rule actions
* Add position field to docs
* Fix hashes, add log messages, and other small cleanup
* Fix rebase issue
* Fix formatting
this datasource allows terraform to work with externally modified state, e.g.
when you're using an ECS service which is continuously updated by your CI via the
AWS CLI.
right now you'd have to wrap terraform in a shell script which looks up the
current image digest, so that running terraform won't change the updated service.
using the aws_ecs_container_definition data source you can now leverage
terraform, removing the wrapper entirely.
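A minimal sketch of the data source described above (`image`, including its digest, is the attribute you would feed back into new task definitions):
```
data "aws_ecs_container_definition" "app" {
  task_definition = "${aws_ecs_task_definition.app.id}"
  container_name  = "app"
}

output "current_image" {
  value = "${data.aws_ecs_container_definition.app.image}"
}
```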