* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple, but it's
a key part of how resource nodes get connected to provider nodes, so it's
worth specifying the intended behavior in the form of a test.
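As a hedged illustration of the mapping this function implements (resource
type prefix plus optional alias), consider the following configuration; the
names and values are illustrative only:

    # A resource of type "aws_instance" with no explicit provider is
    # connected to the default provider node, "aws".
    resource "aws_instance" "default" {
      ami           = "ami-12345678"
      instance_type = "t2.micro"
    }

    # With an alias, the dependable provider name becomes "aws.west".
    provider "aws" {
      alias  = "west"
      region = "us-west-2"
    }

    resource "aws_instance" "west" {
      provider      = "aws.west"
      ami           = "ami-12345678"
      instance_type = "t2.micro"
    }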
* Allow a provider to export a resource with the provider's name
If a provider implements only one resource of each type (managed vs. data),
it can be reasonable for the resource names to exactly match the
provider name, provided the provider name is descriptive enough for the
purpose of each resource to be obvious.
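As a hedged sketch, using a hypothetical single-purpose provider named
"dummy", a configuration could look like this:

    # Hypothetical provider whose single managed resource and single
    # data source both reuse the provider's own name.
    provider "dummy" {}

    resource "dummy" "example" {}

    data "dummy" "example" {}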
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.
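Usage might look roughly like this; the `program` and `query` arguments and
the `result` attribute reflect the protocol described above, while the
script path and query contents are placeholders:

    # Runs the given program, passes the query as JSON on stdin, and
    # expects a JSON object of string values on stdout.
    data "external" "example" {
      program = ["python", "${path.module}/example-data-source.py"]

      query = {
        id = "abc123"
      }
    }

    # The result is exported as a map of strings, e.g.
    # "${data.external.example.result.some_key}".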
* website: documentation for the "external" provider
* Add RDS DB support for OpsWorks
* Switch to a stack in a VPC
* Implement the Update method
* Add documentation
* Implement and document ForceNew resource behavior
* Implement retry for Update and Delete
* Add a test that forces a new resource
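A hedged sketch of the resulting resource; the resource and attribute names
here are assumptions based on the commit descriptions above:

    # Registers an existing RDS instance with an OpsWorks stack.
    resource "aws_opsworks_rds_db_instance" "example" {
      stack_id            = "${aws_opsworks_stack.example.id}"
      rds_db_instance_arn = "${aws_db_instance.example.arn}"
      db_user             = "admin"
      db_password         = "${var.db_password}"
    }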
This commit changes allowed_address_pairs from a TypeList to a TypeSet,
allowing arbitrary ordering. This solves the issue where a user
specifies address pairs in one order and OpenStack returns them in a
different order.
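For example, with a set the following two pair blocks produce no diff
regardless of the order OpenStack returns them in (names and addresses are
illustrative):

    resource "openstack_networking_port_v2" "example" {
      name       = "example"
      network_id = "${openstack_networking_network_v2.example.id}"

      # As a TypeSet, the ordering of these blocks no longer matters.
      allowed_address_pairs {
        ip_address = "10.0.0.10"
      }

      allowed_address_pairs {
        ip_address = "10.0.0.20"
      }
    }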
* Update to latest version of go-datadog-api
* Updates to the latest go-datadog-api version, which adds more complete
timeboard support.
* Add more complete timeboard support
* Adds support for missing timeboard fields, so now we can have nice
things like conditional formats and more.
* Document new fields in datadog_timeboard resource
* Add acceptance test for datadog timeboard changes
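A hedged sketch of the new fields; the `conditional_format` argument names
are assumptions based on the Datadog timeboard API:

    resource "datadog_timeboard" "example" {
      title       = "Example timeboard"
      description = "Managed by Terraform"
      read_only   = true

      graph {
        title = "CPU usage"
        viz   = "query_value"

        request {
          q = "avg:system.cpu.user{*}"

          # Newly supported: conditional formats on a request.
          conditional_format {
            comparator = ">"
            value      = "90"
            palette    = "white_on_red"
          }
        }
      }
    }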
* Add new aws_vpc_endpoint_route_table_association resource.
This commit adds a new resource which allows a list of route tables to be
added to and/or removed from an existing VPC Endpoint. This resource is
complementary to the existing `aws_vpc_endpoint` resource, where the
route tables might not be specified during creation (they are not required
for a VPC Endpoint to be created successfully), especially in workflows
where the route tables are not immediately known. An example of this
workflow is sketched after the review notes below.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
Additions by Kit Ewbank <Kit_Ewbank@hotmail.com>:
* Add functionality
* Add documentation
* Add acceptance tests
* Set VPC endpoint route_table_ids attribute to "Computed"
* Changes after review - Set resource ID in create function.
* Changes after code review by @kwilczynski:
* Removed error types and simplified the error handling in 'resourceAwsVPCEndpointRouteTableAssociationRead'
* Simplified logging in 'resourceAwsVPCEndpointRouteTableAssociationDelete'
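A hedged example of the workflow this enables, pairing an endpoint created
without route tables with a separate association (the service name and
resource references are illustrative):

    # Endpoint created without any route tables attached.
    resource "aws_vpc_endpoint" "s3" {
      vpc_id       = "${aws_vpc.main.id}"
      service_name = "com.amazonaws.us-east-1.s3"
    }

    # Route table attached later, once it is known.
    resource "aws_vpc_endpoint_route_table_association" "private" {
      vpc_endpoint_id = "${aws_vpc_endpoint.s3.id}"
      route_table_id  = "${aws_route_table.private.id}"
    }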
Update our instance template to include metadata_startup_script, to
match our instance resource. Also, we've resolved the diff errors around
metadata.startup-script, and people want to use that to create startup
scripts that don't force a restart when they're changed, so let's stop
disallowing it.
Also, we had a bunch of calls to `schema.ResourceData.Set` that ignored
the errors, so I added error handling for those calls. It's mostly
bundled with this code because I couldn't be sure whether it was the
root of bugs or not, so I took care of it while addressing the startup
script issue.
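A hedged sketch of a template using the newly supported field (the image
and machine type are illustrative):

    resource "google_compute_instance_template" "example" {
      machine_type = "n1-standard-1"

      disk {
        source_image = "debian-cloud/debian-8"
      }

      network_interface {
        network = "default"
      }

      # Now supported here, matching the google_compute_instance resource.
      metadata_startup_script = "echo hello > /tmp/hello"
    }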
* provider/openstack: Detect Region for Importing Resources
This commit changes the way the OpenStack region is detected and set.
Any time a region is required, the region attribute will first be
checked. Next, the OS_REGION_NAME environment variable will be checked.
While schema.EnvDefaultFunc handles this same situation, it is not
applicable when importing resources.
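Sketched behavior (the region value is illustrative): the resource-level
attribute wins when present, and the environment variable fills in
otherwise, including during import:

    resource "openstack_networking_network_v2" "example" {
      name = "example"

      # Checked first; if unset, OS_REGION_NAME is consulted. Unlike
      # schema.EnvDefaultFunc, this fallback also applies when importing.
      region = "RegionOne"
    }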
* provider/openstack: No longer ignore region in importing tests
* provider/openstack: Network and Subnet Import Fixes
This commit fixes the OpenStack Network and Subnet resources so that
importing of those resources is successful.
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD
(create-before-destroy).
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440.
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
The correct solution to implement one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to the
`count` amount in the state. This requires some fairly significant
work close to the 0.8 release date, so we can defer this to later and
adopt the 0.7.x behavior for now.