These two provider options are optional, but if they are not set the user
will be prompted to enter values.
By changing them to use envDefaultFuncAllowMissing, the values are still
read from the environment when they are set and safely discarded when
they are not.
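Roughly, an "allow missing" environment default looks like the sketch
below; the signature and the AWS_SECURITY_TOKEN example are illustrative,
not the provider's exact helper:

```go
// Hypothetical sketch of an "allow missing" environment default func.
package main

import (
	"fmt"
	"os"
)

// envDefaultFuncAllowMissing returns a default func that reads the named
// environment variable when it is set, and quietly returns "" when it is
// not, rather than failing or prompting.
func envDefaultFuncAllowMissing(key string) func() (interface{}, error) {
	return func() (interface{}, error) {
		if v, ok := os.LookupEnv(key); ok {
			return v, nil
		}
		return "", nil // missing is fine; the value is simply discarded
	}
}

func main() {
	f := envDefaultFuncAllowMissing("AWS_SECURITY_TOKEN") // illustrative name
	v, _ := f()
	fmt.Printf("default value: %q\n", v)
}
```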
* 'master' of github.com:reverbdotcom/terraform: (524 commits)
docs: tweaks to RELEASING
Minor change to docs
Update CHANGELOG.md
Update DynamoDB example docs to remove non-key attributes; update test to remove non-key attribute from attribute set to prevent infinite planning loops
Update CHANGELOG.md
use /usr/bin/env bash
provider/aws: fix go vet
provider/aws: ignore providers with Meta nil
update CHANGELOG
provider/aws: Code cleanups for Spot Requests
provider/aws: fix db_subnet acc test
Fixing the tests
Fixes issue #2568
Update CHANGELOG.md
Update CHANGELOG.md
fixes typo
Fixed void Azure network config bug.
provider/aws: ecs task definition is deregistered correctly
provider/azure: fixup storage service test
provider/docker: [tests] change images
...
We changed the way validation works for providers so that they aren't
always configured when they have computed attributes. The result is that
sometimes Configure won't be called, hence Meta is nil.
AWS accepts uppercase DB Subnet Group names - it just automatically
downcases them. We already had logic to handle that - so we
intentionally had an acctest with uppercase characters that was now
failing.
Loosening the regexp to allow uppercase letters for now - we can discuss
if we want to tighten the validation as a separate question.
/cc @radeksimko @catsby
When surrounding the version with quotes, even no version (an empty
string) will be accepted as a parameter. The install.sh script treats an
empty version string the same as when no version is set at all, so it
will then just use the latest available version.
favor of attempting to detect if the initial container ever enters
running state, and erroring out if not. It will re-check the container
once every 500ms for 15 seconds total; future work could make that
configurable.
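A minimal sketch of that polling loop (checkRunning stands in for whatever
Docker API call the provider actually uses):

```go
// Sketch only: re-check the container every 500ms for up to 15 seconds.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForContainerRunning(checkRunning func() (bool, error)) error {
	deadline := time.Now().Add(15 * time.Second)
	for time.Now().Before(deadline) {
		running, err := checkRunning()
		if err != nil {
			return err
		}
		if running {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // re-check every 500ms
	}
	return errors.New("container never reached running state within 15s")
}

func main() {
	start := time.Now()
	err := waitForContainerRunning(func() (bool, error) {
		return time.Since(start) > 2*time.Second, nil // pretend it starts after ~2s
	})
	fmt.Println("result:", err)
}
```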
Links cause more than one name to be returned for a container. As a
result, looking only at the first element of the container names could
cause a container to not be found, leading Terraform to remove it from
state and attempt to recreate it.
the Docker API get those containers running. Otherwise when
you try to start a container linking to them, the start command
will fail, leading to an error.
Before this option (`os_type`) existed, the provisioner would use the
connection type to determine the targeted OS. When not supplying a value
for `os_type`, it will fall back to the old behaviour, so this is fully
backwards compatible.
Fixes crash in #2431
Decided that `findResourceSecurityGroup` should return an error when
the SG is not found, since the callers cannot happily continue with a
`nil` SG
Also passes through a few error cases that were being swallowed.
/cc @catsby
Some AMIs have a RootDeviceName like "/dev/sda1" that does not appear as a
DeviceName in the BlockDeviceMapping list (which will instead have
something like "/dev/sda")
While this seems like it breaks an invariant of AMIs, it ends up working
on the AWS side, and AMIs like this are common enough that we need to
special case it so Terraform does the right thing.
Our heuristic is: if the RootDeviceName does not appear in the
BlockDeviceMapping, assume that the DeviceName of the first
BlockDeviceMapping entry serves as the root device.
fixes #2224
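A rough sketch of the heuristic in plain Go; blockDeviceMapping is a
stand-in struct, not the real aws-sdk-go type:

```go
// Sketch: fall back to the first mapping when the AMI's RootDeviceName
// is not listed in its BlockDeviceMapping entries.
package main

import "fmt"

type blockDeviceMapping struct {
	DeviceName string
}

// rootDeviceName returns the device name to treat as the root device.
// If the AMI's RootDeviceName (e.g. "/dev/sda1") appears in the mappings,
// use it; otherwise assume the first mapping (e.g. "/dev/sda") is the root.
func rootDeviceName(amiRoot string, mappings []blockDeviceMapping) (string, bool) {
	for _, m := range mappings {
		if m.DeviceName == amiRoot {
			return amiRoot, true
		}
	}
	if len(mappings) > 0 {
		return mappings[0].DeviceName, true
	}
	return "", false
}

func main() {
	mappings := []blockDeviceMapping{{DeviceName: "/dev/sda"}, {DeviceName: "/dev/sdb"}}
	name, ok := rootDeviceName("/dev/sda1", mappings)
	fmt.Println(name, ok) // "/dev/sda" true — fell back to the first mapping
}
```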
* master:
Update CHANGELOG.md
Update CHANGELOG.md
Added affinity group resource.
update link to actually work
provider/azure: Fix SQL client name to match upstream
add warning message to explain scenario of conflicting rules
typo
remove debugging
Update CHANGELOG.md
provider/aws: Add docs for autoscaling_policy + cloudwatch_metric_alarm
provider/aws: Add autoscaling_policy
provider/aws: Add cloudwatch_metric_alarm
rename method, update docs
clean up some conflicts with
clean up old, incompatible test
update tests with another example
update test
remove meta usage, stub test
fix existing tests
Consider security groups with source security groups when hashing
* master: (23 commits)
typo
Update CHANGELOG.md
provider/aws: Add docs for autoscaling_policy + cloudwatch_metric_alarm
provider/aws: Add autoscaling_policy
provider/aws: Add cloudwatch_metric_alarm
Update CHANGELOG.md
Update CHANGELOG.md
provider/template: don't error when rendering fails in Exists
Update CHANGELOG.md
Added Azure SQL server and service support.
Update CHANGELOG.md
docs: clarify wording around destroy/apply args
Getting Started: Added a Next Step upon finishing install.
docs: add description of archive format to download page
docs: snapshot plugin dependencies when releasing
add v0.5.3 transitory deps
Fixes support for changing just the read / write capacity of a GSI
Change sleep time for DynamoDB table waits from 3 seconds to 5 seconds
Remove request for attribute changes
Fix AWS SDK imports
...
The Exists function can run in a context where the contents of the
template have changed, but it uses the old set of variables from the
state. This means that when the set of variables changes, rendering will
fail in Exists. This was returning an error, but really it just needs to
be treated as a scenario where the template needs re-rendering.
fixes #2344 and possibly a few other template issues floating around
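Conceptually the change looks something like this sketch, where render()
stands in for the provider's actual template rendering and the stored ID
is a hash of the previously rendered contents:

```go
// Sketch: a render failure in Exists means "needs re-rendering", not a
// hard error.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func hashOf(s string) string {
	sum := sha256.Sum256([]byte(s))
	return hex.EncodeToString(sum[:])
}

// exists reports whether the stored resource still matches what would be
// rendered now. If rendering fails (e.g. the variable set changed), report
// "does not exist" so the template is re-rendered instead of erroring.
func exists(storedID string, render func() (string, error)) (bool, error) {
	rendered, err := render()
	if err != nil {
		return false, nil // treat as "needs re-rendering", not an error
	}
	return hashOf(rendered) == storedID, nil
}

func main() {
	id := hashOf("hello, ${name}")
	ok, _ := exists(id, func() (string, error) { return "", fmt.Errorf("missing variable") })
	fmt.Println(ok) // false — the plan will re-render the template
}
```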
Previously these would conflict if you had multiple security group rules
with the same ingress or egress ports but different source security
groups, because only the CIDR blocks were considered (and they are empty
when using source security groups).
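A simplified illustration of why the hash needs to cover source security
groups as well as CIDR blocks (the rule struct and hash function are
illustrative, not the provider's exact code):

```go
// Sketch: hash a rule over ports, protocol, CIDR blocks AND source
// security groups so rules differing only in source SGs don't collide.
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
	"strings"
)

type rule struct {
	FromPort, ToPort int
	Protocol         string
	CIDRBlocks       []string
	SourceSGs        []string
}

func hashRule(r rule) uint32 {
	var b strings.Builder
	fmt.Fprintf(&b, "%d-%d-%s-", r.FromPort, r.ToPort, r.Protocol)
	for _, list := range [][]string{r.CIDRBlocks, r.SourceSGs} {
		sorted := append([]string(nil), list...)
		sort.Strings(sorted)
		fmt.Fprintf(&b, "%s-", strings.Join(sorted, ","))
	}
	return crc32.ChecksumIEEE([]byte(b.String()))
}

func main() {
	a := rule{443, 443, "tcp", nil, []string{"sg-aaaa"}}
	c := rule{443, 443, "tcp", nil, []string{"sg-bbbb"}}
	fmt.Println(hashRule(a) != hashRule(c)) // true — previously these collided
}
```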
Updated to include migrations (from clint@ctshryock.com)
Signed-off-by: Clint Shryock <clint@ctshryock.com>
The regex solution is extremely complex, which makes it hard to debug and
understand; the original switches and comments lay out the various cases
in a straightforward fashion. Plus, implementing namespace/repo support
in the original code was a simple strings.Join call.
This commit converts the openstack compute instance's security groups to
a set from a list.
This fixes ordering problems which force or indicate changes to security
groups where none exist, and mimics the functionality in the aws
provider's compute resource.
Includes fixes from dupuy addressing crashes due to an empty state.
I snuck this in with #2263 because I thought it was simply a stylistic
clarity thing, but it actually generates a resource-replacement-forcing
diff for existing resources that don't have this set in the config.
Definitely don't want that. :P
/cc @catsby
* master: (91 commits)
update CHANGELOG
update CHANGELOG
state/remote: more canonical Go for skip TLS verify
update CHANGELOG
update CHANGELOG
command/apply: flatten multierrors
provider/aws: improve iam_policy err msgs
acc tests: ensure each resource has a _basic test
aws/provider convert _normal tests to _basic
go fmt
Endpoint type configuration for OpenStack provider
Fix page title for aws_elasticache_cluster
Update CHANGELOG.md
Corrected Frankfurt S3 Website Endpoint, fixes #2258
Only run Swift tests when Swift is available
Implement OpenStack/Swift remote
Minor correction to aws_s3_bucket docs
docs: Fix wrong title (aws_autoscaling_notification)
provider/aws: clarify scaling timeout error
Update CHANGELOG.md
...
This is an iteration on the great work done by @dalehamel in PRs #2095
and #2109.
The core team went back and forth on how to best model Spot Instance
Requests, requesting and then rejecting a separate-resource
implementation in #2109.
After more internal discussion, we landed once again on a separate
resource to model Spot Instance Requests. Out of respect for
@dalehamel's already-significant donated time, with this I'm attempting
to pick up the work to take this across the finish line.
Important architectural decisions represented here:
* Spot Instance Requests are always of type "persistent", to properly
match Terraform's declarative model.
* The spot_instance_request resource exports several attributes that
are expected to be constantly changing as the spot market changes:
spot_bid_status, spot_request_state, and instance_id. Creating
additional resource dependencies based on these attributes is not
recommended, as Terraform diffs will be continually generated to keep
up with the live changes.
* When a Spot Instance Request is deleted/canceled, an attempt is made
to terminate the last-known attached spot instance. Race conditions
dictate that this attempt cannot guarantee that the associated spot
instance is terminated immediately.
Implementation notes:
* This version of aws_spot_instance_request borrows a lot of common
code from aws_instance.
* In order to facilitate borrowing, we introduce `awsInstanceOpts`, an
internal representation of instance details that's meant to be shared
between resources. The goal here would be to refactor ASG Launch
Configurations to use the same struct.
* The new aws_spot_instance_request acc. test is passing.
* All aws_instance acc. tests remain passing.
When a user tried to create an `aws_network_interface` resource without specifying the `private_ips` or `security_groups` attributes the API call to AWS would fail with a 500 HTTP error. Length checks have been put in place for both of these attributes before they are added to the `ec2.CreateNetworkInterfaceInput` struct.
Documentation was also added for the `aws_network_interface` resource.
While cidr_block is required for static route creation, there are
apparently cases (involving some combination of VPNs, Customer Gateways,
and automatic route propagation) where the cidr_block can come back nil.
This means we cannot assume it's there in the set hash calculation.
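A minimal sketch of guarding the set hash when cidr_block comes back
unset (field names illustrative):

```go
// Sketch: only fold cidr_block into the set hash when it is actually
// present, since propagated routes may come back without one.
package main

import (
	"fmt"
	"hash/crc32"
)

func routeHash(m map[string]interface{}) uint32 {
	s := ""
	if v, ok := m["cidr_block"]; ok && v != nil {
		s += v.(string) + "-"
	}
	if v, ok := m["gateway_id"]; ok && v != nil {
		s += v.(string) + "-"
	}
	return crc32.ChecksumIEEE([]byte(s))
}

func main() {
	withCIDR := map[string]interface{}{"cidr_block": "10.0.0.0/16", "gateway_id": "vgw-123"}
	noCIDR := map[string]interface{}{"cidr_block": nil, "gateway_id": "vgw-123"}
	fmt.Println(routeHash(withCIDR), routeHash(noCIDR)) // no panic on the nil case
}
```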
We need to decode both the Raw config and the parsed Config to make
sure all set keys are visible. Otherwise, keys that need to be
interpolated later will be missing, causing the validation to fail.
Stop setting the Elasticache port number by default, and require the
Elasticache port number to be specified.
Also updated acceptance tests to supply the port number upon resource
declaration.
Fixes #2084
Besides the remaining docs, I also updated the code so any Virtual
Network related API calls are now synchronised by using a mutex (thanks
@aznashwan for pointing that out!).
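The synchronisation is just the usual shared-mutex pattern; a minimal
sketch with a placeholder for the actual Azure API call:

```go
// Sketch: serialise all Virtual Network related API calls behind one mutex.
package main

import (
	"fmt"
	"sync"
)

var vnetMutex sync.Mutex // shared by every virtual-network operation

func updateVirtualNetwork(name string, apply func(string) error) error {
	vnetMutex.Lock()
	defer vnetMutex.Unlock()
	return apply(name) // placeholder for the real Azure API call
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			_ = updateVirtualNetwork(fmt.Sprintf("subnet-%d", i), func(string) error { return nil })
		}(i)
	}
	wg.Wait()
	fmt.Println("all virtual network updates applied one at a time")
}
```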
* upstream/master: (21 commits)
fix typo
fix typo, use awslabs/aws-sdk-go
Update CHANGELOG.md
More internal links in template documentation.
providers/aws: Requires ttl and records attributes if there isn't an ALIAS block.
Condense switch fallthroughs into expr lists
Fix docs for aws_route53_record params
Update CHANGELOG.md
provider/aws: Add IAM Server Certificate resource
aws_db_instance docs updated per #2070
providers/aws: Adds link to AWS docs about RDS parameters.
Downgrade middleman to 3.3.12 as 3.3.13 does not exist
providers/aws: Clarifies db_security_group usage.
"More more" no more!
Indentation issue
Export ARN in SQS queue and SNS topic / subscription; updated tests for new AWS SDK errors; updated documentation.
Changed Required: false to Optional: true in the SNS topic schema
Initial SNS support
correct resource name in example
added attributes reference section for AWS_EBS_VOLUME
...
Only the azure_instance is fully working (for both Linux and Windows
instances) now, but it needs some tests. network and disk are pretty much
empty, but the idea is clear so they will not take too much time…
commit a92fe29b909af033c4c57257ddcb6793bfb694aa
Author: Michael Austin <m_austin@me.com>
Date: Wed May 20 16:35:38 2015 -0400
updated to new style of awserr
commit 428271c9b9ca01ed2add1ffa608ab354f520bfa0
Merge: b3bae0e 883e284
Author: Michael Austin <m_austin@me.com>
Date: Wed May 20 16:29:00 2015 -0400
Merge branch 'master' into 2544-terraform-s3-forceDelete
commit b3bae0efdac81adf8bb448d11cc1ca62eae75d94
Author: Michael Austin <m_austin@me.com>
Date: Wed May 20 12:06:36 2015 -0400
removed extra line
commit 85eb40fc7ce24f5eb01af10eadde35ebac3c8223
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 14:27:19 2015 -0400
stray [
commit d8a405f7d6880c350ab9fccb70b833d2239d9915
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 14:24:01 2015 -0400
addressed feedback concerning parsing of aws error in a more standard way
commit 5b9a5ee613af78e466d89ba772959bb38566f50e
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 10:55:22 2015 -0400
clarify comment to highlight recursion
commit 91043781f4ba08b075673cd4c7c01792975c2402
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 10:51:13 2015 -0400
addressed feedback about reusing err variable and unneeded parens
commit 95e9c3afbd34d4d09a6355b0aaeb52606917b6dc
Merge: 2637edf db095e2
Author: Michael Austin <m_austin@me.com>
Date: Mon May 18 19:15:36 2015 -0400
Merge branch 'master' into 2544-terraform-s3-forceDelete
commit 2637edfc48a23b2951032b1e974d7097602c4715
Author: Michael Austin <m_austin@me.com>
Date: Fri May 15 15:12:41 2015 -0400
optimize delete to delete up to 1000 at once instead of one at a time
commit 1441eb2ccf13fa34f4d8c43257c2e471108738e4
Author: Michael Austin <m_austin@me.com>
Date: Fri May 15 12:34:53 2015 -0400
Revert "hook new resource provider into configuration"
This reverts commit e14a1ade5315e3276e039b745a40ce69a64518b5.
commit b532fa22022e34e4a8ea09024874bb0e8265f3ac
Author: Michael Austin <m_austin@me.com>
Date: Fri May 15 12:34:49 2015 -0400
this file should not be in this branch
commit 645c0b66c6f000a6da50ebeca1d867a63e5fd9f1
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 21:15:29 2015 -0400
buckets tagged force_destroy will delete all files and then delete buckets
commit ac50cae214ce88e22bb1184386c56b8ba8c057f7
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 12:41:40 2015 -0400
added code to delete policy from s3 bucket
commit cd45e45d6d04a3956fe35c178d5e816ba18d1051
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 12:27:13 2015 -0400
added code to read bucket policy from bucket, however, it's not working as expected currently
commit 0d3d51abfddec9c39c60d8f7b81e8fcd88e117b9
Merge: 31ffdea 8a3b75d
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 08:38:06 2015 -0400
Merge remote-tracking branch 'hashi_origin/master' into 2544-terraform-s3-policy
commit 31ffdea96ba3d5ddf5d42f862e68c1c133e49925
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 16:01:52 2015 -0400
add name for use with resource id
commit b41c7375dbd9ae43ee0d421cf2432c1eb174b5b0
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 14:48:24 2015 -0400
Revert "working policy assignment"
This reverts commit 0975a70c37eaa310d2bdfe6f77009253c5e450c7.
commit b926b11521878f1527bdcaba3c1b7c0b973e89e5
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 14:35:02 2015 -0400
moved policy to its own provider
commit 233a5f443c13d71f3ddc06cf034d07cb8231b4dd
Merge: e14a1ad c003e96
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 12:39:14 2015 -0400
merged origin/master
commit e14a1ade5315e3276e039b745a40ce69a64518b5
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 12:26:51 2015 -0400
hook new resource provider into configuration
commit 455b409cb853faae3e45a0a3d4e2859ffc4ed865
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 12:26:15 2015 -0400
dummy resource provider
commit 0975a70c37eaa310d2bdfe6f77009253c5e450c7
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 09:42:31 2015 -0400
working policy assignment
commit 3ab901d6b3ab605adc0a8cb703aa047a513b68d4
Author: Michael Austin <m_austin@me.com>
Date: Tue May 12 10:39:56 2015 -0400
added policy string to schema
This landed in aws-sdk-go yesterday, breaking the AWS provider in many places:
3c259c9586
Here, with much sedding, grepping, and manual massaging, we attempt to
catch Terraform up to the new `awserr.Error` interface world.
Additionally:
Update CHANGELOG
Make cooldown period optional for autoscaler
Refactor autoscaler and add more error checking
Instance template now supports image aliases
Replace instance group manager 'size' -- use target_size (now writeable)
Add documentation for autoscaler
Add beta warnings to docs
- rename test to have _basic suffix, so we can run it individually
- use us-east-1 for basic test, since that's probably the only region that has
Classic
- update the indexing of nodes; cache nodes are 4 digits
Needs to wait for len(cluster.CacheNodes) == cluster.NumCacheNodes, since
apparently that takes a bit of time and the initial response always has
an empty collection of nodes
This commit follows suit of #1897 by fixing volume-related
parameters which allow the volume attach acceptance test
to work. It also re-enables the volume attach test.
This commit adds a server group resource. Users can create server
groups with different policies. If a server is launched in a certain
group, the server will adhere to that policy. For example, servers
can be made to all launch on the same compute node or different compute
nodes.
This reworks the template lifecycle a bit such that we get nicer diff
behavior.
First, we tick ForceNew on for both filename and vars, so that the diff
indicates that the template will be "replaced" on change. This is mostly
cosmetic, but it also tracks conceptually with the fact that the
identifier we use is a hash of the contents, so any change essentially
makes a "new resource".
Second, we change the Exists implementation to only return `false` when
there has been a change in the rendered template. This lets descendent
resources see the computed value changing so that they'll properly
trigger in the plan.
Fixes #1898
Refs #1866 (but does not fix, there's another deeper issue there)
I added a debug log line in the last commit, only to find out it's now
logging the same info twice. So I removed the double entry and tweaked
the existing one.
This fixes the failing test in the preceding commit, where optional
params are changed from their default "computed" values.
These weren't working well with `HttpHealthCheck.Patch()` because it was
attempting to set all unspecified params to Go's type defaults (e.g. 0
for int64), which the API rejected.
Changing the call to `HttpHealthCheck.Update()` seemed to fix this, but
it still didn't allow you to reset a param back to its default by no
longer specifying it.
Setting defaults like this, which match the Terraform docs, seems like
the best all-round solution. Includes two additional acceptance tests
which verify the params are really getting set correctly.
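The "defaults in the schema" approach looks roughly like the sketch
below, against the helper/schema package of that era; the exact default
values here are illustrative:

```go
// Sketch of schema-level defaults for the optional health check params;
// the default values are illustrative, not necessarily the resource's.
package google

import "github.com/hashicorp/terraform/helper/schema"

func httpHealthCheckOptionalFields() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"port": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  80, // illustrative default
		},
		"check_interval_sec": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  5,
		},
		"timeout_sec": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  5,
		},
	}
}
```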
By first creating a very simple resource that mostly uses the default
values and then changing the two thresholds from their computed defaults.
This currently fails with the following error and will be fixed in a
subsequent commit:
--- FAIL: TestAccComputeHttpHealthCheck_update (5.58s)
testing.go:131: Step 1 error: Error applying: 1 error(s) occurred:
* 1 error(s) occurred:
* 1 error(s) occurred:
* Error patching HttpHealthCheck: googleapi: Error 400: Invalid value for field 'resource.port': '0'. Must be greater than or equal to 1
More details:
Reason: invalid, Message: Invalid value for field 'resource.port': '0'. Must be greater than or equal to 1
Reason: invalid, Message: Invalid value for field 'resource.checkIntervalSec': '0'. Must be greater than or equal to 1
Reason: invalid, Message: Invalid value for field 'resource.timeoutSec': '0'. Must be greater than or equal to 1
Mixture of hard and soft tabs, which isn't picked up by `go fmt` because
it's inside a string. Standardise on hard-tabs since that is what's used
in the rest of the code.
The commit is pretty complete and has a tested/working provisioner for
both SSH and WinRM. There are a few tests, but we maybe need another
few to have better coverage. Docs are also included…
* ctiwald/ct/fix-protocol-problem:
aws: Document the odd protocol = "-1" behavior in security groups.
aws: Fixup structure_test to handle new expandIPPerms behavior.
aws: Add security group acceptance tests for protocol -1 fixes.
aws: error on expandIPPerms(...) if our ports and protocol conflict.
Users can input a limited number of protocol names (e.g. "tcp") as
inputs to network ACL rules, but the API only supports valid protocol
numbers:
http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
Preserve the convenience of protocol names and simultaneously support
numbers by only writing numbers to the state file. Also use numbers
when hashing the rules, to keep everything consistent.
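The idea is a small name-to-number table consulted before anything is
written to state or hashed; a sketch with a partial table:

```go
// Sketch: translate the handful of accepted protocol names to their IANA
// numbers so the state file and the hashes always contain numbers.
package main

import (
	"fmt"
	"strconv"
)

var protocolIntegers = map[string]int{
	"icmp": 1,
	"tcp":  6,
	"udp":  17,
	"all":  -1,
}

// protocolToNumber accepts either a known name or an already-numeric
// value and always returns the number as a string for the state file.
func protocolToNumber(p string) (string, error) {
	if n, ok := protocolIntegers[p]; ok {
		return strconv.Itoa(n), nil
	}
	if _, err := strconv.Atoi(p); err == nil {
		return p, nil
	}
	return "", fmt.Errorf("unsupported protocol %q: use an IANA protocol number", p)
}

func main() {
	for _, p := range []string{"tcp", "17", "-1"} {
		n, _ := protocolToNumber(p)
		fmt.Println(p, "->", n)
	}
}
```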
AWS will accept any overly-specific IP/mask combination, such as
10.1.2.2/24, but will store it by its implied network: 10.1.2.0/24.
This results in hashing errors, because the remote API will return
values whose hashes are out of sync with the local configuration file.
Enforce a stricter API rule than AWS. Force users to use valid masks,
and run a quick calculation on their input to discover their intent.
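Detecting an overly-specific CIDR like 10.1.2.2/24 is straightforward
with the standard library; a sketch of the check (not the provider's
exact validation):

```go
// Sketch: reject CIDR blocks whose host bits are set (e.g. 10.1.2.2/24),
// since AWS would silently store the implied network (10.1.2.0/24) and
// the hashes would drift out of sync.
package main

import (
	"fmt"
	"net"
)

func validateCIDRBlock(s string) error {
	ip, ipnet, err := net.ParseCIDR(s)
	if err != nil {
		return err
	}
	if !ip.Equal(ipnet.IP) {
		return fmt.Errorf("%q is not a valid network: did you mean %q?", s, ipnet.String())
	}
	return nil
}

func main() {
	fmt.Println(validateCIDRBlock("10.1.2.0/24")) // <nil>
	fmt.Println(validateCIDRBlock("10.1.2.2/24")) // error suggesting 10.1.2.0/24
}
```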
AWS doesn't store ports for -1 protocol rules, thus the read from the
API will always come up with a different hash. Force the user to make a
deliberate port choice when enabling -1 protocol rules. All from_port
and to_port values on these rules must be 0.
AWS includes default rules with all network ACL resources which cannot
be modified by the user. Don't attempt to store them locally or change
them remotely if they are already stored -- it'll consistently result
in hashing problems.
resourceAwsNetworkAclRead swallowed these errors resulting in rules
that never properly updated. Implement an entry-to-maplist function
that'll allow us to write something that Set knows how to read.
AWS hides its credentials in many places: multiple env vars, config
files, EC2 metadata.
Terraform currently recognizes only the env vars; to use the other
options, you had to put in a dummy empty value for access_key and
secret_key.
Rather than duplicate all the AWS checks, ask the AWS SDK to fetch
credentials earlier.
If an AutoScalingGroup is in the middle of performing a Scaling
Activity, it cannot be deleted, and yields a ScalingActivityInProgress
error.
Retry the delete for up to 5m so we don't choke on this error. It's
telling us something's in progress, so we'll keep trying until the
scaling activity has completed.
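A rough sketch of that retry; deleteGroup stands in for the real API
call, and the provider itself uses the helper/resource retry machinery
(with the ~5m budget above) rather than a hand-rolled loop:

```go
// Sketch: keep retrying the delete while AWS reports
// ScalingActivityInProgress, up to a fixed time budget.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errScalingInProgress = errors.New("ScalingActivityInProgress")

func deleteASGWithRetry(deleteGroup func() error, wait, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := deleteGroup()
		if err == nil {
			return nil
		}
		if !errors.Is(err, errScalingInProgress) || time.Now().After(deadline) {
			return err
		}
		time.Sleep(wait) // something is still in progress; try again shortly
	}
}

func main() {
	calls := 0
	err := deleteASGWithRetry(func() error {
		calls++
		if calls < 3 {
			return errScalingInProgress
		}
		return nil
	}, 10*time.Millisecond, time.Second)
	fmt.Println(calls, err) // 3 <nil>
}
```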
On ASG creation, waits for up to 10m for desired_capacity or min_size
healthy nodes to show up in the group before continuing.
With CBD (create_before_destroy) and proper HealthCheck tuning, this allows us to guarantee safe
ASG replacement.
* 'master' of github.com:hashicorp/terraform:
provider/aws: detach VPN gateway with proper ID
update CHANGELOG
provider/aws: Update ARN in instanceProfileReadResult
provider/aws: remove placement_group from acctest
core: module targeting
Added support for more complex image repos, such as images on a private registry that are stored as namespace/name
Depends on there being an existing placement group in the account called
"terraform-placement-group" - we'll need to circle back around to cover
this with AccTests after TF gets an `aws_placement_group` resource.
- Users
- Groups
- Roles
- Inline policies for the above three
- Instance profiles
- Managed policies
- Access keys
This is most of the data types provided by IAM. There are a few things
missing, but the functionality here is probably sufficient for 95% of
the cases. Makes a dent in #28.
Ingress and egress rules given a "-1" protocol don't have ports when
Read out of AWS. This results in hashing problems, as a local
config file might contain port declarations AWS can't ever return.
Rather than making ports optional fields, which carries with it a huge
headache trying to distinguish between zero-value attributes (e.g.
'to_port = 0') and attributes that are simply omitted, simply force the
user to opt-in when using the "-1" protocol. If they choose to use it,
they must now specify "0" for both to_port and from_port. Any other
configuration will error.
Do directory expansion on filenames.
Add basic acceptance tests. Code coverage is 72.5%.
Uncovered code is uninteresting and/or impossible error cases.
Note that this required adding a knob to
helper/resource.TestStep to allow transient
resources.
* We now return an error when you set the script_path to
C:\Windows\Temp explaining this is currently not supported
* The fix in PR #1588 is converted to the updated setup in this PR
including the unit tests
Last thing to do is add a few tests for the WinRM communicator…
The reason the shebang is removed from these tests is that the
shebang is only needed for SSH/Linux connections. So in the new setup
the shebang line is added in the SSH communicator instead of in the
resource provisioner itself…
This is needed as preparation for adding WinRM support. There is still
one error in the tests which needs another look, but other than that it
seems like we're now ready to start working on the WinRM part…
As a module author, I'd like to be able to create a module that includes
a key_pair. I don't care about the name, I only know I don't want it to
collide with anything else in the account.
This allows my module to be used multiple times in the same account
without having to do anything funky like adding a user-specified unique
name parameter.
Currently, if a record isn't found, we get an error like:
Couldn't find record: Record not found
This change improves the error message to add more context:
Couldn't find record ID (123456789) for domain (example.com): Record not found
Currently, we aren't correctly setting the ids, and are setting both
`security_groups` and `vpc_security_group_ids`. As a result, we really only use
the former.
We also don't actually update the latter in the `update` method.
This PR fixes both issues, correctly reading `security_groups` vs.
`vpc_security_group_ids` and allows users to update the latter without
destroying the Instance when in a VPC.
As we've seen elsewhere, the SDK now wants nils instead of empty arrays
for collections
fixes #1696
thanks @jstremick for pointing me in the right direction
The upstream behavior here changed, and the request needs a `nil`
instead of an empty slice to indicate that we _don't_ want to filter on
Network ACL IDs.
fixes #1634
Adds an "alias" field to the provider which allows creating multiple instances
of a provider under different names. This provides support for configurations
such as multiple AWS providers for different regions. In each resource, the
provider can be set with the "provider" field.
(thanks to Cisco Cloud for their support)
If reading an S3 bucket's state, and that bucket has been deleted, don't
fail with a 404 error. Instead, update the state to reflect that the
bucket does not exist. Fixes #1574.
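This is the usual "not found means gone from state" pattern in Read; a
hedged sketch with a stand-in error type rather than the SDK's:

```go
// Sketch: if the read call 404s, clear the resource from state instead
// of failing. statusErr and the head function stand in for the real SDK.
package main

import "fmt"

type statusErr struct{ code int }

func (e statusErr) Error() string { return fmt.Sprintf("status %d", e.code) }

type state struct{ id string }

func (s *state) SetId(id string) { s.id = id }

func readBucket(s *state, head func(string) error) error {
	if err := head(s.id); err != nil {
		if se, ok := err.(statusErr); ok && se.code == 404 {
			s.SetId("") // bucket is gone; Terraform will plan to recreate it
			return nil
		}
		return err
	}
	return nil
}

func main() {
	s := &state{id: "my-bucket"}
	_ = readBucket(s, func(string) error { return statusErr{404} })
	fmt.Printf("id after read: %q\n", s.id) // "" — removed from state, no error
}
```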
EIP with VPC only returns an allocationID. However, for standard EC2 we
need to look up the PublicIP. When we use an example with a standard EC2
instance (here `t1.micro`):
```
resource "aws_instance" "example" {
ami = "ami-25773a24"
instance_type = "t1.micro"
}
resource "aws_eip" "ip" {
instance = "${aws_instance.example.id}"
}
```
then in this case allocationID will be nil, but publicIP will be
non-nil (which is used later for associating the IP). So check for
allocationId only if the EIP is of domain `vpc`.
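Roughly, the association logic keys off the EIP's domain; the eip struct
below is a stand-in for the SDK type:

```go
// Sketch: only expect an allocation ID for domain "vpc"; for "standard"
// EIPs, associate by public IP instead.
package main

import "fmt"

type eip struct {
	Domain       string
	AllocationID string
	PublicIP     string
}

func associationIdentifier(e eip) (kind, value string) {
	if e.Domain == "vpc" {
		return "allocation_id", e.AllocationID
	}
	return "public_ip", e.PublicIP // standard EC2: AllocationID is empty
}

func main() {
	standard := eip{Domain: "standard", PublicIP: "54.0.0.10"}
	vpc := eip{Domain: "vpc", AllocationID: "eipalloc-123"}
	fmt.Println(associationIdentifier(standard))
	fmt.Println(associationIdentifier(vpc))
}
```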
* master: (511 commits)
Update CHANGELOG.md
core: avoid diff mismatch on NewRemoved fields during -/+
Update CHANGELOG.md
update CHANGELOG
Fix minor error in index/count docs
terraform: remove debug
terraform: when pruning destroy, only match exact nodes, or exact counts
up version for dev
update CHANGELOG
terraform: prune tainted destroys if no tainted in state [GH-1475]
update CHANGELOG
config/lang: support math on variables through implicits
update CHANGELOG
update CHANGELOG
update CHANGELOG
providers/aws: set id outside if/else
providers/aws: set ID after creation
core: remove dead code from pre-deposed refactor
website: update LC docs to note name is optional
security_groups field expects a list of Security Group Names, not IDs
...
It doesn't need to be a List of Maps, it can just be a Map.
We're also safe to remove a previous workaround I stuck in there.
The config parsing is equivalent between a list of maps and a plain map,
so we just need a state migration to make this backwards compatible.
fixes #1508
In a DESTROY/CREATE scenario, the plan diff will be run against the
state of the old instance, while the apply diff will be run against an
empty state (because the state is cleared when the destroy node does its
thing.)
For complex attributes, this can result in keys that seem to disappear
between the two diffs, when in reality everything is working just fine.
Same() needs to take into account this scenario by analyzing NewRemoved
and treating as "Same" a diff that does indeed have that key removed.
Fixes #1409
Resource set hash calculation is a bit of a devil's bargain when it
comes to optional, computed attributes.
If you omit the optional, computed attribute from the hash function,
changing it in an existing config is not properly detected.
If you include the optional, computed attribute in the hash and do not
specify a value for it in the config, then you'll end up with a
perpetual, unresolvable diff.
We'll need to think about how to get the best of both worlds, here, but
for now I'm switching us to the latter and documenting the fact that
changing these attributes requires manual `terraform taint` to apply.
These bugs were found by an additional check added in #1443
* Reversed nil err check meant that block devices were broken :(
* Fixing the err check revealed a few missed pointer derefs
* Unlike instances, ephemeral block devices do come back in
`BlockDeviceMappings` from `DescribeLaunchConfigurations` calls, so
we need to recognize them and filter them properly. Even though
they're not set as computed, I'm doing a `d.Set` since it doesn't
hurt and it gives us the benefit of basic drift detection.
Route 53 records were silently erroring out when saving the records returned
from AWS, because they weren't being presented as an array of strings like we
expected.
Turns out AssociatePublicIPAddress was always being set, but the AWS
APIs don't like that when you're launching into EC2 Classic and return a
validation error at ASG launch time.
Fixes #1410
This removes `ForceNew` from `records` and `ttl`, and introduces a
`resourceAwsRoute53RecordUpdate` function. The `resourceAwsRoute53RecordUpdate`
falls through to the `resourceAwsRoute53RecordCreate` function, which utilizes
AWS `UPSERT` behavior and diffs for us.
`Name` and `Type` are used by AWS in the `UPSERT`, so only records with matching
`name` and `type` can be updated. Others are created as new, so we leave the
`ForceNew` behavior here.
These changes should fix #1367:
* `ebs_optimized` gets `Computed: true` and set from `Read`
* `ephemeral_block_device` loses `Computed: true`
* explicitly set `root_block_device` to empty from `Read`
While I was in there (tm):
* Send pointers to `d.Set` so we can use its internal nil check.
If a given resource does not define an `Update` function, then all of
its attributes must be specified as `ForceNew`, lest Applys fail with
"doesn't support update" like #1367.
This is something we can detect automatically, so this adds a check for
it when we validate provider implementations.
This commit resolves an issue where the tenant-network api extension
does not exist. The caveat is that the user must either specify no
networks (single network environment) or can only specify UUIDs for
network configurations.
* upstream/master: (295 commits)
Update CHANGELOG.md
provider/aws: Allow DB Parameter group to change in RDS
return error if failed to set tags on Route 53 zone
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
...
* master: (167 commits)
return error if failed to set tags on Route 53 zone
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
core: docs for targeted operations
core: targeted operations
...
* upstream/master:
return error if failed to set tags on Route 53 zone
cleanups
provider/aws: Finish Tag support for Route 53 zone
provider/aws: Add tags to Route53 hosted zones
* master: (172 commits)
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
core: docs for targeted operations
core: targeted operations
user_data support
...
* d.Set has a pointer nil check we can lean on
* need to be a bit more conservative about nil checks on nested structs;
(this fixes the RDS acceptance tests)
/cc @fanhaf
This commit changes how the network info is read from OpenStack.
It pulls all relevant information from server.Addresses and merges
it with the available information from the networks parameters.
The access_v4, access_v6, and floating IP information is then
determined from the result.
A MAC address parameter is also added since that information is
available in server.Addresses.
This commit allows the user to specify a network by name rather than
just uuid. This is done via the os-tenant-networks api extension.
This works for both neutron and nova-network.
This commit causes the resource to manage floating IPs by way of the
os-floating-ips API.
At the moment, it works with both nova-network and Neutron environments,
but if you use multiple Neutron networks, the network that supports the
floating IP must be listed first.
s3.GetBucketTagging returns an error if there are no tags associated
with a bucket. Consequently, any configuration with a tagless s3 bucket
would fail with an error, "the TagSet does not exist".
Handle that error more appropriately, interpreting it as an empty set of
tags.
The `getFirstNetworkID` does not work correctly because the first
network is not always the private network of the instance.
As long as the `GET /networks` call gives a list that also contains public
networks, we don't have any guarantee that the first network is the
one we want. Furthermore, with a loop over the network list we are
not able to determine which network is the one we want.
Instead of retrieving the network ID and then finding the port ID,
it's better to basically take the first port ID of the instance.
This commit ensures that a volume is detached from all instances
before it is deleted.
It also adds in an `attachment` exported parameter that shows details
of the volume's attachment(s).
This commit populates access_ip_v6 by either the AccessIPv6 attribute
or by finding the first available IPv6 address.
This commit retains the original feature of setting the default ssh
connection to the IPv4 address unless one is not found. IPv6 access
can still be enabled by explicitly setting it in the resource parameters.
This commit also removes d.Set("host") in favor of SetConnInfo
This commit renames flavor_ref to flavor_id and adds the flavor_name
parameter. Users can now specify either a flavor ID or name when launching
instances.
This commit renames image_ref to image_id and adds the image_name
parameter. Users can now specify either an image UUID or image name
when launching instances.
image_name is preferable, as deployers/sysadmins regularly
deprecate/remove outdated and insecure images. Using a consistent
naming scheme allows end-users to always retrieve a working image.
Some clouds don't implement IP addresses correctly.
Instead of failing during provisioning, we just take the
first available IP and try with that one.
* f-aws-rds-tags:
fix index out of range error
fix formatting
upgrade VPC Ids and DB Subnet to be optionally computed
fix typo
provider/aws: Introduce IAM connection
* master:
provider/aws: Fix dependency violation when deleting Internet Gateways
command/remote-config: failing tests
update CHANGELOG
command/remote-config: do a pull with `terraform remote config`
command/remote-{pull,push}: colorize and show success output
command/remote-config: lowercase the type so that Atlas works, for example
command/remote-config: show flag parse errors
command/remote-config: remove weird error case that shows no error message
command: when setting up state, only write back if local is newer
* master: (66 commits)
provider/aws: Fix dependency violation when deleting Internet Gateways
command/remote-config: failing tests
update CHANGELOG
command/remote-config: do a pull with `terraform remote config`
command/remote-{pull,push}: colorize and show success output
command/remote-config: lowercase the type so that Atlas works, for example
command/remote-config: show flag parse errors
command/remote-config: remove weird error case that shows no error message
command: when setting up state, only write back if local is newer
minor code cleanups to get acceptance tests passing
update CHANGELOG
providers/digitalocean: add dot in GET response
providers/digitalocean: force fqdn in dns rr value
update CHANGELOG
small code cleanup
Add proper reading/updating of tags for S3
provider/aws: Add tags to S3
Documentation for ASG Tags added
Tags support added for AWS ASG
command/output: don't panic if no root module in state [GH-1263]
...
* master:
update CHANGELOG
providers/digitalocean: add dot in GET response
providers/digitalocean: force fqdn in dns rr value
update CHANGELOG
Add disk size to google_compute_instance disk blocks.
'project' should be set to the project's ID, not its name.
Don't error when enabling DNS hostnames in a VPC
Correct AWS VPC or route table read functions
Updates to GCE Instances and Instance Templates to allow for false values to be set for the auto_delete setting.
Update GCE Instance Template tests now that existing disk must exist prior to template creation.
Update Google API import to point to the new location.
add network field to the network_interface
I was working on building a validation to check the user-provided
"device_name" for "root_block_device" on AWS Instances, when I realized
that if I can check it, I might as well just derive it automatically!
So that's what we do here - when you customize the details of the root
block device, the device name just comes from the selected AMI.
The AWS API call ModifyVpcAttribute will allow only one attribute to be
modified at a time. Modifying both results in the error:
Fields for multiple attribute types specified: enableDnsHostnames, enableDnsSupport
Restructure the provider to honor this restriction.
Also, enable DNS support before attempting to enable DNS hostnames,
since the former is a prerequisite of the latter.
Additionally, fix what must have been a copy&paste error, setting
enable_dns_support to the value of enable_dns_hostnames.
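In other words, the update becomes two sequential calls with DNS support
first; a sketch with a placeholder for the ModifyVpcAttribute call:

```go
// Sketch: ModifyVpcAttribute accepts only one attribute per call, so
// enable DNS support first, then DNS hostnames. modify stands in for the
// SDK call.
package main

import "fmt"

func updateVpcDNS(enableSupport, enableHostnames bool, modify func(attr string, val bool) error) error {
	// DNS support is a prerequisite for DNS hostnames, so set it first.
	if err := modify("enableDnsSupport", enableSupport); err != nil {
		return err
	}
	if err := modify("enableDnsHostnames", enableHostnames); err != nil {
		return err
	}
	return nil
}

func main() {
	_ = updateVpcDNS(true, true, func(attr string, val bool) error {
		fmt.Printf("ModifyVpcAttribute(%s=%v)\n", attr, val) // one attribute per call
		return nil
	})
}
```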
If the state file contained a VPC or a route table which no longer
exists, Terraform would fail to create the correct plan, which is to
recreate them.
In the case of VPCs, this was due to incorrect error handling. The AWS
SDK returns an aws.APIError, not a *aws.APIError, on error. When the VPC
no longer exists, upon attempting to refresh state Terraform would
simply exit with an error.
For route tables, the provider would recognize that the route table no
longer existed, but would not make the appropriate call to update the
state as such. Thus there'd be no crash, but also no plan to re-create
the route table.
Though not directly connected, trying to delete a subnet and security group in
parallel can cause a dependency violation from the subnet, claiming there are
dependencies.
This commit fixes that by allowing subnet deletion to tolerate failure with a
retry / refresh function.
Fixes #934
Instance block devices are now managed by three distinct sub-resources:
* `root_block_device` - introduced previously
* `ebs_block_device` - all additional ebs-backed volumes
* `ephemeral_block_device` - instance store / ephemeral devices
The AWS API support around BlockDeviceMapping is pretty confusing. It's
a single collection type that supports these three members each of which
has different fields and different behavior.
My biggest hiccup came from the fact that Instance Store volumes do not
show up in any response BlockDeviceMapping for any EC2 `Describe*` API
calls. They're only available from the instance meta-data service as
queried from inside the node.
This removes `block_device` altogether for a clean break from old
configs. New configs will need to sort their `block_device`
declarations into the three new types. The field has been marked
`Removed` to indicate this to users.
With the new block device format being introduced, we need to ensure
Terraform is able to properly read statefiles written in the old format.
So we use the new `helper/schema` facility of "state migrations" to
transform statefiles in the old format to something that the current
version of the schema can use.
Fixes #858
Fixes a bug in Route53 and wildcard entries. Refs #501.
Also fixes:
- an issue in the library where we don't fully wait for the results, because the
error code/condition changed with the migration to aws-sdk-go
- a limitation in the test, where we only consider the first record returned
* master:
provider/aws: Fix encoding bug with AWS Instance
minor style cleanups
Tags Schema
Added Tagging
Added vpc refactor in aws sdk go
Removed additional variable for print, added for debugging
Using hashicorp/aws-sdk-go
Changed things around as suggested by @catsby
Refactor with Acceptance Tests
VPC Refactor
First refactor
Added Connection to config
* master: (69 commits)
upgrade tests and remove ICMPTypeCode for now
helper/ssh: update import location
clean up
provider/aws: Convert AWS Network ACL to aws-sdk-go
Update website docs on AWS RDS encryption field
more test updates
provider/aws update Network ACL tests
code cleanup on subnet check
restore IOPS positioning
Code cleanup
Update CHANGELOG.md
Bugfix: Add tags on AWS IG creation, not just on update
fix nit-pick from go vet
remove duplicated function
provider/aws: Convert AWS Route Table Association to aws-sdk-go
Cleanups: Restore expandIPPerms, remove flattenIPPerms
clean up debug output to make go vet happy
providers/aws: Convert AWS VPC Peering to aws-sdk-go
provider/aws: Add env default for AWS_ACCOUNT_ID in VPC Peering connection
convert route table tests to aws-sdk-go
...
* master:
Code cleanup
Update CHANGELOG.md
fix nit-pick from go vet
remove duplicated function
provider/aws: Convert AWS Route Table Association to aws-sdk-go
Cleanups: Restore expandIPPerms, remove flattenIPPerms
clean up debug output to make go vet happy
providers/aws: Convert AWS VPC Peering to aws-sdk-go
provider/aws: Add env default for AWS_ACCOUNT_ID in VPC Peering connection
convert route table tests to aws-sdk-go
provider/aws: Convert AWS Route Table to aws-sdk-go
providers/aws: iops in root device skipped when output state
Give route table assoc its own copy of this method for now
provider/aws: Convert Main Route Table assoc. to aws-sdk-go
aws/Route53 record creation timeout 10->30 mins
provider/aws: Convert AWS Security Group to aws-sdk-go
Fixing up the tests to make them pass correctly
Fixing a corner case while retrieving a template UUID
Adding tests and docs for the new VPN resources
Adding a few new resources
Docker's API is huge and only a small subset is currently implemented,
but this is expected to grow over time. Currently it's enough to
satisfy the use cases of probably 95% of Docker users.
I'm preparing this initial pull request as a preview step for feedback.
My ideal scenario would be to develop this within a branch in the main
repository; the more eyes and testing and pitching in on the code, the
better (this would avoid a merge request-to-the-merge-request scenario,
as I figure this will be built up over the longer term, even before
a merge into master).
Unit tests do not exist yet. Right now I've just been focused on getting
initial functionality ported over. I've been testing each option
extensively via the Docker inspect capabilities.
This code (C)2014-2015 Akamai Technologies, Inc. <opensource@akamai.com>
Removing `ForceNew` from `final_snapshot_identifier` - it's a parameter
that's _only_ passed during the DeleteDBInstance API call, so it's perfectly
valid to change the attribute for an existing DB Instance.
fixes #1138
Since the default value is not available in the initial config (when
`action` or `traffic_type` is omitted), the result would be `nil`
instead of a string when trying to access one of these values.
This allows you to set lifecycle create_before_destroy = true
and fixes #532, as then we'll make a new launch config, change
the launch config on the ASG, and *then* delete the old launch
config.
Also tried adding tests which unfortunately don't seem to fail...
* master:
providers/aws: Convert Launch Configurations to awslabs/aws-sdk-go
update CHANGELOG
terraform: test post state update is called
command: StateHook for continous state updates
terraform: more state tests, fix a bug
state: deep copies are required
terraform: make DeepCopy public
state/remote: increment serial properly
state: only change serial if changed
terraform: call the EvalUpdateStateHook strategically
terraform: PostStateUpdate hook and EvalUpdateStateHook
- Remove check on password for AWS RDS Instance
- Update documentation on AWS RDS Instance regarding DB Security Groups
- Change error handling to check error code from AWS API [ci skip]
The `SourceDestCheck` attribute can only be changed via
`ModifyInstance`, so the AWS instance resource's `Create` function calls
out to `Update` before it returns to take care of applying
`source_dest_check` properly.
The `Update` function originally guarded against unnecessary API calls
with `GetOk`, which worked fine until #993 when we changed the `GetOk`
semantics to no longer distinguish between "configured and zero-value"
and "not configured".
I attempted in #1003 to fix this by switching to `HasChange` for the
guard, but this does not work in the `Create` case.
I played around with a few different ideas, none of which worked:
(a) Setting `Default: true` on `source_dest_check` has no effect
(b) Setting `Computed: true` on `source_dest_check` and adding a `d.Set`
call in the `Read` function (which will initially set the value to `true`
after instance creation). I really thought I could get this to work,
but it results in the following:
```go
d.Get("source_dest_check") // true
d.HasChange("source_dest_check") // false
d.GetChange("source_dest_check") // old: false, new: false
```
I couldn't figure out a way of coherently dealing with that result, so I
ended up throwing up my hands and giving up on the guard altogether.
We'll call `ModifyInstance` more than we have to, but this at least
yields expected behavior for both Creates and Updates.
Fixes #1020
library.
This commit updates the Route 53 Zone resource to use AWS Labs aws-sdk-go
library instead of mitchellh/goamz.
- hard code us-east-1 for Route53 region, since it's a global endpoint
- add some unit tests for CleanZoneID
Unfortunately, the acceptance tests here were improperly passing, and
allowing Subnet updates on ELBs is not as straightforward as simply
removing `ForceNew`.
Subnets on ELBs need to be managed by two explicit API calls:
* `AttachLoadBalancerToSubnets` - http://bit.ly/elbattachsubnet
* `DetachLoadBalancerFromSubnets` - http://bit.ly/elbdetachsubnet
We'll need to circle back and use these APIs to explicitly add support.
This fixes the failure of `TestAccAWSELB_AddSubnet` by removing the
test.
This reverts commit 61e91017be, reversing
changes made to 49b3afe452.
Was relying on old behavior of GetOk and therefore never properly seeing
a change from true -> false.
This fixes the acceptance test failure of
`TestAccAWSInstance_sourceDestCheck`.
The Mailgun provider was relying on an old behavior of
`ResourceData.Set` that would allow nested access to
maps. We now just build up our own maps like sane people.
AWS provides a single `BlockDeviceMapping` to manage three different
kinds of block devices:
(a) The root volume
(b) Ephemeral storage
(c) Additional EBS volumes
Each of these types has slightly different semantics [1].
(a) The root volume is defined by the AMI; it can only be customized
with `volume_size`, `volume_type`, and `delete_on_termination`.
(b) Ephemeral storage is made available based on instance type [2]. It's
attached automatically if _no_ block device mappings are specified, and
must otherwise be defined with block device mapping entries that contain
only DeviceName set to a device like "/dev/sdX" and VirtualName set to
"ephemeralN".
(c) Additional EBS volumes are controlled by mappings that omit
`virtual_name` and can specify `volume_size`, `volume_type`,
`delete_on_termination`, `snapshot_id`, and `encryption`.
After deciding to ignore root block devices to fix#859, we had users
with configurations that were attempting to manage the root block device chime
in on #913.
Terraform does not have the primitives to be able to properly handle a
single collection of resources that is partially managed and partially
computed, so our strategy here is to break out logical sub-resources for
Terraform and hide the BlockDeviceMapping inside the provider
implementation.
Now (a) is supported by the `root_block_device` sub-resource, and (b)
and (c) are still both merged together under `block_device`, though I
have yet to see ephemeral block devices working properly.
Looking into possibly separating out `ephemeral_block_device` and
`ebs_block_device` sub-resources as well, which seem like the logical
next step. We'll wait until the next big release for this, though, since
it will break backcompat.
[1] http://bit.ly/ec2bdmap
[2] http://bit.ly/instancestorebytype
Fixes #913
Refs #858
Right now we yield a perpetual diff on ASGs because we're not reading
termination policies back out in the provider.
This depends on https://github.com/mitchellh/goamz/pull/218 and fixes
it.
An `InstanceDiff` will include `ResourceAttrDiff` entries for the
"length" / `#` field of maps. This makes sense, since for something like
`terraform plan` it's useful to see when counts are changing.
The `DiffFieldReader` was not taking these entries into account when
reading maps out, and was therefore incorrectly returning maps that
included an extra `'#'` field, which was causing all sorts of havoc
for providers (extra tags on AWS instances, broken google compute
instance launch, possibly others).
* fixes #914 - extra tags on AWS instances
* fixes #883 - general core issue sprouted from #757
* removes the hack+TODO from #757
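The gist of the fix is to skip the synthetic count entries when
rebuilding a map from a flattened diff; a simplified sketch (not the
actual DiffFieldReader code):

```go
// Sketch: rebuild a map from flattened diff attributes like "tags.#",
// "tags.Name", skipping the synthetic "#" count key.
package main

import (
	"fmt"
	"strings"
)

func readMapFromDiff(prefix string, attrs map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range attrs {
		if !strings.HasPrefix(k, prefix+".") {
			continue
		}
		key := strings.TrimPrefix(k, prefix+".")
		if key == "#" {
			continue // length entry, not a real map element
		}
		out[key] = v
	}
	return out
}

func main() {
	attrs := map[string]string{"tags.#": "2", "tags.Name": "web", "tags.Env": "prod"}
	fmt.Println(readMapFromDiff("tags", attrs)) // map[Env:prod Name:web] — no "#"
}
```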
This resource allows an existing Route Table to be assigned as the
"main" Route Table of a VPC. This means that the Route Table will be
used for any subnets within the VPC without an explicit Route Table
assigned [1].
This is particularly useful in getting an Internet Gateway in place as
the default for a VPC, since the automatically created Main Route Table
does not have one [2].
Note that this resource is an abstraction over an association and does not
map directly to a CRUD-able object in AWS. In order to retain a coherent
"Delete" operation for this resource, we remember the ID of the AWS-created
Route Table and reset the VPC's main Route Table to it when this
resource is deleted.
refs #843, #748
[1] http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html#RouteTableDetails
[2] http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html#Add_IGW_Routing
If map_public_ip_on_launch was not specified, AWS picks a default of
"0", which is different than the "" in the state file, triggerinng an
update each time. Mark that parameter as Computed, avoiding the update.
This is necessary to support creating parameter groups with parameters
that require a reboot, since the RDS API will return an error when
attempting to set those parameters with ApplyMethod "immediate".
If a subnet exists in the state file and a refresh is performed, the
read function for subnets would return an error. Now it updates the
state to indicate that the subnet no longer exists, so Terraform can
plan to recreate it.
with this commit, the google compute instance acceptance tests are
passing
- remove GOOGLE_CLIENT_FILE requirement from provider tests to finish
out #452
- skip extra "#" key that shows up in metadata maps, fixes#757 and
sprouts #883 to figure out core issue
- more verbose variablenames in metadata parsing, since it took me
awhile to grok and i thought there might have been a shadowing bug in
there for a minute. maybe someday when i'm a golang master i'll be
smart enough to be comfortable with one-char varnames. :)
Several of the arguments were optional, and if omitted, they are
calculated. Mark them as such in the schema to avoid triggering an
update.
Go back to storing the password in the state file. Without doing so,
there's no way for Terraform to know the password has changed. It should
be hashed, but then interpolating the password yields a hash instead of
the password.
Make the `name` parameter optional. It's not required in any engine, and
in some (MS SQL Server) it's not allowed at all.
Drop the `skip_final_snapshot` argument. If `final_snapshot_identifier`
isn't specified, then don't make a final snapshot. As things were, it
was possible to create a resource with neither of these arguments
specified which would later fail when it was to be deleted since the RDS
API requires exactly one of the two.
Resolves issue #689.
It's now also possible to not supply any rules when the firewall is
configured with `managed = true`. This will in effect mean: make sure
no rules exist at all for the firewall.
These fixes are needed to make the provider work with master again.
There are still some issues, but they seem to be related not to the
provider, but to the changes in `helper/schema`.
This goes for the normal firewall, the egress firewall and the network
ACL.
USE WITH CAUTION! When setting `managed = true` in your config, it
means it will delete all firewall rules that are not in your config and
are therefore unknown to TF.
Also adding the new `cloudstack_egress_firewall` resource with this
commit and updating go-cloudstack to the latest API version (v4.4)
- 5.6.17 is no longer a valid mysql engine version, bumping to 5.6.21
- updating security_group_names assertion to match new set structure
introduced in #663
When DeleteInternetGateway is successful it returns a nil error value.
However, for a nil error value, the RetryFunc returns an error, yielding an
unnecessary second call to DeleteInternetGateway in the retry logic.
The logic works because DeleteInternetGateway eventually returns an ec2.Error
with error code InvalidInternetGatewayID.NotFound since the internet gateway
has been deleted in the previous call. The return value of nil breaks the
retry logic and the deletion is deemed successful.
Fix the unnecessary second call to DeleteInternetGateway by short circuiting
with a nil error value when deletion of the internet gateway is successful on
the first try.
Add an acceptance test for internet gateway deletion and remove unreachable
code while here.
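The short-circuit simply treats a nil error from the delete call as done;
a sketch of the idea with stand-ins for the retry wrapper and the API
call:

```go
// Sketch: return immediately when DeleteInternetGateway succeeds, instead
// of letting the retry wrapper call it a second time.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("InvalidInternetGatewayID.NotFound")

func deleteWithRetry(deleteIGW func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = deleteIGW()
		if err == nil {
			return nil // success on this try; don't call delete again
		}
		if errors.Is(err, errNotFound) {
			return nil // already gone
		}
	}
	return err
}

func main() {
	calls := 0
	_ = deleteWithRetry(func() error { calls++; return nil }, 5)
	fmt.Println("delete calls:", calls) // 1 — no unnecessary second call
}
```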
Update the Google Compute Engine provider to add support for service
accounts on `google_compute_instance`. Both gcloud shorthand (`compute-ro`,
`storage-ro`, etc.) and OAuth2 API endpoints are supported.
This feature is currently limited to a single service account (supporting
multiple scopes) and an automatically-generated service account email.
If not supplying the `availability_zones`, they will be computed
(meaning an update/refresh will retrieve the info and update the values
to the state file).
So without the `Computed = true` the diff will always flag this as a
change, even when it’s not.
Some instance types have a block device by default. So when selecting
such an instance type, you will not set a config for the block device,
but the update/refresh func will notice one and update the state
nonetheless.
So in those cases the `block_device` becomes a `computed` field.
1. The schema contained a few fields that were not marked as
`computed`, while they were updated inside the resource.
2. While updating the `volume_size` it was doing so with a `string`,
but in the schema this field is set as `int`.
3. The set func for calculating the hashes for the `block` set items,
also used computed values to calculate the hash. As these values will
not be in the config, but only in the state, this will always show as a
diff. The solution is to only use the fields that aren’t computed in
order to get consistent hashes.
These were all issues before, but weren't visible as such. All should
be good again now.
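Point 3 boils down to hashing only the user-settable fields; a sketch of
such a set function (field names illustrative):

```go
// Sketch: a set hash built only from fields the user configures; computed
// fields (e.g. an assigned device path) are deliberately left out so the
// hashes of config and state agree.
package main

import (
	"fmt"
	"hash/crc32"
)

func blockDeviceHash(m map[string]interface{}) int {
	// Only non-computed fields take part in the hash.
	s := fmt.Sprintf("%v-%v-%v", m["source_type"], m["volume_size"], m["boot_index"])
	return int(crc32.ChecksumIEEE([]byte(s)))
}

func main() {
	config := map[string]interface{}{"source_type": "image", "volume_size": 10, "boot_index": 0}
	state := map[string]interface{}{"source_type": "image", "volume_size": 10, "boot_index": 0,
		"device": "/dev/vda"} // computed, ignored by the hash
	fmt.Println(blockDeviceHash(config) == blockDeviceHash(state)) // true — no phantom diff
}
```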