The Exists function can run in a context where the contents of the
template have changed, but it uses the old set of variables from the
state. This means that when the set of variables changes, rendering will
fail in Exists. This was returning an error, but really it just needs to
be treated as a scenario where the template needs re-rendering.
Fixes #2344 and possibly a few other template issues floating around.
Previously they would conflict if you had multiple security group rules
with the same ingress or egress ports but different source security
groups because only the CIDR blocks were considered (which are empty
when using source security groups).
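A minimal sketch of the kind of hash change this implies, assuming Terraform's helper/schema and helper/hashcode packages; the attribute names follow the security group rule schema, but the function itself is illustrative rather than the actual fix:
```
package aws

import (
	"bytes"
	"fmt"

	"github.com/hashicorp/terraform/helper/hashcode"
	"github.com/hashicorp/terraform/helper/schema"
)

// Illustrative set-hash: fold the source security groups into the hash so
// rules that differ only by source no longer collide.
func securityGroupRuleHash(v interface{}) int {
	var buf bytes.Buffer
	m := v.(map[string]interface{})
	buf.WriteString(fmt.Sprintf("%d-", m["from_port"].(int)))
	buf.WriteString(fmt.Sprintf("%d-", m["to_port"].(int)))
	buf.WriteString(fmt.Sprintf("%s-", m["protocol"].(string)))

	// CIDR blocks (may be empty when a source security group is used)
	for _, c := range m["cidr_blocks"].([]interface{}) {
		buf.WriteString(fmt.Sprintf("%s-", c.(string)))
	}
	// Source security groups: previously ignored, now part of the hash.
	if sgs, ok := m["security_groups"].(*schema.Set); ok {
		for _, sg := range sgs.List() {
			buf.WriteString(fmt.Sprintf("%s-", sg.(string)))
		}
	}
	return hashcode.String(buf.String())
}
```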
Updated to include migrations (from clint@ctshryock.com)
Signed-off-by: Clint Shryock <clint@ctshryock.com>
The regex solution is extremely complex, which makes it hard to debug and
understand; the original switches and
commenting lay out the various cases in a straightforward fashion. Plus,
implementing namespace/repo support in the original code was a simple
strings.Join call.
This commit converts the OpenStack compute instance security groups from
a list to a set.
This fixes ordering problems that force or indicate changes to security
groups where none exist, and mimics the functionality in the aws
provider's compute resource.
Includes fixes from dupuy addressing crashes due to an empty state.
I snuck this in with #2263 because I thought it was simply a stylistic
clarity thing, but it actually generates a resource-replacement-forcing
diff for existing resources that don't have this set in the config.
Definitely don't want that. :P
/cc @catsby
* master: (91 commits)
update CHANGELOG
update CHANGELOG
state/remote: more canonical Go for skip TLS verify
update CHANGELOG
update CHANGELOG
command/apply: flatten multierrors
provider/aws: improve iam_policy err msgs
acc tests: ensure each resource has a _basic test
aws/provider convert _normal tests to _basic
go fmt
Endpoint type configuration for OpenStack provider
Fix page title for aws_elasticache_cluster
Update CHANGELOG.md
Corrected Frankfurt S3 Website Endpoint fixes #2258
Only run Swift tests when Swift is available
Implement OpenStack/Swift remote
Minor correction to aws_s3_bucket docs
docs: Fix wrong title (aws_autoscaling_notification)
provider/aws: clarify scaling timeout error
Update CHANGELOG.md
...
This is an iteration on the great work done by @dalehamel in PRs #2095
and #2109.
The core team went back and forth on how to best model Spot Instance
Requests, requesting and then rejecting a separate-resource
implementation in #2109.
After more internal discussion, we landed once again on a separate
resource to model Spot Instance Requests. Out of respect for
@dalehamel's already-significant donated time, with this I'm attempting
to pick up the work to take this across the finish line.
Important architectural decisions represented here:
* Spot Instance Requests are always of type "persistent", to properly
match Terraform's declarative model.
* The spot_instance_request resource exports several attributes that
are expected to be constantly changing as the spot market changes:
spot_bid_status, spot_request_state, and instance_id. Creating
additional resource dependencies based on these attributes is not
recommended, as Terraform diffs will be continually generated to keep
up with the live changes.
* When a Spot Instance Request is deleted/canceled, an attempt is made
to terminate the last-known attached spot instance. Race conditions
dictate that this attempt cannot guarantee that the associated spot
instance is terminated immediately.
Implementation notes:
* This version of aws_spot_instance_request borrows a lot of common
code from aws_instance.
* In order to facilitate borrowing, we introduce `awsInstanceOpts`, an
internal representation of instance details that's meant to be shared
between resources. The goal here would be to refactor ASG Launch
Configurations to use the same struct (a rough sketch follows below).
* The new aws_spot_instance_request acc. test is passing.
* All aws_instance acc. tests remain passing.
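Purely as an illustration of the shared-options idea (this is not the actual `awsInstanceOpts` definition; the field set here is hypothetical):
```
// Hypothetical sketch: a common bag of instance parameters that both
// aws_instance and aws_spot_instance_request could assemble and hand to the
// EC2 API, and that ASG Launch Configurations could later reuse.
type awsInstanceOpts struct {
	ImageID            *string
	InstanceType       *string
	KeyName            *string
	SubnetID           *string
	SecurityGroupIDs   []*string
	UserData           *string
	IAMInstanceProfile *string
}
```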
When a user tried to create an `aws_network_interface` resource without specifying the `private_ips` or `security_groups` attributes the API call to AWS would fail with a 500 HTTP error. Length checks have been put in place for both of these attributes before they are added to the `ec2.CreateNetworkInterfaceInput` struct.
Documentation was also added for the `aws_network_interface` resource.
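A sketch of the guard being described, assuming present-day aws-sdk-go field names; `conn`, `d`, and the `expandStringList` helper are the provider's usual plumbing and are assumed here:
```
// Only attach the optional collections when they are non-empty; sending
// empty lists is what triggered the 500 from the API.
input := &ec2.CreateNetworkInterfaceInput{
	SubnetId: aws.String(d.Get("subnet_id").(string)),
}
if v, ok := d.GetOk("security_groups"); ok && v.(*schema.Set).Len() > 0 {
	input.Groups = expandStringList(v.(*schema.Set).List())
}
if v, ok := d.GetOk("private_ips"); ok && v.(*schema.Set).Len() > 0 {
	for _, ip := range v.(*schema.Set).List() {
		input.PrivateIpAddresses = append(input.PrivateIpAddresses,
			&ec2.PrivateIpAddressSpecification{PrivateIpAddress: aws.String(ip.(string))})
	}
}
resp, err := conn.CreateNetworkInterface(input)
```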
While cidr_block is required for static route creation, there are
apparently cases (involving some combination of VPNs, Customer Gateways,
and automatic route propagation) where the cidr_block can come back nil.
This means we cannot assume it's there in the set hash calculation.
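A hedged sketch of the guard in the set-hash function (the surrounding function and the other attributes are illustrative):
```
// cidr_block can come back nil for propagated routes, so only feed it into
// the hash when it is actually present.
func routeHash(v interface{}) int {
	var buf bytes.Buffer
	m := v.(map[string]interface{})
	if cidr, ok := m["cidr_block"]; ok && cidr != nil && cidr.(string) != "" {
		buf.WriteString(fmt.Sprintf("%s-", cidr.(string)))
	}
	buf.WriteString(fmt.Sprintf("%s-", m["gateway_id"].(string))) // illustrative second attribute
	return hashcode.String(buf.String())
}
```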
We need to decode both the Raw config and the parsed Config to make
sure all set keys are visible. Otherwise, keys that need to be
interpolated later will be missing, causing the validation to fail.
Set the Elasticache port number to have no default, and require that it
be specified explicitly.
Also updated acceptance tests to supply a port number upon resource
declaration.
Fixes #2084
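In schema terms the change is roughly this (a sketch, not the exact diff):
```
"port": &schema.Schema{
	Type:     schema.TypeInt,
	Required: true, // no implicit default any more; the port must be declared
},
```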
In addition to the remaining docs, I also updated the code so any Virtual
Network related API calls are now synchronised using a mutex (thanks
@aznashwan for pointing that out!).
* upstream/master: (21 commits)
fix typo
fix typo, use awslabs/aws-sdk-go
Update CHANGELOG.md
More internal links in template documentation.
providers/aws: Requires ttl and records attributes if there isn't an ALIAS block.
Condense switch fallthroughs into expr lists
Fix docs for aws_route53_record params
Update CHANGELOG.md
provider/aws: Add IAM Server Certificate resource
aws_db_instance docs updated per #2070
providers/aws: Adds link to AWS docs about RDS parameters.
Downgrade middleman to 3.3.12 as 3.3.13 does not exist
providers/aws: Clarifies db_security_group usage.
"More more" no more!
Indentation issue
Export ARN in SQS queue and SNS topic / subscription; updated tests for new AWS SDK errors; updated documentation.
Changed Required: false to Optional: true in the SNS topic schema
Initial SNS support
correct resource name in example
added attributes reference section for AWS_EBS_VOLUME
...
Only the azure_instance is fully working (for both Linux and Windows
instances) now, but needs some tests. The network and disk resources are pretty
much empty, but the idea is clear, so they will not take too much time…
commit a92fe29b909af033c4c57257ddcb6793bfb694aa
Author: Michael Austin <m_austin@me.com>
Date: Wed May 20 16:35:38 2015 -0400
updated to new style of awserr
commit 428271c9b9ca01ed2add1ffa608ab354f520bfa0
Merge: b3bae0e 883e284
Author: Michael Austin <m_austin@me.com>
Date: Wed May 20 16:29:00 2015 -0400
Merge branch 'master' into 2544-terraform-s3-forceDelete
commit b3bae0efdac81adf8bb448d11cc1ca62eae75d94
Author: Michael Austin <m_austin@me.com>
Date: Wed May 20 12:06:36 2015 -0400
removed extra line
commit 85eb40fc7ce24f5eb01af10eadde35ebac3c8223
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 14:27:19 2015 -0400
stray [
commit d8a405f7d6880c350ab9fccb70b833d2239d9915
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 14:24:01 2015 -0400
addressed feedback concerning parsing of aws error in a more standard way
commit 5b9a5ee613af78e466d89ba772959bb38566f50e
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 10:55:22 2015 -0400
clarify comment to highlight recursion
commit 91043781f4ba08b075673cd4c7c01792975c2402
Author: Michael Austin <m_austin@me.com>
Date: Tue May 19 10:51:13 2015 -0400
addressed feedback about reusing err variable and unneeded parens
commit 95e9c3afbd34d4d09a6355b0aaeb52606917b6dc
Merge: 2637edf db095e2
Author: Michael Austin <m_austin@me.com>
Date: Mon May 18 19:15:36 2015 -0400
Merge branch 'master' into 2544-terraform-s3-forceDelete
commit 2637edfc48a23b2951032b1e974d7097602c4715
Author: Michael Austin <m_austin@me.com>
Date: Fri May 15 15:12:41 2015 -0400
optimize delete to delete up to 1000 at once instead of one at a time
commit 1441eb2ccf13fa34f4d8c43257c2e471108738e4
Author: Michael Austin <m_austin@me.com>
Date: Fri May 15 12:34:53 2015 -0400
Revert "hook new resource provider into configuration"
This reverts commit e14a1ade5315e3276e039b745a40ce69a64518b5.
commit b532fa22022e34e4a8ea09024874bb0e8265f3ac
Author: Michael Austin <m_austin@me.com>
Date: Fri May 15 12:34:49 2015 -0400
this file should not be in this branch
commit 645c0b66c6f000a6da50ebeca1d867a63e5fd9f1
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 21:15:29 2015 -0400
buckets tagged force_destroy will delete all files and then delete buckets
commit ac50cae214ce88e22bb1184386c56b8ba8c057f7
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 12:41:40 2015 -0400
added code to delete policy from s3 bucket
commit cd45e45d6d04a3956fe35c178d5e816ba18d1051
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 12:27:13 2015 -0400
added code to read bucket policy from bucket, however, it's not working as expected currently
commit 0d3d51abfddec9c39c60d8f7b81e8fcd88e117b9
Merge: 31ffdea 8a3b75d
Author: Michael Austin <m_austin@me.com>
Date: Thu May 14 08:38:06 2015 -0400
Merge remote-tracking branch 'hashi_origin/master' into 2544-terraform-s3-policy
commit 31ffdea96ba3d5ddf5d42f862e68c1c133e49925
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 16:01:52 2015 -0400
add name for use with resource id
commit b41c7375dbd9ae43ee0d421cf2432c1eb174b5b0
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 14:48:24 2015 -0400
Revert "working policy assignment"
This reverts commit 0975a70c37eaa310d2bdfe6f77009253c5e450c7.
commit b926b11521878f1527bdcaba3c1b7c0b973e89e5
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 14:35:02 2015 -0400
moved policy to its own provider
commit 233a5f443c13d71f3ddc06cf034d07cb8231b4dd
Merge: e14a1ad c003e96
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 12:39:14 2015 -0400
merged origin/master
commit e14a1ade5315e3276e039b745a40ce69a64518b5
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 12:26:51 2015 -0400
hook new resource provider into configuration
commit 455b409cb853faae3e45a0a3d4e2859ffc4ed865
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 12:26:15 2015 -0400
dummy resource provider
commit 0975a70c37eaa310d2bdfe6f77009253c5e450c7
Author: Michael Austin <m_austin@me.com>
Date: Wed May 13 09:42:31 2015 -0400
working policy assignment
commit 3ab901d6b3ab605adc0a8cb703aa047a513b68d4
Author: Michael Austin <m_austin@me.com>
Date: Tue May 12 10:39:56 2015 -0400
added policy string to schema
This landed in aws-sdk-go yesterday, breaking the AWS provider in many places:
3c259c9586
Here, with much sedding, grepping, and manual massaging, we attempt to
catch Terraform up to the new `awserr.Error` interface world.
Additionally:
Update CHANGELOG
Make cooldown period optional for autoscaler
Refactor autoscaler and add more error checking
Instance template now supports image aliases
Replace instance group manager 'size' -- use target_size (now writeable)
Add documentation for autoscaler
Add beta warnings to docs
- rename test to have _basic suffix, so we can run it individually
- use us-east-1 for basic test, since that's probably the only region that has
Classic
- update the indexing of nodes; cache nodes are 4 digits
Needs to wait for len(cluster.CacheNodes) == cluster.NumCacheNodes, since
apparently that takes a bit of time and the initial response always has
an empty collection of nodes
This commit follows suit of #1897 by fixing volume-related
parameters which allow the volume attach acceptance test
to work. It also re-enables the volume attach test.
This commit adds a server group resource. Users can create server
groups with different policies. If a server is launched in a certain
group, the server will adhere to that policy. For example, servers
can be made to all launch on the same compute node or different compute
nodes.
This reworks the template lifecycle a bit such that we get nicer diff
behavior.
First, we tick ForceNew on for both filename and vars, so that the diff
indicates that the template will be "replaced" on change. This is mostly
cosmetic, but it also tracks conceptually with the fact that the
identifier we use is a hash of the contents, so any change essentially
makes a "new resource".
Second, we change the Exists implementation to only return `false` when
there has been a change in the rendered template. This lets descendent
resources see the computed value changing so that they'll properly
trigger in the plan.
Fixes #1898
Refs #1866 (but does not fix, there's another deeper issue there)
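A condensed sketch of that lifecycle; `render` and `hashString` stand in for the resource's real helpers:
```
var templateFileSchema = map[string]*schema.Schema{
	// filename and vars both force a new resource, so any change reads as a "replace"
	"filename": {Type: schema.TypeString, Required: true, ForceNew: true},
	"vars":     {Type: schema.TypeMap, Optional: true, ForceNew: true},
	"rendered": {Type: schema.TypeString, Computed: true},
}

// Exists only answers "has the rendered output changed?"; a failure to render
// (for example, the vars held in state no longer fit the template) is treated
// as "needs re-rendering" rather than returned as an error.
func templateFileExists(d *schema.ResourceData, meta interface{}) (bool, error) {
	rendered, err := render(d)
	if err != nil {
		return false, nil
	}
	return hashString(rendered) == d.Id(), nil
}
```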
I added a debug log line in the last commit, only to find out it’s now
logging the same info twice. So I removed the duplicate entry and tweaked
the existing one.
This fixes the failing test in the preceding commit, where optional
params are changed from their default "computed" values.
These weren't working well with `HttpHealthCheck.Patch()` because it was
attempting to set all unspecified params to Go's type defaults (eg. 0 for
int64) which the API rejected.
Changing the call to `HttpHealthCheck.Update()` seemed to fix this, but it
still didn't allow you to reset a param back to its default by no longer
specifying it.
Setting defaults like this, which match the Terraform docs, seems like the
best all round solution. Includes two additional tests for the acceptance
tests which verify the params are really getting set correctly.
The test works by first creating a very simple resource that mostly uses the default
values and then changing the two thresholds from their computed defaults.
This currently fails with the following error and will be fixed in a
subsequent commit:
--- FAIL: TestAccComputeHttpHealthCheck_update (5.58s)
testing.go:131: Step 1 error: Error applying: 1 error(s) occurred:
* 1 error(s) occurred:
* 1 error(s) occurred:
* Error patching HttpHealthCheck: googleapi: Error 400: Invalid value for field 'resource.port': '0'. Must be greater than or equal to 1
More details:
Reason: invalid, Message: Invalid value for field 'resource.port': '0'. Must be greater than or equal to 1
Reason: invalid, Message: Invalid value for field 'resource.checkIntervalSec': '0'. Must be greater than or equal to 1
Reason: invalid, Message: Invalid value for field 'resource.timeoutSec': '0'. Must be greater than or equal to 1
Mixture of hard and soft tabs, which isn't picked up by `go fmt` because
it's inside a string. Standardise on hard-tabs since that is what's used
in the rest of the code.
The commit is pretty complete and has a tested/working provisioner for
both SSH and WinRM. There are a few tests, but we maybe need another
few to have better coverage. Docs are also included…
* ctiwald/ct/fix-protocol-problem:
aws: Document the odd protocol = "-1" behavior in security groups.
aws: Fixup structure_test to handle new expandIPPerms behavior.
aws: Add security group acceptance tests for protocol -1 fixes.
aws: error on expandIPPerms(...) if our ports and protocol conflict.
Users can input a limited number of protocol names (e.g. "tcp") as
inputs to network ACL rules, but the API only supports valid protocol
numbers:
http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
Preserve the convenience of protocol names and simultaneously support
numbers by only writing numbers to the state file. Also use numbers
when hashing the rules, to keep everything consistent.
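A small sketch of the normalization this implies; the mapping covers the common names and any already-numeric input passes through untouched:
```
package aws

import (
	"strconv"
	"strings"
)

// IANA protocol numbers for the names users are allowed to type.
var protocolIntegers = map[string]int{
	"tcp":  6,
	"udp":  17,
	"icmp": 1,
	"all":  -1,
}

// protocolToNumber returns the numeric form that is written to state and
// used when hashing rules, so names and numbers hash identically.
func protocolToNumber(p string) string {
	if n, ok := protocolIntegers[strings.ToLower(p)]; ok {
		return strconv.Itoa(n)
	}
	return p // assume it is already a protocol number
}
```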
AWS will accept any overly-specific IP/mask combination, such as
10.1.2.2/24, but will store it by its implied network: 10.1.2.0/24.
This results in hashing errors, because the remote API will return
hashing results out of sync with the local configuration file.
Enforce a stricter API rule than AWS. Force users to use valid masks,
and run a quick calculation on their input to discover their intent.
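A sketch of the stricter check using the standard library; `net.ParseCIDR` returns both the address as written and the implied network, so catching `10.1.2.2/24` is a one-line comparison:
```
package aws

import (
	"fmt"
	"net"
)

// validateCIDRNetworkAddress rejects CIDR blocks whose host bits are set,
// e.g. "10.1.2.2/24", and suggests the network AWS would silently store.
func validateCIDRNetworkAddress(value string) error {
	ip, ipnet, err := net.ParseCIDR(value)
	if err != nil {
		return fmt.Errorf("%q is not a valid CIDR block: %s", value, err)
	}
	if !ip.Equal(ipnet.IP) {
		return fmt.Errorf("%q is not a valid network CIDR; did you mean %q?", value, ipnet.String())
	}
	return nil
}
```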
AWS doesn't store ports for -1 protocol rules, thus the read from the
API will always come up with a different hash. Force the user to make a
deliberate port choice when enabling -1 protocol rules. All from_port
and to_port values on these rules must be 0.
AWS includes default rules with all network ACL resources which cannot
be modified by the user. Don't attempt to store them locally or change
them remotely if they are already stored -- it'll consistently result
in hashing problems.
resourceAwsNetworkAclRead swallowed these errors resulting in rules
that never properly updated. Implement an entry-to-maplist function
that'll allow us to write something that Set knows how to read.
AWS hides its credentials in many places:
multiple env vars, config files,
EC2 metadata.
Terraform currently recognizes only the env vars;
to use the other options, you have to put in a
dummy empty value for access_key and secret_key.
Rather than duplicate all the AWS checks, ask the
AWS SDK to fetch credentials earlier.
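A sketch of that idea against today's aws-sdk-go credentials chain (the original change targeted the awslabs SDK of the day, so package names differ); the EC2 metadata provider is left out for brevity:
```
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
)

// Ask the SDK's own provider chain for credentials instead of re-implementing
// the env-var / config-file / metadata lookup inside Terraform.
func lookupCredentials() (*credentials.Credentials, error) {
	creds := credentials.NewChainCredentials([]credentials.Provider{
		&credentials.EnvProvider{},
		&credentials.SharedCredentialsProvider{},
		// the real chain also consults EC2 instance metadata
	})
	if _, err := creds.Get(); err != nil {
		return nil, fmt.Errorf("no valid AWS credentials found: %s", err)
	}
	return creds, nil
}
```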
If an AutoScalingGroup is in the middle of performing a Scaling
Activity, it cannot be deleted, and yields a ScalingActivityInProgress
error.
Retry the delete for up to 5m so we don't choke on this error. It's
telling us something's in progress, so we'll keep trying until the
scaling activity has completed.
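A plain-Go sketch of the retry loop (the provider itself uses Terraform's helper/resource retry utilities; `conn` and `params` are assumed to be the usual autoscaling client and delete input):
```
// Keep retrying the delete while AWS reports a scaling activity in flight,
// for at most five minutes.
deadline := time.Now().Add(5 * time.Minute)
for {
	_, err := conn.DeleteAutoScalingGroup(params)
	if err == nil {
		return nil
	}
	if awsErr, ok := err.(awserr.Error); ok &&
		awsErr.Code() == "ScalingActivityInProgress" && time.Now().Before(deadline) {
		time.Sleep(10 * time.Second) // something is still in progress; try again
		continue
	}
	return err
}
```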
On ASG creation, waits for up to 10m for desired_capacity or min_size
healthy nodes to show up in the group before continuing.
With CBD and proper HealthCheck tuning, this allows us to guarantee safe
ASG replacement.
* 'master' of github.com:hashicorp/terraform:
provider/aws: detach VPN gateway with proper ID
update CHANGELOG
provider/aws: Update ARN in instanceProfileReadResult
provider/aws: remove placement_group from acctest
core: module targeting
Added support for more complex image repos, such as images on a private registry that are stored as namespace/name
Depends on there being an existing placement group in the account called
"terraform-placement-group" - we'll need to circle back around to cover
this with AccTests after TF gets an `aws_placement_group` resource.
- Users
- Groups
- Roles
- Inline policies for the above three
- Instance profiles
- Managed policies
- Access keys
This is most of the data types provided by IAM. There are a few things
missing, but the functionality here is probably sufficient for 95% of
the cases. Makes a dent in #28.
Ingress and egress rules given a "-1" protocol don't have ports when
Read out of AWS. This results in hashing problems, as a local
config file might contain port declarations AWS can't ever return.
Rather than making ports optional fields, which carries with it a huge
headache trying to distinguish between zero-value attributes (e.g.
'to_port = 0') and attributes that are simply omitted, force the
user to opt-in when using the "-1" protocol. If they choose to use it,
they must now specify "0" for both to_port and from_port. Any other
configuration will error.
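A sketch of the opt-in check, using the usual `*schema.ResourceData` accessors (attribute names as in the rule schema):
```
// With protocol "-1" AWS never stores ports, so anything other than an
// explicit 0/0 would produce a perpetual diff; reject it up front.
protocol := d.Get("protocol").(string)
from := d.Get("from_port").(int)
to := d.Get("to_port").(int)
if protocol == "-1" && (from != 0 || to != 0) {
	return fmt.Errorf(
		"from_port (%d) and to_port (%d) must both be 0 when protocol is -1", from, to)
}
```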
Do directory expansion on filenames.
Add basic acceptance tests. Code coverage is 72.5%.
Uncovered code is uninteresting and/or impossible error cases.
Note that this required adding a knob to
helper/resource.TestStep to allow transient
resources.
* We now return an error when you set the script_path to
C:\Windows\Temp, explaining that this is currently not supported
* The fix in PR #1588 is converted to the updated setup in this PR
including the unit tests
Last thing to do is add a few tests for the WinRM communicator…
The reason the shebang is removed from these tests is that the
shebang is only needed for SSH/Linux connections. So in the new setup,
the shebang line is added in the SSH communicator instead of in the
resource provisioner itself…
This is needed as preparation for adding WinRM support. There is still
one error in the tests which needs another look, but other than that it
seems like we're now ready to start working on the WinRM part…
As a module author, I'd like to be able to create a module that includes
a key_pair. I don't care about the name, I only know I don't want it to
collide with anything else in the account.
This allows my module to be used multiple times in the same account
without having to do anything funky like adding a user-specified unique
name parameter.
Currently, if a record isn't found, we get an error like:
Couldn't find record: Record not found
This change improves the error message to add more context:
Couldn't find record ID (123456789) for domain (example.com): Record not found
Currently, we aren't correctly setting the IDs, and are setting both
`security_groups` and `vpc_security_group_ids`. As a result, we really only use
the former.
We also don't actually update the latter in the `update` method.
This PR fixes both issues, correctly reading `security_groups` vs.
`vpc_security_group_ids`, and allowing users to update the latter without
destroying the Instance when in a VPC.
As we've seen elsewhere, the SDK now wants nils instead of empty arrays
for collections.
Fixes #1696
Thanks @jstremick for pointing me in the right direction.
The upstream behavior here changed, and the request needs a `nil`
instead of an empty slice to indicate that we _don't_ want to filter on
Network ACL IDs.
Fixes #1634
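The distinction, sketched with current aws-sdk-go names (`ids` is assumed to be the plain []string of ACL IDs we may or may not want to filter on):
```
// nil means "don't filter on ACL IDs at all"; an empty slice would still be
// sent as a filter and match nothing.
var aclIDs []*string
if len(ids) > 0 {
	aclIDs = aws.StringSlice(ids)
}
resp, err := conn.DescribeNetworkAcls(&ec2.DescribeNetworkAclsInput{
	NetworkAclIds: aclIDs,
})
```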
Adds an "alias" field to the provider which allows creating multiple instances
of a provider under different names. This provides support for configurations
such as multiple AWS providers for different regions. In each resource, the
provider can be set with the "provider" field.
(thanks to Cisco Cloud for their support)
If reading an S3 bucket's state, and that bucket has been deleted, don't
fail with a 404 error. Instead, update the state to reflect that the
bucket does not exist. Fixes #1574.
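A sketch of the read-time handling, keyed off the HTTP status in `awserr.RequestFailure`:
```
_, err := s3conn.HeadBucket(&s3.HeadBucketInput{Bucket: aws.String(d.Id())})
if err != nil {
	if awsErr, ok := err.(awserr.RequestFailure); ok && awsErr.StatusCode() == 404 {
		// The bucket is gone: drop it from state instead of failing the read.
		log.Printf("[WARN] S3 bucket %q not found, removing from state", d.Id())
		d.SetId("")
		return nil
	}
	return err
}
```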
EIP with VPC only returns an allocationID. However, for standard we need
to look up the PublicIP. When we use an example for a standard EC2 instance
(here `t1.micro`):
```
resource "aws_instance" "example" {
ami = "ami-25773a24"
instance_type = "t1.micro"
}
resource "aws_eip" "ip" {
instance = "${aws_instance.example.id}"
}
```
then in this case, allocationID will be nil, but publicIP will be non-nil
(which is used later for associating the IP). So check for
allocationId only if the domain is `VPC`.
* master: (511 commits)
Update CHANGELOG.md
core: avoid diff mismatch on NewRemoved fields during -/+
Update CHANGELOG.md
update CHANGELOG
Fix minor error in index/count docs
terraform: remove debug
terraform: when pruning destroy, only match exact nodes, or exact counts
up version for dev
update CHANGELOG
terraform: prune tainted destroys if no tainted in state [GH-1475]
update CHANGELOG
config/lang: support math on variables through implicits
update CHANGELOG
update CHANGELOG
update CHANGELOG
providers/aws: set id outside if/else
providers/aws: set ID after creation
core: remove dead code from pre-deposed refactor
website: update LC docs to note name is optional
security_groups field expects a list of Security Group Group Names, not IDs
...
It doesn't need to be a List of Maps, it can just be a Map.
We're also safe to remove a previous workaround I stuck in there.
The config parsing is equivalent between a list of maps and a plain map,
so we just need a state migration to make this backwards compatible.
Fixes #1508
In a DESTROY/CREATE scenario, the plan diff will be run against the
state of the old instance, while the apply diff will be run against an
empty state (because the state is cleared when the destroy node does its
thing.)
For complex attributes, this can result in keys that seem to disappear
between the two diffs, when in reality everything is working just fine.
Same() needs to take into account this scenario by analyzing NewRemoved
and treating as "Same" a diff that does indeed have that key removed.
Fixes #1409
Resource set hash calculation is a bit of a devil's bargain when it
comes to optional, computed attributes.
If you omit the optional, computed attribute from the hash function,
changing it in an existing config is not properly detected.
If you include the optional, computed attribute in the hash and do not
specify a value for it in the config, then you'll end up with a
perpetual, unresolvable diff.
We'll need to think about how to get the best of both worlds, here, but
for now I'm switching us to the latter and documenting the fact that
changing these attributes requires manual `terraform taint` to apply.
These bugs were found by additional check added in #1443
* Reversed nil err check meant that block devices were broken :(
* Fixing the err check revealed a few missed pointer derefs
* Unlike instances, ephemeral block devices do come back in
`BlockDeviceMappings` from `DescribeLaunchConfigurations` calls, so
we need to recognize them and filter them properly. Even though
they're not set as computed, I'm doing a `d.Set` since it doesn't
hurt and it gives us the benefit of basic drift detection.
Route 53 records were silently erroring out when saving the records returned
from AWS, because they weren't being presented as an array of strings like we
expected.
Turns out AssociatePublicIPAddress was always being set, but the AWS
APIs don't like that when you're launching into EC2 Classic and return a
validation error at ASG launch time.
Fixes #1410
This removes `ForceNew` from `records` and `ttl`, and introduces a
`resourceAwsRoute53RecordUpdate` function. The `resourceAwsRoute53RecordUpdate`
falls through to the `resourceAwsRoute53RecordCreate` function, which utilizes
AWS `UPSERT` behavior and diffs for us.
`Name` and `Type` are used by AWS in the `UPSERT`, so only records with matching
`name` and `type` can be updated. Others are created as new, so we leave the
`ForceNew` behavior here.
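The UPSERT call itself is roughly this (sketched against the current aws-sdk-go route53 types; `rec` stands for the assembled `ResourceRecordSet`):
```
req := &route53.ChangeResourceRecordSetsInput{
	HostedZoneId: aws.String(zoneID),
	ChangeBatch: &route53.ChangeBatch{
		Changes: []*route53.Change{
			{
				// UPSERT creates the record when it is new and updates it in
				// place when Name and Type already match.
				Action:            aws.String("UPSERT"),
				ResourceRecordSet: rec,
			},
		},
	},
}
_, err := conn.ChangeResourceRecordSets(req)
```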
These changes should fix#1367:
* `ebs_optimized` gets `Computed: true` and set from `Read`
* `ephemeral_block_device` loses `Computed: true`
* explicitly set `root_block_device` to empty from `Read`
While I was in there (tm):
* Send pointers to `d.Set` so we can use its internal nil check.
If a given resource does not define an `Update` function, then all of
its attributes must be specified as `ForceNew`, lest applies fail with
"doesn't support update" like #1367.
This is something we can detect automatically, so this adds a check for
it when we validate provider implementations.
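A sketch of what such an automatic check can look like with helper/schema types (the real validation's wording and placement differ):
```
// A resource with no Update function can only honour changes by replacement,
// so every user-settable attribute must be marked ForceNew.
func checkForceNew(name string, r *schema.Resource) error {
	if r.Update != nil {
		return nil
	}
	for k, s := range r.Schema {
		if s.Computed && !s.Optional {
			continue // read-only attribute, never part of a user diff
		}
		if !s.ForceNew {
			return fmt.Errorf(
				"%s: attribute %q must be ForceNew because the resource has no Update", name, k)
		}
	}
	return nil
}
```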
This commit resolves an issue where the tenant-network api extension
does not exist. The caveat is that the user must either specify no
networks (single network environment) or can only specify UUIDs for
network configurations.
* upstream/master: (295 commits)
Update CHANGELOG.md
provider/aws: Allow DB Parameter group to change in RDS
return error if failed to set tags on Route 53 zone
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
...
* master: (167 commits)
return error if failed to set tags on Route 53 zone
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
core: docs for targeted operations
core: targeted operations
...
* upstream/master:
return error if failed to set tags on Route 53 zone
cleanups
provider/aws: Finish Tag support for Route 53 zone
provider/aws: Add tags to Route53 hosted zones
* master: (172 commits)
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
core: docs for targeted operations
core: targeted operations
user_data support
...
* master: (172 commits)
core: [tests] fix order dependent test
Fix hashcode for ASG test
provider/aws: Fix issue with tainted ASG groups failing to re-create
Don't error when reading s3 bucket with no tags
Avoid panics when DBName is not set
Add floating IP association in acceptance tests
Use env var OS_POOL_NAME as default for pool attribute
providers/heroku: Add heroku-postgres to example
docs: resource addressing
providers/heroku: Document environment variables
providers/heroku: Add region to example
Bugfix on floating IP assignment
Update CHANGELOG.md
update CHANGELOG
website: note on docker
core: formalize resource addressing
core: fill out context tests for targeted ops
core: docs for targeted operations
core: targeted operations
user_data support
...
* d.Set has a pointer nil check we can lean on
* need to be a bit more conservative about nil checks on nested structs;
(this fixes the RDS acceptance tests)
/cc @fanhaf
This commit changes how the network info is read from OpenStack.
It pulls all relevant information from server.Addresses and merges
it with the available information from the networks parameters.
The access_v4, access_v6, and floating IP information is then
determined from the result.
A MAC address parameter is also added since that information is
available in server.Addresses.
This commit allows the user to specify a network by name rather than
just uuid. This is done via the os-tenant-networks api extension.
This works for both neutron and nova-network.
This commit causes the resource to manage floating IPs by way of the
os-floating-ips API.
At the moment, it works with both nova-network and Neutron environments,
but if you use multiple Neutron networks, the network that supports the
floating IP must be listed first.
s3.GetBucketTagging returns an error if there are no tags associated
with a bucket. Consequently, any configuration with a tagless s3 bucket
would fail with an error, "the TagSet does not exist".
Handle that error more appropriately, interpreting it as an empty set of
tags.
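A sketch of the more forgiving read, keyed off the `NoSuchTagSet` error code; `flattenS3Tags` stands in for the provider's usual tag-flattening helper:
```
resp, err := s3conn.GetBucketTagging(&s3.GetBucketTaggingInput{
	Bucket: aws.String(d.Id()),
})
var tagSet []*s3.Tag
if err != nil {
	if awsErr, ok := err.(awserr.Error); !ok || awsErr.Code() != "NoSuchTagSet" {
		return err
	}
	// No tags on the bucket: treat it as an empty set rather than a failure.
} else {
	tagSet = resp.TagSet
}
if err := d.Set("tags", flattenS3Tags(tagSet)); err != nil {
	return err
}
```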