The `aws_availability_zones` data source tests were panicking. This fixes both tests:
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAvailabilityZones'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/31 15:47:39 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAvailabilityZones -timeout 120m
=== RUN TestAccAWSAvailabilityZones_basic
--- PASS: TestAccAWSAvailabilityZones_basic (12.56s)
=== RUN TestAccAWSAvailabilityZones_stateFilter
--- PASS: TestAccAWSAvailabilityZones_stateFilter (13.59s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 26.187s
```
* Added Step Function Activity & Step Function State Machine
* Added SFN State Machine documentation
* Added aws_sfn_activity & documentation
* Allowed import of sfn resources
* Added more checks to tests and fixed documentation
* Handled the update case of an SFN state machine (it might already be deleting)
* Removed the State Machine import test file
* Fixed the eventual consistency of the read-after-delete for SFN state machines (see the sketch below)
This adds a Meta field (similar to InstanceState.Meta) to InstanceDiff.
This allows providers to store arbitrary k/v data as part of a diff and
have it persist through to the Apply. This will be used by helper/schema
for timeout storage being done by @catsby.
The type here is `map[string]interface{}`. A couple notes:
* **Not using `string`**: The Meta field of InstanceState is a string
value. We've learned that forcing things to strings is bad. Let's
just allow types.
* **Primitives only**: Even though it is typed `interface{}`, the value must
  be able to cleanly cross the go-plugin RPC barrier as well as be
  encoded to a file as a Gob. Given these constraints, the value must
  consist only of primitive types and collections of them. No structs,
  functions, channels, etc.
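For illustration, a sketch of how a provider might populate the new field (keys and values hypothetical):

```
// Meta travels with the diff through to Apply, so everything in it must
// survive the go-plugin RPC boundary and Gob encoding: primitives and
// collections of primitives only.
diff.Meta = map[string]interface{}{
	"schema_version": 1,
	"timeouts": map[string]interface{}{
		"create": "10m",
		"delete": "5m",
	},
}
```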
The API asks you to send lowercase values but returns uppercase ones.
Here we lowercase the returned API values.
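The normalization itself is a one-liner in the read, along these lines (attribute and field names hypothetical):

```
// The API returns e.g. "HTTPS" for a value we sent as "https";
// lowercase it so the state matches the configuration.
d.Set("protocol", strings.ToLower(aws.StringValue(resp.Protocol)))
```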
There is no migration here because the field in question is nested in a
set, so the hash will change regardless. Anyone using this feature now
has it broken anyway.
Fixes 2 acceptance tests for the `aws_instance` data source
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSInstanceDataSource_SecurityGroups'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/31 12:12:15 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstanceDataSource_SecurityGroups -timeout 120m
=== RUN TestAccAWSInstanceDataSource_SecurityGroups
--- PASS: TestAccAWSInstanceDataSource_SecurityGroups (119.14s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 119.172s
```
```
$ make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSInstanceDataSource_tags'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/01/31 12:15:42 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSInstanceDataSource_tags -timeout 120m
=== RUN TestAccAWSInstanceDataSource_tags
--- PASS: TestAccAWSInstanceDataSource_tags (118.87s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 118.900s
```
The existing hash function for set items cannot generate consistent hashes when using both `Optional` and `Computed` on a schema field.
I tried to add this use case to the existing code base, but concluded it would be quite an endeavor.
That, together with the fact that this is the only field across all sets in the builtin providers/resources that uses both options at the same time, made me decide to change this single resource instead.
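A sketch of the workaround: give this one resource a custom Set hash function that skips the Optional+Computed field, so a server-populated value no longer perturbs the hash (field names hypothetical; `bytes`, `fmt`, and `helper/hashcode` imports assumed):

```
Set: func(v interface{}) int {
	m := v.(map[string]interface{})
	var buf bytes.Buffer
	buf.WriteString(fmt.Sprintf("%s-", m["name"].(string)))
	buf.WriteString(fmt.Sprintf("%d-", m["port"].(int)))
	// Deliberately omit the Optional+Computed field: once the server
	// fills it in, the same item would otherwise hash differently.
	return hashcode.String(buf.String())
},
```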
When switching from one Rancher server to another, we want Terraform
to recreate the Rancher resources. This currently leads to ugly `EOF` errors.
This patch resets resource IDs when they can't be found in the Rancher API.
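The pattern is the usual Terraform one, sketched here with a hypothetical client and lookup:

```
func resourceRancherEnvironmentRead(d *schema.ResourceData, meta interface{}) error {
	client := meta.(*rancherClient) // hypothetical client type
	env, err := client.GetEnvironment(d.Id())
	if err != nil {
		return err
	}
	if env == nil {
		// Gone from the API (e.g. after pointing at a new Rancher
		// server): clear the ID so Terraform plans a recreate
		// instead of failing with EOF later.
		d.SetId("")
		return nil
	}
	// ... set attributes from env ...
	return nil
}
```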
* Image and vhdcontainers are mutually exclusive.
* Fix IP configuration handling and update support for load balancer backend pools.
* Fix OS disk handling.
* Remove os_type from disk hash.
* Load balancer pools should not be computed.
* Add support for the overprovision property.
* Update documentation.
* Create acceptance test for scale set lb changes.
* Create acceptance test for scale set overprovisioning.
* OS-131 Updated dependencies to use ukcloud/govcloudair instead of hmrc/vmware-govcd
* OS-131 Fixed failing tests by adding package name to imports of ukcloud/govcloudair
* OS-131 Minor change to force Travis to re-build the PR
* Added server_side_encryption to the s3_bucket_object resource, including an associated acceptance test and documentation.
* Got acceptance tests passing.
* Made server_side_encryption a computed attribute, and only set the kms_key_id attribute if a non-default S3 master key is in use (see the sketch below).
* Ensured the KMS API is only interrogated when required.
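A sketch of the read side using the aws-sdk-go HeadObject response (resource wiring elided):

```
resp, err := s3conn.HeadObject(&s3.HeadObjectInput{
	Bucket: aws.String(bucket),
	Key:    aws.String(key),
})
if err != nil {
	return err
}
d.Set("server_side_encryption", resp.ServerSideEncryption)
// Only expose kms_key_id for a non-default (customer-managed) master
// key; for SSE-S3 or no encryption, SSEKMSKeyId is nil.
if resp.SSEKMSKeyId != nil {
	d.Set("kms_key_id", resp.SSEKMSKeyId)
}
```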
The old behavior in this situation was to simply delete the file. Since
we now have a lock on this file, we don't want to close or delete it, so
instead we truncate the file at offset 0.
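Sketched with the standard library, assuming `f` is the locked `*os.File`:

```
// Empty the file in place rather than deleting it, so the advisory
// lock held on the open handle stays valid.
if err := f.Truncate(0); err != nil {
	return err
}
if _, err := f.Seek(0, io.SeekStart); err != nil {
	return err
}
```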
Fix a number of related tests
Having the state files always created for locking breaks a lot of tests.
Most can be fixed by simply checking for state within the file, but a few
may still be writing state when they shouldn't.
Reading state used to assume that having a reader meant there should be a
valid state. Check for an empty file and return ErrNoState to
differentiate a bad file from an empty one.
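A minimal sketch of that check, assuming the state is buffered before decoding and ErrNoState is the sentinel described above:

```
var buf bytes.Buffer
if _, err := buf.ReadFrom(r); err != nil {
	return nil, err
}
if buf.Len() == 0 {
	// An empty file means no state yet, not a corrupt state.
	return nil, ErrNoState
}
return terraform.ReadState(&buf)
```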
Use a DynamoDB table to coordinate state locking in S3.
We use a simple strategy here: the lock is an item whose key is the
bucket/key of the state file. If the item already exists, acquiring the
lock fails.
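The strategy maps to a single conditional PutItem, roughly (table and attribute names illustrative):

```
_, err := dynClient.PutItem(&dynamodb.PutItemInput{
	TableName: aws.String(lockTable),
	Item: map[string]*dynamodb.AttributeValue{
		"LockID": {S: aws.String(fmt.Sprintf("%s/%s", bucket, key))},
	},
	// Fail the write, and therefore the lock, if the key already exists.
	ConditionExpression: aws.String("attribute_not_exists(LockID)"),
})
if err != nil {
	return fmt.Errorf("state file %s/%s is locked: %s", bucket, key, err)
}
```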
TODO: decide whether locks should expire automatically or require manual
intervention.
In order to provide locking for remote states, the Cache state and
remote.State need to expose Lock and Unlock methods. The actual locking
will be done by the remote.Client, which can implement the same
state.Locker methods.
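Hedged sketch of the interface shape (the exact signatures in the state package may differ):

```
// state.Locker, roughly: anything that can take and release a lock
// around state operations.
type Locker interface {
	Lock(reason string) error
	Unlock() error
}
```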
Add the Lock/Unlock methods to LocalState and BackupState.
The implementation for LocalState will be platform-specific. We will use
OS-native locking on the state files, specifically locking whichever
state file we intend to write to.
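On Unix the OS-native lock could be a non-blocking flock(2) on the open state file; Windows would need LockFileEx instead. A sketch, not the exact implementation:

```
// Unix-only: take an exclusive, non-blocking advisory lock on the
// state file we intend to write (field name hypothetical).
func (s *LocalState) lock() error {
	return syscall.Flock(int(s.stateFileOut.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
}
```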