Fixes #5338 (and I'm sure many others)
There is no use case for "simple" variables in Terraform at all, so any
time one is found it should be an error.
There is a _huge_ backwards incompatibility here that was not by design,
but that I'm sure a lot of people are relying on: in the
`template_file` data source, this bug allowed you to leave your
interpolations unescaped and still have them work. For example:
```
data "template_file" "foo" {
template = "${a}"
vars { a = 12 }
}
```
The above would work, but it shouldn't. The template should have to be
`"$${a}"` (to escape the interpolation).
Because of this backwards-compatibility break, I recommend holding this
until Terraform 0.8.0 and documenting it carefully. As part of this PR,
I've added some special error message notes.
* provider/google: Document MySQL versions for second generation instances
Google Cloud SQL has first-gen and second-gen instances with different
supported versions of MySQL; a configuration sketch follows these notes.
* provider/google: Increase SQL Admin operation timeout to 10 minutes
Creating SQL instances for MySQL 5.7 can take over 7 minutes,
so the timeout needs to be increased to allow the
google_sql_database_instance resource to successfully create.
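As a reference, a minimal sketch of a second-gen instance running MySQL 5.7, assuming the `database_version` and `settings.tier` arguments of `google_sql_database_instance` (name, region, and tier values are hypothetical):
```
resource "google_sql_database_instance" "master" {
  name   = "master-instance"
  region = "us-central1"

  # MYSQL_5_7 is only available on second-generation instances;
  # creating one of these is what can exceed the old timeout.
  database_version = "MYSQL_5_7"

  settings {
    # A second-generation machine tier; first-gen tiers look like "D0".
    tier = "db-n1-standard-1"
  }
}
```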
The underlying [go-getter](https://github.com/hashicorp/go-getter)
library supports S3 buckets and unarchiving. This adds mentions of both
to the module sources documentation.
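For example, a hedged sketch of what an S3-backed module source can look like (the bucket, region, and archive key are hypothetical):
```
module "vpc" {
  # go-getter's s3 getter downloads the object and, since it is an
  # archive, unpacks it before Terraform loads the module.
  source = "s3::https://s3-eu-west-1.amazonaws.com/examplecorp-modules/vpc.zip"
}
```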
This commit adds the ability to modify the `AutoMinorVersionUpgrade` property of the
Replication Group (which is enabled by default). A configuration sketch follows below.
Signed-off-by: Krzysztof Wilczynski <krzysztof.wilczynski@linux.com>
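A minimal sketch of turning the setting off, assuming the HCL attribute is named `auto_minor_version_upgrade` (identifiers and sizing values are hypothetical):
```
resource "aws_elasticache_replication_group" "example" {
  replication_group_id          = "tf-example"
  replication_group_description = "example replication group"
  node_type                     = "cache.m3.medium"
  number_cache_clusters         = 2
  port                          = 6379

  # The API enables this by default; this change lets Terraform manage it.
  auto_minor_version_upgrade = false
}
```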
* Adding private gateway and static route resources to the CloudStack provider
Testing the private gateway and static route resources requires a ROOT
account in CloudStack (see the sketch after this list).
* changes requested by reviewer
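A rough sketch of how the new resources might be wired together; the argument names below are assumptions based on the CloudStack API fields and are not confirmed by this PR, and the referenced `cloudstack_vpc.default` resource is not shown:
```
resource "cloudstack_private_gateway" "default" {
  # Assumed arguments: gateway, ip_address, netmask, vlan, vpc_id.
  gateway    = "10.0.0.1"
  ip_address = "10.0.0.2"
  netmask    = "255.255.255.252"
  vlan       = "200"
  vpc_id     = "${cloudstack_vpc.default.id}"
}

resource "cloudstack_static_route" "default" {
  # Assumed arguments: cidr and gateway_id.
  cidr       = "10.0.1.0/24"
  gateway_id = "${cloudstack_private_gateway.default.id}"
}
```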
Fixes #9654
Before the fix, I created an ASG with a schedule on it, went to the AWS
console, and deleted the schedule. A `terraform plan` then looked as follows:
```
% terraform plan
See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.
aws_launch_configuration.foobar: Refreshing state... (ID: terraform-test-foobar5)
aws_autoscaling_group.foobar: Refreshing state... (ID: terraform-test-foobar5)
aws_autoscaling_schedule.foobar: Refreshing state... (ID: foobar)
Error refreshing state: 1 error(s) occurred:
* aws_autoscaling_schedule.foobar: Unable to find Autoscaling Scheduled Action: []*autoscaling.ScheduledUpdateGroupAction(nil)
```
After the fix:
```
% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.
aws_launch_configuration.foobar: Refreshing state... (ID: terraform-test-foobar5)
aws_autoscaling_group.foobar: Refreshing state... (ID: terraform-test-foobar5)
aws_autoscaling_schedule.foobar: Refreshing state... (ID: foobar)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ aws_autoscaling_schedule.foobar
    arn:                    "<computed>"
    autoscaling_group_name: "terraform-test-foobar5"
    desired_capacity:       "0"
    end_time:               "2018-01-16T13:00:00Z"
    max_size:               "0"
    min_size:               "0"
    recurrence:             "<computed>"
    scheduled_action_name:  "foobar"
    start_time:             "2018-01-16T07:00:00Z"
Plan: 1 to add, 0 to change, 0 to destroy.
```
Tests run as expected:
```
% make testacc TEST=./builtin/providers/aws TESTARGS='-run=TestAccAWSAutoscalingSchedule_'
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/10/27 17:45:19 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/aws -v -run=TestAccAWSAutoscalingSchedule_ -timeout 120m
=== RUN TestAccAWSAutoscalingSchedule_basic
--- PASS: TestAccAWSAutoscalingSchedule_basic (140.94s)
=== RUN TestAccAWSAutoscalingSchedule_disappears
--- PASS: TestAccAWSAutoscalingSchedule_disappears (179.17s)
=== RUN TestAccAWSAutoscalingSchedule_recurrence
--- PASS: TestAccAWSAutoscalingSchedule_recurrence (186.72s)
=== RUN TestAccAWSAutoscalingSchedule_zeroValues
--- PASS: TestAccAWSAutoscalingSchedule_zeroValues (167.73s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/aws 674.530s
```
* provider/aws: data source for AWS Security Group (usage sketch after this list)
* provider/aws: add documentation for data source for AWS Security Group
* provider/aws: data source for AWS Security Group (improve if condition and syntax)
* fix fmt
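A minimal usage sketch of the new data source, assuming the `vpc_id` argument and `filter` block shown below (the variable and tag value are hypothetical):
```
data "aws_security_group" "selected" {
  # Look up an existing security group by its Name tag within a known VPC.
  vpc_id = "${var.vpc_id}"

  filter {
    name   = "tag:Name"
    values = ["example"]
  }
}
```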
* provider/scaleway: fix scaleway_volume_attachment with count > 1
Since Scaleway requires servers to be powered off to attach volumes to them, we need
to make sure that we don't power down a server twice, or power up a server while
it's supposed to be modified.
Sadly, Terraform doesn't seem to offer serialization primitives for use cases like
this, but putting the code in question behind a `sync.Mutex` does the trick, too.
Fixes #9417
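For context, a sketch of the configuration shape that triggered the issue, with several attachments against one server; it assumes an existing `scaleway_server.example` resource (not shown), and the names and sizes are hypothetical:
```
resource "scaleway_volume" "data" {
  count      = 2
  name       = "data-${count.index}"
  size_in_gb = 20
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "data" {
  # Both attachments target the same server, so the power-off/power-on
  # cycle inside the provider has to be serialized.
  count  = 2
  server = "${scaleway_server.example.id}"
  volume = "${element(scaleway_volume.data.*.id, count.index)}"
}
```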
* provider/scaleway: use mutexkv to lock per-resource
Following @dcharbonnier's suggestion. Thanks!
* provider/scaleway: cleanup waitForServerState signature
* provider/scaleway: store serverID in var
* provider/scaleway: correct imports
* provider/scaleway: increase timeouts
* Improve messaging when storage account isn't found.
* Add client for finding resources when you don't know their resource group.
* Add function to find Storage Account resource group name.
* Use the storage account's resource group, not the virtual machine's resource group, when deleting VHDs.
* Add description of storage account ID for clarity.
* Improve VHD deletion test when storage account is in different resource group.
* Use common function for ID parsing of storage account.
* Add AWS Prefix List data source (usage sketch below).
Includes an acceptance test and documentation for the data source.
* Improve error message when no prefix list is matched.
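A minimal usage sketch, assuming the data source accepts a `name` argument and exports `cidr_blocks` (the region in the prefix list name is hypothetical):
```
data "aws_prefix_list" "s3" {
  # The well-known name of the S3 prefix list in the chosen region.
  name = "com.amazonaws.us-west-2.s3"
}

output "s3_cidr_blocks" {
  value = "${data.aws_prefix_list.s3.cidr_blocks}"
}
```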