* initial commit - 101-vm-from-user-image
* changed branch name
* not deploying - storage problems
* provisions vm but image not properly prepared
* storage not correct
* provisions properly
* changed main.tf to azuredeploy.tf
* added tfvars and info for README
* ignored tfvars and corrected the file extension
* added CI config; added sane defaults for variables; updated deployment script; added Mac-specific deployment for local testing
* made deploy.sh executable
* made deploy files executable
* added CI files; changed vars
* prep for PR
* removal of old folder
* prep for PR
* wrong args for Travis
* more PR prep
* updated README
* commented out variables in terraform.tfvars
* Topic 101 vm from user image (#2)
* initial commit - 101-vm-from-user-image
* added tfvars and info for README
* added CI config; added sane defaults for variables; updated deployment script; added Mac-specific deployment for local testing
* prep for PR
* added new template
* oops, left off master
* prep for PR
* correct repository for destination
* renamed scripts to be more intuitive; added check for Docker
* merge vm simple; vm from image
* initial commit
* deploys locally
* updated deploy
* changed to allow HTTP & HTTPS (like the ARM template)
* changed host_name & the host_name variable description
* prep for PR
* took out armviz button and made minor README changes
* changed host_name
* fixed merge conflicts
* changed host_name variable
* updating HashiCorp's changes to the merged simple Linux branch
* updating files to merge with master and prep for the HashiCorp PR
* reverted to HashiCorp's .travis.yml
* terraform graph
* removing pending example
* cleaned up deploy.ci.sh
* merge master
* added new constructs/naming for deploy scripts, etc.
* suppress az login output
* removed .tfvars and provider.tf; updated prev merge
* reverted .travis.yml back to HashiCorp's
* Reverting back to the HashiCorp Travis file
This is a fix for PR #11040. The code here lowercases the name and/or prefix before sending it to the AWS API and writing it to the Terraform state. This means the state will match the actual resource name and the diff can converge.
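As a rough sketch of the idea (with an invented helper name, not the provider's actual code), the normalization amounts to:
```
package main

import (
	"fmt"
	"strings"
)

// normalizeName lowercases a name or name prefix before it is sent to the
// AWS API and written to Terraform state, so the stored value matches the
// resource AWS actually creates. Illustrative only; the name is ours.
func normalizeName(nameOrPrefix string) string {
	return strings.ToLower(nameOrPrefix)
}

func main() {
	// With "my-bucket" in state instead of "My-Bucket", the next plan no
	// longer shows a perpetual diff against the lowercased real resource.
	fmt.Println(normalizeName("My-Bucket"))
}
```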
Previously, Nomad acceptance tests would look for a `404` response from the Nomad API when querying a Nomad job after it had been stopped. For a while now, Nomad has cached stopped jobs such that they are still returnable from the API but have `"dead"` as their status.
```
$ make testacc TEST=./builtin/providers/nomad
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/03 16:27:31 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/nomad -v -timeout 120m
=== RUN TestProvider
--- PASS: TestProvider (0.00s)
=== RUN TestResourceJob_basic
--- PASS: TestResourceJob_basic (0.03s)
=== RUN TestResourceJob_refresh
--- PASS: TestResourceJob_refresh (0.04s)
=== RUN TestResourceJob_disableDestroyDeregister
--- PASS: TestResourceJob_disableDestroyDeregister (0.05s)
=== RUN TestResourceJob_idChange
--- PASS: TestResourceJob_idChange (0.06s)
=== RUN TestResourceJob_parameterizedJob
--- PASS: TestResourceJob_parameterizedJob (0.02s)
PASS
ok github.com/hashicorp/terraform/builtin/providers/nomad 0.222s
```
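A minimal sketch of the revised assertion, using invented names rather than the real Nomad API client, just to show the status handling:
```
package main

import "fmt"

// jobIsStopped captures the updated check: a stopped job may either be
// absent from the API (a 404) or still be returned with status "dead".
// The function and its arguments are illustrative, not the test's real code.
func jobIsStopped(notFound bool, status string) bool {
	return notFound || status == "dead"
}

func main() {
	fmt.Println(jobIsStopped(false, "dead"))    // true: cached but dead
	fmt.Println(jobIsStopped(true, ""))         // true: classic 404
	fmt.Println(jobIsStopped(false, "running")) // false: still running
}
```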
* Allowing for volumes to be created with private ProfitBricks images without image password or ssh keys
* Acceptance test fix
* Errorf formatting fix
* Dependencies update
We've been incorrectly validating (or not validating at all) the
requirement that certain blocks be followed by a name string, to prohibit
e.g. this:
variable {}
and:
variable = ""
Before this change we were catching this for most constructs only if
there were no _valid_ blocks of the same name in the same file. For
modules in particular, we were not catching this at all.
Now we detect this for all kinds of blocks (resources had a pre-existing
check, so aren't touched here) and produce a different error message
depending on which of the above incorrect forms is used.
This fixes #13575.
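A simplified illustration of the rule (invented names, not Terraform's real config loader), showing how the two incorrect forms above map to different errors:
```
package main

import "fmt"

// validateNamedBlock mimics the check described above: an item such as
// "variable" or "module" must be a block and must carry exactly one name
// label. Purely illustrative.
func validateNamedBlock(kind string, isBlock bool, labels []string) error {
	if !isBlock {
		// the `variable = ""` form
		return fmt.Errorf("%s must be a block with a name, not an assignment", kind)
	}
	if len(labels) != 1 {
		// the `variable {}` form
		return fmt.Errorf("%s block must be followed by a name string", kind)
	}
	return nil
}

func main() {
	fmt.Println(validateNamedBlock("variable", false, nil))          // assignment form
	fmt.Println(validateNamedBlock("variable", true, nil))           // missing name
	fmt.Println(validateNamedBlock("module", true, []string{"vpc"})) // <nil>: valid
}
```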
Unset the ID in case the backend service cannot be created. This basically
updates these lines of code to match the more modern style that is being
used, e.g., for the google_compute_instance resource.
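A self-contained sketch of that pattern (the stub type and surrounding names are ours, not the provider's actual code):
```
package main

import (
	"errors"
	"fmt"
)

// stateStub stands in for the small slice of Terraform's schema.ResourceData
// used here; purely illustrative.
type stateStub struct{ id string }

func (d *stateStub) SetId(id string) { d.id = id }

// createBackendService shows the pattern: if creation fails, clear the ID
// with SetId("") so no half-created backend service lingers in state.
func createBackendService(d *stateStub, create func() (string, error)) error {
	id, err := create()
	if err != nil {
		d.SetId("")
		return fmt.Errorf("error creating backend service: %s", err)
	}
	d.SetId(id)
	return nil
}

func main() {
	d := &stateStub{id: "stale-id"}
	_ = createBackendService(d, func() (string, error) { return "", errors.New("quota exceeded") })
	fmt.Printf("id after failed create: %q\n", d.id) // ""
}
```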
The current google_compute_url_map resource already supports backend
buckets out of the box: just pass the self_link of a backend bucket as
you would pass the self_link of a backend service.
This adds some example code as well.
When trying to run the example code with the most recent Terraform
master, you get the following errors:
```
2 error(s) occurred:
* google_compute_backend_service.home: "region": [REMOVED] region has been removed as it was never used
* google_compute_backend_service.login: "region": [REMOVED] region has been removed as it was never used
```
Hence this removes it from the documentation.
There's a pre-existing Terraform presence on both Server Fault and
Stack Overflow that seems to be generating good questions and answers, so
here we'll try to send users to the right sites to ask questions and
encourage them to be good participants in those communities.