* Add scaleway provider
This PR allows the entire Scaleway stack to be managed with Terraform.
Example usage looks like this:
```hcl
provider "scaleway" {
  api_key      = "snap"
  organization = "snip"
}

resource "scaleway_ip" "base" {
  server = "${scaleway_server.base.id}"
}

resource "scaleway_server" "base" {
  name = "test"

  # Ubuntu 14.04
  image = "aecaed73-51a5-4439-a127-6d8229847145"
  type  = "C2S"
}

resource "scaleway_volume" "test" {
  name       = "test"
  size_in_gb = 20
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  server = "${scaleway_server.base.id}"
  volume = "${scaleway_volume.test.id}"
}

resource "scaleway_security_group" "base" {
  name        = "public"
  description = "public gateway"
}

resource "scaleway_security_group_rule" "http-ingress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}

resource "scaleway_security_group_rule" "http-egress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "outbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly, this is not part of the official Scaleway API just yet.
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
Following @stack72's suggestion, rename the provider's api_key for more clarity.
* Make tests less chatty by using custom logger
These tests run on every Travis build, causing additional noise and a
(negligible) speed decrease. However, since the advent of internal
plugins, these tests are unnecessary: each file only carries a
package declaration, so no tests are actually executed!
* Grafana provider
* grafana_data_source resource.
Allows data sources to be created in Grafana. Supports all data source
types that are accepted in the current version of Grafana, and will
support any future ones that fit into the existing structure.
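As a rough sketch of what this might look like in configuration (the argument names below are assumptions for illustration, not quoted from this PR):
```hcl
# Sketch only: argument names such as "type", "name", "url" and
# "database_name" are assumed; values are placeholders.
resource "grafana_data_source" "metrics" {
  type          = "influxdb"
  name          = "metrics"
  url           = "http://influxdb.example.net:8086/"
  database_name = "telegraf"
}
```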
* Vendoring of apparentlymart/go-grafana-api
This is in anticipation of adding a Grafana provider plugin.
* grafana_dashboard resource
* Website documentation for the Grafana provider.
This provider will have logical resources that allow Terraform to "manage"
randomness as a resource, producing random numbers on create and then
retaining the outcome in the state so that it will remain consistent
until something explicitly triggers generating new values.
Managing randomness in this way allows configurations to do things like
random distributions and IDs without causing "perma-diffs".
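For illustration, a logical resource in this style might look like the sketch below. The `random_id` resource name and its `byte_length`/`keepers` arguments are assumptions here, not confirmed by this changelog:
```hcl
# Sketch only: resource and argument names are assumed.
resource "random_id" "server" {
  byte_length = 8

  # Changing any value in "keepers" would explicitly trigger
  # generation of a new random value; otherwise the result stored
  # in the state stays stable across runs.
  keepers = {
    ami_id = "ami-12345678"
  }
}
```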
Here is an example that will set up the following:
+ An SSH key resource.
+ A virtual server resource that uses an existing SSH key.
+ A virtual server resource using both an existing SSH key and a Terraform-managed SSH key (created as "test_key_1" in the example below).
(Create this as sl.tf and run terraform commands from this directory.)
```hcl
provider "softlayer" {
  username = ""
  api_key  = ""
}

resource "softlayer_ssh_key" "test_key_1" {
  name       = "test_key_1"
  public_key = "${file(\"~/.ssh/id_rsa_test_key_1.pub\")}"

  # Windows example (backslashes must be escaped in HCL strings):
  # public_key = "${file(\"C:\\ssh\\keys\\path\\id_rsa_test_key_1.pub\")}"
}

resource "softlayer_virtual_guest" "my_server_1" {
  name                 = "my_server_1"
  domain               = "example.com"
  ssh_keys             = ["123456"]
  image                = "DEBIAN_7_64"
  region               = "ams01"
  public_network_speed = 10
  cpu                  = 1
  ram                  = 1024
}

resource "softlayer_virtual_guest" "my_server_2" {
  name                 = "my_server_2"
  domain               = "example.com"
  ssh_keys             = ["123456", "${softlayer_ssh_key.test_key_1.id}"]
  image                = "CENTOS_6_64"
  region               = "ams01"
  public_network_speed = 10
  cpu                  = 1
  ram                  = 1024
}
```
You'll need to provide your SoftLayer username and API key,
so that Terraform can connect. If you don't want to put
credentials in your configuration file, you can leave them
out:
```hcl
provider "softlayer" {}
```
...and instead set these environment variables:
- **SOFTLAYER_USERNAME**: Your SoftLayer username
- **SOFTLAYER_API_KEY**: Your API key
This introduces a provider for Cobbler. Cobbler manages bare-metal
deployments and, to some extent, virtual machines. This initial
commit supports the following resources: distros, profiles, systems,
kickstart files, and snippets.
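As a sketch of what managing such resources could look like (the resource and argument names below are assumptions based on Cobbler's object model, not taken from this changelog):
```hcl
# Sketch only: names, versions and paths are illustrative assumptions.
resource "cobbler_distro" "ubuntu" {
  name       = "ubuntu-trusty-x86_64"
  breed      = "ubuntu"
  os_version = "trusty"
  arch       = "x86_64"
  kernel     = "/var/www/cobbler/ks_mirror/ubuntu-14.04-x86_64/linux"
  initrd     = "/var/www/cobbler/ks_mirror/ubuntu-14.04-x86_64/initrd.gz"
}

resource "cobbler_profile" "ubuntu" {
  name   = "ubuntu"
  distro = "${cobbler_distro.ubuntu.name}"
}
```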
This brings across the following resources for Triton from the
joyent/triton-terraform repository, and converts them to the canonical
Terraform style, introducing Terraform-style documentation and
acceptance tests which run against the live API rather than the local
APIs:
- triton_firewall_rule
- triton_machine
- triton_key
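A minimal sketch of how these resources might be wired together (argument names are assumptions, and the package/image values are placeholders):
```hcl
# Sketch only: argument names and values are illustrative.
resource "triton_key" "deploy" {
  name = "deploy-key"
  key  = "${file(\"~/.ssh/id_rsa.pub\")}"
}

resource "triton_machine" "web" {
  name    = "web-01"
  package = "g3-standard-0.25-smartos"
  image   = "842e6fa6-6e9b-11e5-8402-1b490459e334"
}
```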
- Add documentation for resources
- Rename files to match standard patterns
- Add acceptance tests for resource groups
- Add acceptance tests for vnets
- Remove ARM_CREDENTIALS file. As discussed, this does not appear to be
  an Azure standard, and there is scope for confusion with the
  azureProfile.json file which the CLI generates. If a standard emerges
  we can reconsider this.
- Validate credentials in the schema
- Remove storage testing artefacts
- Use ARM IDs as Terraform IDs
- Use autorest hooks for logging
This commit brings some of the work over from #3808, but rearchitects to
use a separate provider for Azure Resource Manager. This is in line with
the decisions made by the Azure Powershell Cmdlets, and is important for
usability since the sets of required fields change between the ASM and
ARM APIs.
Currently `azurerm_resource_group` and `azurerm_virtual_network` are
implemented; more resources will follow.
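To make the shape of the new provider concrete, a minimal configuration for the two implemented resources might look like this (attribute names assumed from the ARM resource shapes, not quoted from this commit):
```hcl
# Sketch only: attribute names are assumptions; values are placeholders.
resource "azurerm_resource_group" "test" {
  name     = "production"
  location = "West US"
}

resource "azurerm_virtual_network" "test" {
  name                = "virtualNetwork1"
  address_space       = ["10.0.0.0/16"]
  location            = "West US"
  resource_group_name = "${azurerm_resource_group.test.name}"
}
```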
As of this commit this provider has only logical resources that allow
the creation of private keys, self-signed certs and certificate requests.
These can be useful when creating other resources that use TLS
certificates, such as AWS Elastic Load Balancers.
Later it could grow to include support for real certificate provision from
CAs using the LetsEncrypt ACME protocol, once it is stable.
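As a sketch of how these logical resources could compose, generating a key and then self-signing a certificate with it (resource and argument names are assumptions for illustration):
```hcl
# Sketch only: resource and argument names are assumptions.
resource "tls_private_key" "example" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "example" {
  key_algorithm   = "RSA"
  private_key_pem = "${tls_private_key.example.private_key_pem}"

  subject {
    common_name  = "example.com"
    organization = "Example, Inc"
  }

  validity_period_hours = 8760

  allowed_uses = [
    "key_encipherment",
    "digital_signature",
    "server_auth",
  ]
}
```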
Only the azure_instance is fully working (for both Linux and Windows
instances) now, but it needs some tests. network and disk are pretty much
empty, but the idea is clear, so they will not take too much time…
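For reference, a hypothetical azure_instance configuration might look roughly like this (the attribute set is still in flux per the note above, so every name below is an assumption):
```hcl
# Rough sketch: all attribute names and values here are assumptions.
resource "azure_instance" "web" {
  name     = "terraform-test"
  image    = "Ubuntu Server 14.04 LTS"
  size     = "Basic_A1"
  location = "West US"
  username = "terraform"
}
```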
The commit is pretty complete and has a tested/working provisioner for
both SSH and WinRM. There are a few tests, but we may need a few more
for better coverage. Docs are also included…
Docker's API is huge and only a small subset is currently implemented,
but this is expected to grow over time. Currently it's enough to
satisfy the use cases of probably 95% of Docker users.
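As an illustration of the kind of use case covered by the current subset (resource and argument names are assumed for this sketch):
```hcl
# Sketch only: resource and argument names are assumptions.
resource "docker_image" "nginx" {
  name = "nginx:latest"
}

resource "docker_container" "web" {
  name  = "web"
  image = "${docker_image.nginx.latest}"

  ports {
    internal = 80
    external = 8080
  }
}
```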
I'm preparing this initial pull request as a preview step for feedback.
My ideal scenario would be to develop this within a branch in the main
repository; the more eyes and testing and pitching in on the code, the
better (this would avoid a merge-request-to-the-merge-request scenario,
as I figure this will be built up over the longer term, even before
a merge into master).
Unit tests do not exist yet. Right now I've just been focused on getting
initial functionality ported over. I've been testing each option
extensively via Docker's inspect capabilities.
This code (C)2014-2015 Akamai Technologies, Inc. <opensource@akamai.com>