* vendor: update gopkg.in/ns1/ns1-go.v2
* provider/ns1: Port the ns1 provider to Terraform core
* docs/ns1: Document the ns1 provider
* ns1: rename remaining nsone -> ns1 (#10805)
* Ns1 provider (#11300)
* provider/ns1: Flesh out support for meta structs.
Following the structure outlined by @pashap.
Using reflection to reduce copy/paste.
Putting metas inside single-item lists. This is clunky, but I couldn't
figure out how else to have a nested struct. Maybe the Terraform people
know a better way?
Inside the meta struct, all fields are always written to the state; I
can't figure out how to omit fields that aren't used. This is not just
verbose, it actually causes issues (for example, you can't have both "up"
and "up_feed" set).
Also some minor other changes:
- Add "terraform" import support to records and zones.
- Create helper class StringEnum.
* provider/ns1: Make fmt
* provider/ns1: Remove stubbed out RecordRead (used for testing metadata change).
* provider/ns1: Need to get interface that m contains from Ptr Value with Elem()
* provider/ns1: Use empty string to indicate no feed given.
* provider/ns1: Remove old record.regions fields.
* provider/ns1: Removes redundant testAccCheckRecordState
* provider/ns1: Moves account permissions logic to permissions.go
* provider/ns1: Adds tests for team resource.
* provider/ns1: Move remaining permissions logic to permissions.go
* provider/ns1: Adds datasource.config
* provider/ns1: Small clean up of datafeed resource tests
* provider/ns1: removes testAccCheckZoneState in favor of explicit name check
* provider/ns1: More renaming of nsone -> ns1
* provider/ns1: Comment out metadata for the moment.
* Ns1 provider (#11347)
* Fix the removal of empty containers from a flatmap
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration. There was also an off-by-one error in
the prefix check which could match the incorrect keys.
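Not the actual flatmap code, but the shape of the prefix check involved looks roughly like this; the trailing "." is what keeps a prefix such as "a.b" from also matching a sibling key like "a.bc.0":
```
package flatmapsketch

import "strings"

// keysUnder returns the keys that belong to the container named by prefix.
func keysUnder(m map[string]string, prefix string) []string {
	var keys []string
	for k := range m {
		if k == prefix || strings.HasPrefix(k, prefix+".") {
			keys = append(keys, k)
		}
	}
	return keys
}
```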
* provider/ns1: Adds ns1 go client through govendor.
* provider/ns1: Removes unused debug line
* docs/ns1: Adds docs around apikey/datasource/datafeed/team/user/record.
* provider/ns1: Gets go vet green
* Importing the OpsGenie SDK
* Adding the goreq dependency
* Initial commit of the OpsGenie / User provider
* Refactoring to return a single client
* Adding an import test / fixing a copy/paste error
* Adding support for OpsGenie docs
* Scaffolding the user documentation for OpsGenie
* Adding a TODO
* Adding the User data source
* Documentation for OpsGenie
* Adding OpsGenie to the internal plugin list
* Adding support for Teams
* Documentation for OpsGenie Teams
* Validation for Teams
* Removing Description for now
* Optional fields for a User: Locale/Timezone
* Removing an implemented TODO
* Running makefmt
* Downloading about half the internet
Someone witty might simply sign this commit with "npm install"
* Adding validation to the user object
* Fixing the docs
* Adding a test creating multiple users
* Prompting for the API Key if it's not specified
* Added a test for multiple users / requested changes
* Fixing the linting
* Initial checkin for PR request
* Added a provider argument to control whether TLS certificate verification is skipped, settable via the provider configuration or an environment variable
* Initial check-in to use refactored module
* Check-in of a very minimal MVP host create/delete test, which works and validates basic host creation and deletion
* Check in working support for creating hosts with variables
* Checking in work to date
* Remove code that causes travis CI to fail while I debug
* Adjust create to accept multiple values
* Back on track. Working basic tests. go-icinga2-api needs more tests too
* Squashing
* Check in refactored hostgroup support
* Check in refactored check_command, hosts, and hostgroup with a few tests
* Checking in service code
* Add in dependency for icinga2 provider
* Add documentation. Refactor, fix, and extend based on feedback from HashiCorp
* Added checking and validation around invalid URL and unavailable server
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple but it's
a key part of how resource nodes get connected to provider nodes so
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.
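As a sketch of what such a child program might look like under one plausible shape of that protocol (a JSON object of strings on stdin, a JSON object of strings on stdout; the exact protocol is defined in the provider docs, and the keys used here are made up):
```
package main

import (
	"encoding/json"
	"os"
	"strings"
)

func main() {
	// Read the query object handed to us on stdin.
	var query map[string]string
	if err := json.NewDecoder(os.Stdin).Decode(&query); err != nil {
		os.Exit(1)
	}

	// Produce a flat string map as the result.
	result := map[string]string{
		"greeting": "Hello, " + query["name"],
		"shouted":  strings.ToUpper(query["name"]),
	}
	if err := json.NewEncoder(os.Stdout).Encode(result); err != nil {
		os.Exit(1)
	}
}
```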
* website: documentation for the "external" provider
Fixes #10412
The context wasn't properly adding variable values to the Interpolator
instance which made it so that the `console` command couldn't access
variables set via tfvars and the CLI.
This also adds better test coverage for this in the command package itself.
Fixes #10337
The `reset_bold` escape code (21) causes the text on Windows command
prompts to just become invisible. `reset` does the same job for us in
this scenario so do that.
Add `terraform debug json2dot` to convert debug log graphs to dot
format. This is not meant to be in place of more advanced debug
visualization, but may continue to be a useful way to work with the
debug output.
The dot format generation was done with a mix of code from the terraform
package and the dot package. Unify the dot generation code and move it
into the dag package.
Use an intermediate structure to allow a dag.Graph to marshal itself
directly. This structure will be able to marshal directly to JSON, or be
translated to dot format. This way we can record more information about
the graph in the debug logs, and provide a way to translate those logged
structures to dot, which is convenient for viewing the graphs.
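A hypothetical sketch of what such an intermediate, marshal-friendly structure could look like; the type and field names are illustrative, not the actual dag package types:
```
package dagsketch

import (
	"fmt"
	"strings"
)

type marshalVertex struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

type marshalEdge struct {
	Source string `json:"source"`
	Target string `json:"target"`
}

type marshalGraph struct {
	Vertices []marshalVertex `json:"vertices"`
	Edges    []marshalEdge   `json:"edges"`
}

// dot renders the same structure as a trivial dot digraph.
func (g *marshalGraph) dot() string {
	var b strings.Builder
	b.WriteString("digraph {\n")
	for _, e := range g.Edges {
		fmt.Fprintf(&b, "  %q -> %q\n", e.Source, e.Target)
	}
	b.WriteString("}\n")
	return b.String()
}
```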
This turns the new graphs on by default and puts the old graphs behind a
flag `-Xlegacy-graph`. This effectively inverts the current 0.7.x
behavior with the new graphs.
We've incubated most of these for a few weeks now. We've found issues,
we've fixed them, and we've been using these graphs internally for
a while without any major issue. It's time to default them on and make
them part of a beta.
Fixes #7774
This modifies the `import` command to load configuration files from the
pwd. This also augments the configuration loading section for the CLI to
have a new option (default false, same as old behavior) to
allow directories with no Terraform configurations.
For import, we allow directories with no Terraform configurations so
this option is set to true.
Implement debugInfo and the DebugGraph
DebugInfo will be a global variable through which graph debug
information can be written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and noop with a nil receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and sync'ed individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.
The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer with Printf and Write methods to add to this buffer.
The buffer will be written to an accompanying file in the debug archive
along with the graph.
This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.
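As a sketch only (the real definitions live in the terraform package and may differ in detail), the interface and a node implementing it might look like:
```
package debugsketch

import "fmt"

// GraphNodeDebugger is implemented by nodes that want to annotate their
// entry in the debug graph with extra information.
type GraphNodeDebugger interface {
	NodeDebug() string
}

// exampleNode is a made-up node type used only for illustration.
type exampleNode struct {
	Addr  string
	Count int
}

func (n *exampleNode) NodeDebug() string {
	return fmt.Sprintf("addr=%s count=%d", n.Addr, n.Count)
}
```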
The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
Fixes #7975
This changes the InputMode for the CLI to always be:
InputModeProvider | InputModeVar | InputModeVarUnset
Which means:
* Ask for provider variables
* Ask for user variables _that are not already set_
The change is the latter point. Before, we'd only ask for variables if
zero were given. This forces the user to either have no variables set
via the CLI, env vars, tfvars or ALL variables, but no in between. As
reported in #7975, this isn't expected behavior.
The new change makes it so that unset variables are always asked for.
Users can retain the previous behavior by setting `-input=false`. This
would ensure that variables set by external sources cover all cases.
To reduce the risk of secret exposure via Terraform state and log output,
we default to creating a relatively-short-lived token (20 minutes) such
that Vault can, where possible, automatically revoke any retrieved
secrets shortly after Terraform has finished running.
This has some implications for usage of this provider that will be spelled
out in more detail in the docs that will be added in a later commit, but
the most significant implication is that a plan created by "terraform plan"
that includes secrets leased from Vault must be *applied* before the
lease period expires to ensure that the issued secrets remain valid.
No resources yet. They will follow in subsequent commits.
Fixes #5409
I didn't expect this to be such a rabbit hole!
Based on git history, it appears that for "historical reasons"(tm),
setting up the various `state.State` structures for a plan were
_completely different logic_ than a normal `terraform apply`. This meant
that it was skipping things like disabling backups with `-backup="-"`.
This PR unifies loading from a plan to the normal state setup mechanism.
A few tests that were failing prior to this PR were added, no existing
tests were changed.
When passing a bool type to a variable such as `-var foo=true`, the CLI
would parse this as a `bool` type which Terraform core cannot handle.
It would then error with an invalid type error.
This changes the handling to convert the bool to its literal string
value given on the command-line.
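An illustration of that conversion (not the actual CLI code): if a parsed flag value came out as a bool, keep the literal string so core only ever sees strings.
```
package varsketch

// normalizeFlagValue keeps the raw string whenever the parsed value is a
// bare bool, e.g. `-var foo=true` stays the string "true".
func normalizeFlagValue(raw string, parsed interface{}) interface{} {
	if _, ok := parsed.(bool); ok {
		return raw
	}
	return parsed
}
```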
This creates a standard package and interface for defining, querying,
setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing our
experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies and makes it easy to grab CLI flags to enable/disable
experiments as well as env vars! This way defining an experiment is just
a couple lines of code (documented on the package).
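A minimal sketch of that pattern, with made-up names (the real package's API may differ): each experiment is a package-level value referenced directly, so removing it produces compile errors at every use site.
```
package experimentsketch

// ID identifies a single experiment.
type ID string

// XHypotheticalGraph is a made-up experiment used only for illustration.
const XHypotheticalGraph ID = "hypothetical-graph"

var enabled = map[ID]bool{}

// SetEnabled flips an experiment on or off, e.g. from a -X flag or env var.
func SetEnabled(id ID, v bool) { enabled[id] = v }

// Enabled reports whether an experiment is currently turned on.
func Enabled(id ID) bool { return enabled[id] }
```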
Since it is still very much possible for this to cause problems, this
can be used to disable the shadow graph. We'll purposely not document
this since the goal is to remove this flag as we become more confident
with it.
Wait for our expected number of goroutines to be staged and ready, then
allow Run() to complete. Use a high-water-mark counter to ensure we
never exceeded the max expected concurrent goroutines at any point
during the run.
When we parse a map from HCL, it's decoded into a list of maps because
HCL allows declaring a key multiple times to implicitly add it to a
list. Since there's no terraform variable configuration that supports
this structure, we can always flatten a []map[string]interface{} value
to a map[string]interface{} and remove any type errors trying to apply
that value.
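A sketch of that flattening step (later keys winning on conflict is a choice made only for this illustration):
```
package hclsketch

// flattenMapSlice collapses HCL's list-of-maps form into a single map.
func flattenMapSlice(v interface{}) interface{} {
	list, ok := v.([]map[string]interface{})
	if !ok {
		return v
	}
	flat := make(map[string]interface{})
	for _, m := range list {
		for k, val := range m {
			flat[k] = val
		}
	}
	return flat
}
```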
Don't try to parse a variable as HCL if the value can be parsed as a
single number. HCL will always attempt to convert the value to a number,
even if we later find the configured variable's type to be a string.
This commit adds the `state rm` command for removing an address from
state. It is the result of a rebase from pull-request #5953 which was
lost at some point during the Terraform 0.7 feature branch merges.
Fix checksum issue with remote state
If we read a state file with "null" objects in a module and they become
initialized to an empty map the state file may be written out with empty
objects rather than "null", changing the checksum. If we can detect
this, increment the serial number to prevent a conflict in atlas.
Our fakeAtlas test server now needs to decode the state directly rather
than using the ReadState function, so as to be able to read the state
unaltered.
The terraform.State data structures have initialization spread out
throughout the package. More thoroughly initialize State during
ReadState, and add a call to init() during WriteState as another
normalization safeguard.
Expose State.init through an exported Init() method, so that a new State
can be completely realized outside of the terraform package.
Additionally, the internal init now completely walks all internal state
structures ensuring that all maps and slices are initialized. While it
was mentioned before that the `init()` methods are problematic with too
many call sites, expanding this out better exposes the entry points that
will need to be refactored later for improved concurrency handling.
The State structures had a mix of `omitempty` fields. Remove omitempty
for all maps and slices as part of this normalization process. Make
Lineage mandatory, which is now explicitly set in some tests.
Some Atlas usage patterns expect to be able to override a variable set
in Atlas, even if it's not seen in the local context. This allows
overwriting a variable that is returned from Atlas, and sends it back.
Also use a unique sentinel value in the context where we have variables
from Atlas. This way Atlas variables aren't combined with the local
variables, and we don't do something like inadvertently change the type,
double encode/escape, etc.
If we have a number value in our config variables, format it as a
string, and send it with the HCL=true flag just in case.
Also use %g for float encoding, as the output is generally a
little friendlier.
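For example, %g keeps whole numbers short where %f would print trailing zeros:
```
package fmtsketch

import "fmt"

// formatNumber: 42 -> "42", 0.5 -> "0.5" (versus "42.000000" with %f).
func formatNumber(f float64) string {
	return fmt.Sprintf("%g", f)
}
```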
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG env variable.
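A minimal sketch of that test setup, assuming the standard library logger; the real code also applies the TF_LOG-based filtering mentioned above.
```
package logsketch

import (
	"flag"
	"io/ioutil"
	"log"
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	flag.Parse()
	if !testing.Verbose() {
		// Silence log output unless tests run with -v.
		log.SetOutput(ioutil.Discard)
	}
	os.Exit(m.Run())
}
```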
The behaviour whereby outputs for a particular nested module can be
output was broken by the changes for lists and maps. This commit
restores the previous behaviour by passing the module path into the
outputsAsString function.
We also add a new test of this since the code path for individual output
vs all outputs for a module has diverged.
The handling of remote variables was completely disabled for push.
We still need to fetch variables from Atlas for push, because if the
variable is only set remotely the Input walk will still prompt the user
for a value. We add the missing remote variables to the context
to disable input.
We now only handle remote variables as atlas.TFVar and explicitly pass
around that type rather than an `interface{}`.
Shorten the test fixture slightly to make the output a little more
readable on failures.
The strings we have in the variables may contain escaped double-quotes,
which have already been parsed and had the `\`s removed. We need to
re-escape these, but only if we are in the outer string and not inside an
interpolation.
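A naive illustration of that idea (not the actual implementation): walk the string, track ${ ... } nesting, and re-escape double quotes only while outside an interpolation.
```
package escapesketch

import "strings"

func reEscapeOuterQuotes(s string) string {
	var b strings.Builder
	depth := 0
	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '$' && i+1 < len(s) && s[i+1] == '{':
			depth++
			b.WriteString("${")
			i++
		case s[i] == '}' && depth > 0:
			depth--
			b.WriteByte('}')
		case s[i] == '"' && depth == 0:
			// Only quotes in the outer string get re-escaped.
			b.WriteString(`\"`)
		default:
			b.WriteByte(s[i])
		}
	}
	return b.String()
}
```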
Add tf_vars to the data structures sent in terraform push.
This takes any value of type []interface{} or map[string]interface{} and
marshals it as a string representation of the equivalent HCL. This
prevents ambiguity in Atlas between a string that looks like a JSON
structure, and an actual JSON structure.
For the time being we will need a way to serialize data as HCL, so the
command package has an internal encodeHCL function to do so. We can
remove this if we get a complete package for marshaling HCL.
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.
Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:
TF_VAR_test='["Hello", "World"]'
will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying
TF_VAR_test_map='{"Hello" = "World", "Foo" = "bar"}'
will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.
The same logic is applied to `-var` flags, and the file parsed by
`-var-files` ("autoVariables").
Note that care must be taken to not run into shell expansion for `-var-`
flags and environment variables.
We also merge map keys where appropriate. The override syntax has
changed (to be noted in CHANGELOG as a breaking change), so several
tests needed their syntax updating from the old `amis.us-east-1 =
"newValue"` style to `amis = "{ "us-east-1" = "newValue"}"` style as
defined in TF-002.
In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
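A rough illustration of that fallback (the real error check is more specific than this): try HCL first, and if it fails treat the value as a plain string.
```
package varparsesketch

import "github.com/hashicorp/hcl"

func parseVarValue(value string) interface{} {
	var decoded map[string]interface{}
	if err := hcl.Decode(&decoded, "v = "+value); err != nil {
		// Not valid HCL (e.g. the bare word in `-var "foo=bar"`), so keep
		// the old plain-string behaviour.
		return value
	}
	return decoded["v"]
}
```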
This is the first step in allowing overrides of map and list variables.
We convert Context.variables to map[string]interface{} from
map[string]string and fix up all the call sites.
* Add scaleway provider
this PR allows the entire scaleway stack to be managed with terraform
example usage looks like this:
```
provider "scaleway" {
api_key = "snap"
organization = "snip"
}
resource "scaleway_ip" "base" {
server = "${scaleway_server.base.id}"
}
resource "scaleway_server" "base" {
name = "test"
# ubuntu 14.04
image = "aecaed73-51a5-4439-a127-6d8229847145"
type = "C2S"
}
resource "scaleway_volume" "test" {
name = "test"
size_in_gb = 20
type = "l_ssd"
}
resource "scaleway_volume_attachment" "test" {
server = "${scaleway_server.base.id}"
volume = "${scaleway_volume.test.id}"
}
resource "scaleway_security_group" "base" {
name = "public"
description = "public gateway"
}
resource "scaleway_security_group_rule" "http-ingress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "inbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
resource "scaleway_security_group_rule" "http-egress" {
security_group = "${scaleway_security_group.base.id}"
action = "accept"
direction = "outbound"
ip_range = "0.0.0.0/0"
protocol = "TCP"
port = 80
}
```
Note that volume attachments require the server to be stopped, which can lead to
downtime if you attach new volumes to servers that are already in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official scaleway api just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
following @stack72's suggestion, rename the provider's api_key for clarity
* Make tests less chatty by using custom logger
This commit removes the ability to index into complex output types using
`terraform output a_list 1` (for example), and adds a `-json` flag to
the `terraform output` command, such that the output can be piped
through a post-processor such as jq or json. This removes the need to
allow arbitrary traversal of nested structures.
It also adds tests of human readable ("normal") output with nested lists
and maps, and of the new JSON output.
When working from an existing plan, we weren't setting the PathOut field
for a LocalState. This required adding an outPath argument to the
StateFromPlan function to avoid having to introspect the returned
state.State interface to find the appropriate field.
To test we run a plan first and provide the new plan to apply with
`-state-out` set.
Data sources are given a special plan output when the diff would show it
creating a new resource, to indicate that there is only a logical data
resource being read. Data sources nested in modules weren't formatted in
the plan output in the same way.
This checks for the "data." part of the path in nested modules, and adds
acceptance tests for the plan output.
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
This commit fixes a crash in `terraform show` where there is no primary
resource, but there is a tainted resource, because of the changes made
to tainted resource handling in 0.7.
This is an effort to address hashicorp/terraform#516.
Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
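A small sketch of marking a schema field sensitive; the resource and field names here are made up for illustration.
```
package sensitivesketch

import "github.com/hashicorp/terraform/helper/schema"

func exampleResource() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"password": {
				Type:      schema.TypeString,
				Required:  true,
				Sensitive: true, // value is hidden in plan/apply output
			},
		},
	}
}
```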