This creates a standard package and interface for defining, querying,
and setting experiments (`-X` flags).
I expect we'll want to continue to introduce various features behind
experimental flags. I want to make doing this as easy as possible and I
want to make _removing_ experiments as easy as possible as well.
The goal with this package has been to rely on the compiler enforcing our
experiment references as much as possible. This means that every
experiment is a global variable that must be referenced directly, so
when it is removed you'll get compiler errors where the experiment is
referenced.
This also unifies CLI flag and environment variable handling, making it
easy to enable or disable experiments from either. Defining an experiment
is just a couple of lines of code (documented on the package).
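A minimal sketch of the pattern (names like `X`, `newBasic`, and the `TF_X_` prefix are illustrative here, not the package's actual API): every experiment is a package-level variable, so deleting one turns every remaining reference into a compile error.

```go
package experiment

import (
	"os"
	"strings"
)

// X is a single experiment that can be toggled via a -X CLI flag
// or an environment variable.
type X struct {
	Name    string
	enabled bool
}

func (x *X) Enabled() bool { return x.enabled }

var all []*X

// newBasic wires up a TF_X_<NAME> environment variable override
// automatically; the -X flag handling (not shown) iterates `all`.
func newBasic(name string) *X {
	x := &X{Name: name}
	if v := os.Getenv("TF_X_" + strings.ToUpper(name)); v != "" {
		x.enabled = v == "1" || strings.EqualFold(v, "true")
	}
	all = append(all, x)
	return x
}

// Defining a new experiment is then just one line:
var Shadow = newBasic("shadow")
```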
Since it is still very much possible for the shadow graph to cause
problems, this flag can be used to disable it. We'll purposely not
document the flag, since the goal is to remove it as we become more
confident in the shadow graph.
Wait for our expected number of goroutines to be staged and ready, then
allow Run() to complete. Use a high-water-mark counter to ensure we
never exceed the maximum expected number of concurrent goroutines at any
point during the run.
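A sketch of the high-water-mark counter (the helper names are ours): each goroutine bumps `current` on entry and decrements it on exit, while a compare-and-swap loop records the peak.

```go
package runner

import "sync/atomic"

// current tracks live goroutines; highWater records the peak.
var current, highWater int64

func enter() {
	n := atomic.AddInt64(&current, 1)
	// Publish a new peak if we just set one; retry on CAS races.
	for {
		hw := atomic.LoadInt64(&highWater)
		if n <= hw || atomic.CompareAndSwapInt64(&highWater, hw, n) {
			return
		}
	}
}

func exit() { atomic.AddInt64(&current, -1) }
```

After Run() returns, the test can then assert `atomic.LoadInt64(&highWater) <= expected`.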
When we parse a map from HCL, it's decoded into a list of maps because
HCL allows declaring a key multiple times to implicitly add it to a
list. Since there's no terraform variable configuration that supports
this structure, we can always flatten a []map[string]interface{} value
to a map[string]interface{} and remove any type errors trying to apply
that value.
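A sketch of the flattening step (the function name is ours):

```go
package config

func flattenMultiMaps(v interface{}) interface{} {
	ms, ok := v.([]map[string]interface{})
	if !ok {
		return v // not the HCL multi-map shape; leave it alone
	}
	flat := make(map[string]interface{})
	for _, m := range ms {
		for k, val := range m {
			flat[k] = val // later keys win for duplicates
		}
	}
	return flat
}
```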
Don't try to parse a variable as HCL if the value can be parsed as a
single number. HCL will always attempt to convert the value to a number,
even if we later find the configured variable's type to be a string.
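A sketch of the guard (the helper name is ours; the `v = ...` wrapping is one way to make a bare value parseable as an HCL assignment): a value that already parses as a plain number is kept verbatim, so the variable's declared type can decide later how to treat it.

```go
package command

import (
	"fmt"
	"strconv"
	"strings"

	"github.com/hashicorp/hcl"
)

func parseVarValue(raw string) (interface{}, error) {
	// Don't let HCL coerce "42" into a number; the declared
	// variable type may turn out to be string.
	if _, err := strconv.ParseFloat(strings.TrimSpace(raw), 64); err == nil {
		return raw, nil
	}
	// Otherwise wrap the value so it parses as an HCL assignment.
	var out map[string]interface{}
	if err := hcl.Decode(&out, fmt.Sprintf("v = %s", raw)); err != nil {
		return nil, err
	}
	return out["v"], nil
}
```

Note that a map value would come back in the []map[string]interface{} shape handled by the flattening described above.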
This commit adds the `state rm` command for removing an address from
state. It is the result of a rebase from pull-request #5953 which was
lost at some point during the Terraform 0.7 feature branch merges.
Fix checksum issue with remote state
If we read a state file with "null" objects in a module and they become
initialized to an empty map, the state file may be written out with empty
objects rather than "null", changing the checksum. If we can detect
this, increment the serial number to prevent a conflict in Atlas.
Our fakeAtlas test server now needs to decode the state directly rather
than using the ReadState function, so as to be able to read the state
unaltered.
The terraform.State data structures have initialization spread
throughout the package. More thoroughly initialize State during
ReadState, and add a call to init() during WriteState as another
normalization safeguard.
Expose State.init through an exported Init() method, so that a new State
can be completely realized outside of the terraform package.
Additionally, the internal init now completely walks all internal state
structures ensuring that all maps and slices are initialized. While it
was mentioned before that the `init()` methods are problematic with too
many call sites, expanding this out better exposes the entry points that
will need to be refactored later for improved concurrency handling.
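As a rough fragment of the shape of that walk (the State, ModuleState, and RemoteState names are the real structures; the bodies here are abbreviated and illustrative):

```go
func (s *State) init() {
	if s.Version == 0 {
		s.Version = StateVersion
	}
	if s.Modules == nil {
		s.Modules = make([]*ModuleState, 0, 1)
	}
	for _, mod := range s.Modules {
		mod.init() // recursively ensure Outputs, Resources, etc. are non-nil
	}
	if s.Remote != nil {
		s.Remote.init()
	}
}

// Init is the exported entry point, so a new State can be fully
// realized outside the terraform package.
func (s *State) Init() {
	s.init()
}
```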
The State structures had a mix of `omitempty` fields. Remove omitempty
for all maps and slices as part of this normalization process. Make
Lineage mandatory, which is now explicitly set in some tests.
Some Atlas usage patterns expect to be able to override a variable set
in Atlas, even if it's not seen in the local context. This allows
overriding a variable returned from Atlas and sending it back.
Also use a unique sentinel value in the context for variables that come
from Atlas. This way Atlas variables aren't combined with the local
variables, and we don't do something like inadvertently change the type,
double encode/escape, etc.
If we have a number value in our config variables, format it as a
string, and send it with the HCL=true flag just in case.
Also use %g for float encoding, as the output is generally a little
friendlier.
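A sketch of that encoding, assuming the atlas.TFVar shape mentioned below (treat the function name and exact fields as illustrative):

```go
package command

import (
	"fmt"

	atlas "github.com/hashicorp/atlas-go/v1"
)

func floatTFVar(key string, f float64) atlas.TFVar {
	return atlas.TFVar{
		Key:   key,
		Value: fmt.Sprintf("%g", f), // %g: 42.0 -> "42", 0.5 -> "0.5"
		HCL:   true,                 // ask the remote side to parse as HCL, just in case
	}
}
```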
Set the default log package output to ioutil.Discard during tests if the
`-v` flag isn't set. If we are verbose, then apply the filter according
to the TF_LOG environment variable.
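A sketch of the TestMain hook (here `logging.SetOutput` stands in for whatever helper applies the TF_LOG filter):

```go
package command

import (
	"flag"
	"io/ioutil"
	"log"
	"os"
	"testing"

	"github.com/hashicorp/terraform/helper/logging"
)

func TestMain(m *testing.M) {
	flag.Parse() // must run before testing.Verbose() is meaningful
	if testing.Verbose() {
		logging.SetOutput() // respect TF_LOG filtering
	} else {
		log.SetOutput(ioutil.Discard) // silence logs in quiet runs
	}
	os.Exit(m.Run())
}
```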
The ability to show the outputs of a particular nested module was
broken by the changes for lists and maps. This commit restores the
previous behaviour by passing the module path into the outputsAsString
function.
We also add a new test of this, since the code path for an individual
output vs. all outputs for a module has diverged.
The handling of remote variables was completely disabled for push.
We still need to fetch variables from Atlas for push: if a variable is
only set remotely, the Input walk will still prompt the user for a
value. We add the missing remote variables to the context to disable
input.
We now only handle remote variables as atlas.TFVar and explicitly pass
around that type rather than an `interface{}`.
Shorten the test fixture slightly to make the output a little more
readable on failures.
The strings we have in the variables may contain escaped double-quotes,
which have already been parsed and had the `\`s removed. We need to
re-escape these, but only if we are in the outer string and not inside an
interpolation.
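A sketch of that re-escaping pass (ours, not the exact code): walk the string tracking interpolation depth, and only re-escape double quotes while outside any `${...}`.

```go
package config

import "bytes"

func reEscapeQuotes(s string) string {
	var buf bytes.Buffer
	depth := 0 // how many ${ ... } levels we are inside
	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '$' && i+1 < len(s) && s[i+1] == '{':
			depth++
			buf.WriteString("${")
			i++ // consume the '{' as well
		case s[i] == '}' && depth > 0:
			depth--
			buf.WriteByte('}')
		case s[i] == '"' && depth == 0:
			buf.WriteString(`\"`) // re-escape only in the outer string
		default:
			buf.WriteByte(s[i])
		}
	}
	return buf.String()
}
```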
Add tf_vars to the data structures sent in terraform push.
This takes any value of type []interface{} or map[string]interface{} and
marshals it as a string representation of the equivalent HCL. This
prevents ambiguity in Atlas between a string that merely looks like a
JSON structure and an actual JSON structure.
For the time being we will need a way to serialize data as HCL, so the
command package has an internal encodeHCL function to do so. We can
remove this if we get a complete package for marshaling HCL.
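One way such an encodeHCL helper can work (a sketch, not necessarily the committed implementation): round-trip through HCL's JSON parser rather than marshaling HCL directly.

```go
package command

import (
	"bytes"
	"encoding/json"

	"github.com/hashicorp/hcl/hcl/printer"
	jsonParser "github.com/hashicorp/hcl/json/parser"
)

func encodeHCL(i interface{}) ([]byte, error) {
	// JSON is a shape we can reliably produce...
	jsonBytes, err := json.Marshal(i)
	if err != nil {
		return nil, err
	}
	// ...HCL can parse JSON into its own AST...
	ast, err := jsonParser.Parse(jsonBytes)
	if err != nil {
		return nil, err
	}
	// ...and the printer renders that AST as native HCL text.
	var buf bytes.Buffer
	if err := printer.Fprint(&buf, ast); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```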
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.
Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:
TF_VAR_test='["Hello", "World"]'
will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying
TF_VAR_test_map='{"Hello = "World", "Foo" = "bar"}'
will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.
The same logic is applied to `-var` flags, and the file parsed by
`-var-file` ("autoVariables").
Note that care must be taken to avoid shell expansion for `-var` flags
and environment variables.
We also merge map keys where appropriate. The override syntax has
changed (to be noted in the CHANGELOG as a breaking change), so several
tests needed their syntax updating from the old `amis.us-east-1 =
"newValue"` style to the `amis = { "us-east-1" = "newValue" }` style
defined in TF-002.
In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
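A sketch of that special case, reusing `parseVarValue` from the earlier sketch (the error detection here is simplified to "any parse failure"; names are ours):

```go
package command

import (
	"fmt"
	"strings"
)

func parseVarFlag(input string) (key string, value interface{}, err error) {
	idx := strings.Index(input, "=")
	if idx == -1 {
		return "", nil, fmt.Errorf("missing '=' in -var %q", input)
	}
	key, raw := input[:idx], input[idx+1:]

	// Numbers and HCL structures go through the HCL parser.
	if parsed, err := parseVarValue(raw); err == nil {
		return key, parsed, nil
	}
	// Not valid HCL (e.g. -var "foo=bar"): keep the old behaviour
	// and treat the value as a bare string.
	return key, raw, nil
}
```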
This is the first step in allowing overrides of map and list variables.
We convert Context.variables to map[string]interface{} from
map[string]string and fix up all the call sites.
* Add scaleway provider
This PR allows the entire Scaleway stack to be managed with Terraform.
Example usage looks like this:
```
provider "scaleway" {
  api_key      = "snap"
  organization = "snip"
}

resource "scaleway_ip" "base" {
  server = "${scaleway_server.base.id}"
}

resource "scaleway_server" "base" {
  name = "test"

  # ubuntu 14.04
  image = "aecaed73-51a5-4439-a127-6d8229847145"
  type  = "C2S"
}

resource "scaleway_volume" "test" {
  name       = "test"
  size_in_gb = 20
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  server = "${scaleway_server.base.id}"
  volume = "${scaleway_volume.test.id}"
}

resource "scaleway_security_group" "base" {
  name        = "public"
  description = "public gateway"
}

resource "scaleway_security_group_rule" "http-ingress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "inbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}

resource "scaleway_security_group_rule" "http-egress" {
  security_group = "${scaleway_security_group.base.id}"

  action    = "accept"
  direction = "outbound"
  ip_range  = "0.0.0.0/0"
  protocol  = "TCP"
  port      = 80
}
```
Note that volume attachments require the server to be stopped, which can
lead to downtime if you attach new volumes to servers that are already
in use.
* Update IP read to handle 404 gracefully
* Read back resource on update
* Ensure IP detachment works as expected
Sadly this is not part of the official Scaleway API just yet
* Adjust detachIP helper
based on feedback from @QuentinPerez in
https://github.com/scaleway/scaleway-cli/pull/378
* Cleanup documentation
* Rename api_key to access_key
Following @stack72's suggestion, rename the provider's api_key for
clarity.
* Make tests less chatty by using custom logger
This commit removes the ability to index into complex output types using
`terraform output a_list 1` (for example), and adds a `-json` flag to
the `terraform output` command, such that the output can be piped
through a post-processor such as jq or json. This removes the need to
allow arbitrary traversal of nested structures.
It also adds tests of human readable ("normal") output with nested lists
and maps, and of the new JSON output.
When working from an existing plan, we weren't setting the PathOut field
for a LocalState. This required adding an outPath argument to the
StateFromPlan function to avoid having to introspect the returned
state.State interface to find the appropriate field.
To test we run a plan first and provide the new plan to apply with
`-state-out` set.
Data sources are given a special plan output when the diff would
otherwise show the creation of a new resource, to indicate that only a
logical data resource is being read. Data sources nested in modules
weren't formatted in
the plan output in the same way.
This checks for the "data." part of the path in nested modules, and adds
acceptance tests for the plan output.
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
This commit fixes a crash in `terraform show` where there is no primary
resource, but there is a tainted resource, because of the changes made
to tainted resource handling in 0.7.
This is an effort to address hashicorp/terraform#516.
This adds the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
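For example, a resource maintainer can now mark a field like this (the resource and field names here are illustrative):

```go
package example

import "github.com/hashicorp/terraform/helper/schema"

func resourceExampleService() *schema.Resource {
	return &schema.Resource{
		Schema: map[string]*schema.Schema{
			"password": {
				Type:      schema.TypeString,
				Required:  true,
				Sensitive: true, // value is redacted in CLI output
			},
		},
	}
}
```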
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background"
to saying "I want that resource to be recreated (taking into account the
existing resource and its place in the graph)".
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
This provider will have logical resources that allow Terraform to "manage"
randomness as a resource, producing random numbers on create and then
retaining the outcome in the state so that it will remain consistent
until something explicitly triggers generating new values.
Managing randomness in this way allows configurations to do things like
random distributions and ids without causing "perma-diffs".
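A sketch of one such logical resource (the resource and field names are illustrative, not the provider's final schema): the value is generated once in Create and then lives only in state, so later plans show no diff.

```go
package random

import (
	"math/rand"
	"strconv"

	"github.com/hashicorp/terraform/helper/schema"
)

func resourceRandomInteger() *schema.Resource {
	return &schema.Resource{
		Create: createRandomInteger,
		Read:   schema.Noop,            // nothing to refresh from
		Delete: schema.RemoveFromState, // nothing real to destroy

		Schema: map[string]*schema.Schema{
			"max": {
				Type:     schema.TypeInt,
				Required: true,
				ForceNew: true, // changing "max" explicitly triggers a new value
			},
			"result": {
				Type:     schema.TypeInt,
				Computed: true, // persisted in state, stable across runs
			},
		},
	}
}

func createRandomInteger(d *schema.ResourceData, meta interface{}) error {
	n := rand.Intn(d.Get("max").(int))
	d.Set("result", n)
	d.SetId(strconv.Itoa(n))
	return nil
}
```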
Internally a data source read is represented as a creation diff for the
resource, but in the UI we'll show it as a distinct icon and color so that
the user can more easily understand that these operations won't affect
any real infrastructure.
Unfortunately by the time we get to formatting the plan in the UI we
only have the resource names to work with, and can't get at the original
resource mode. Thus we're forced to infer the resource mode by exploiting
knowledge of the naming scheme.
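A sketch of that inference (the helper is ours): an address is treated as a data source when a "data." element appears in its path, including under nested modules.

```go
package format

import "strings"

func isDataSourceAddr(addr string) bool {
	return strings.HasPrefix(addr, "data.") ||
		strings.Contains(addr, ".data.")
}
```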
New resources logically don't have "old values" for their attributes, so
showing them as updates from the empty string is misleading and confusing.
Instead, we'll skip showing the old value in a creation diff.
Data resources don't have ids when they refresh, so we'll skip showing the
"(ID: ...)" indicator for these. Showing it with no id makes it look
like something is broken.