* Implemented EventHubs
* Missing the sidebar link
* Fixing the type
* Fixing the docs for Namespace
* Removing premium tests
* Checking the correct status code on delete
* Added a test case for the import
* Documentation for importing
* Fixing a typo
We now generate the read operation, which sets the various encodings of
the random value so that adding a new encoding does not require
generating a new random value.
We also verify that these are set correctly via the acceptance tests.
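A minimal Go sketch of the idea (the function and the attribute names are illustrative, not the provider's actual code): every encoding is derived from the same stored bytes, so a new encoding can be added later without touching the random value.

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"math/big"
)

// setEncodings is a hypothetical stand-in for the resource's read
// operation: it recomputes each representation from the same bytes,
// so the underlying random value never needs to be regenerated.
func setEncodings(b []byte) map[string]string {
	return map[string]string{
		"hex":     hex.EncodeToString(b),
		"dec":     new(big.Int).SetBytes(b).String(),
		"b64_url": base64.RawURLEncoding.EncodeToString(b),
		"b64_std": base64.StdEncoding.EncodeToString(b),
	}
}

func main() {
	fmt.Println(setEncodings([]byte{0xfb, 0xff, 0xbf, 0x01}))
}
```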
This commit makes three related changes to the `random_id` resource:
1. Deprecate the `b64` attribute
2. Introduce a new `b64_url` attribute which functions in the same
manner as the original `b64` attribute
3. Introduce a new `b64_std` attribute which uses standard base64
encoding for the value rather than URL encoding.
Resource identifiers continue to use URL-encoded base64.
The reason for adding standard encoding of the base64 value is to allow
generated values to be used as a Serf encryption key for separating
Consul clusters: these keys rely on standard encoding and do not permit
some characters that URL encoding allows. `b64_url` is introduced so
that the desired encoding is always specified consistently during
interpolation.
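To make the alphabet difference concrete, a small Go sketch (this is standard library behavior, not provider code): the standard alphabet uses `+` and `/` where the URL-safe alphabet uses `-` and `_`.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Bytes chosen so the output hits the characters where the two
	// alphabets differ.
	b := []byte{0xfb, 0xef, 0xff}
	fmt.Println(base64.StdEncoding.EncodeToString(b)) // "++//"
	fmt.Println(base64.URLEncoding.EncodeToString(b)) // "--__"
}
```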
The documentation mentions that ownership of both VPCs is required for
aws_vpc_peering_connection's auto_accept to work, but if the VPCs are in
separate accounts, auto_accept fails regardless of whether both accounts
are owned.
In #6843 it is stated that aws_vpc_peering_connection only works if both
VPCs are in the same AWS account.
The documentation also fails to mention that peering two VPCs in
different regions is not supported by AWS.
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access (`resource.name.*.attr`)
consistent: it follows the ordering of the count, not the lexical
ordering of the value. This allows two lists that are both indexed by
count index to be assumed related (for example, user data for an AWS
instance, as shown in the issue referenced above).
Two new context tests added to cover this case.
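A hedged illustration in Go of the two orderings (the keys are invented for the example, not Terraform's real internal state keys): lexical ordering puts index 10 before index 2, which breaks the correspondence between parallel count-indexed lists.

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// countIndex extracts the numeric count index from a key like "web.10.id".
func countIndex(k string) int {
	n, _ := strconv.Atoi(strings.Split(k, ".")[1])
	return n
}

func main() {
	keys := []string{"web.0.id", "web.10.id", "web.2.id"}

	// Lexical ordering: "10" sorts before "2" (the regression's behavior).
	lexical := append([]string(nil), keys...)
	sort.Strings(lexical)
	fmt.Println(lexical) // [web.0.id web.10.id web.2.id]

	// Ordering by count index keeps the list aligned with count.index,
	// so parallel lists (e.g. per-instance user data) stay related.
	byCount := append([]string(nil), keys...)
	sort.Slice(byCount, func(i, j int) bool {
		return countIndex(byCount[i]) < countIndex(byCount[j])
	})
	fmt.Println(byCount) // [web.0.id web.2.id web.10.id]
}
```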
Implement debugInfo and the DebugGraph
DebugInfo will be a global variable through which graph debug
information can be written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and no-op with a nil receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and synced individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.
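The nil-receiver no-op is a common Go pattern; a simplified sketch of the shape described above (a plain file stands in for the compressed archive, and the type and method names are hypothetical, not the actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// debugInfo is a sketch of a concurrency-safe debug writer whose
// methods no-op on a nil receiver, so call sites need no nil checks.
type debugInfo struct {
	mu sync.Mutex
	f  *os.File
}

// WriteFile appends data and syncs immediately, so a crash without a
// matching Close still leaves recoverable output behind.
func (d *debugInfo) WriteFile(data []byte) error {
	if d == nil {
		return nil // debugging not enabled: safely do nothing
	}
	d.mu.Lock()
	defer d.mu.Unlock()
	if _, err := d.f.Write(data); err != nil {
		return err
	}
	return d.f.Sync()
}

func main() {
	var d *debugInfo // nil receiver: debugging disabled
	fmt.Println(d.WriteFile([]byte("ignored"))) // prints <nil>
}
```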
The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer, along with Printf and Write methods for appending to
it.
The buffer will be written to an accompanying file in the debug archive
along with the graph.
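A rough Go sketch of that buffer shape (the names are hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
)

// debugGraph pairs a graph's dot output with an annotation buffer
// that is written to a companion file in the debug archive.
type debugGraph struct {
	Name string
	buf  bytes.Buffer
}

// Printf appends formatted annotations to the internal buffer.
func (g *debugGraph) Printf(format string, args ...interface{}) (int, error) {
	return fmt.Fprintf(&g.buf, format, args...)
}

// Write lets the graph be used directly as an io.Writer target.
func (g *debugGraph) Write(p []byte) (int, error) {
	return g.buf.Write(p)
}

func main() {
	g := &debugGraph{Name: "walk"}
	g.Printf("visited %q\n", "aws_instance.web")
	fmt.Print(g.buf.String())
}
```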
This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.
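The interface itself is small; a sketch of how a graph walker might use it (the node type and the caller are invented for illustration, only the `NodeDebug() string` method comes from the description above):

```go
package main

import "fmt"

// GraphNodeDebugger matches the interface described above: any node
// implementing NodeDebug() string can annotate the debug graph.
type GraphNodeDebugger interface {
	NodeDebug() string
}

// exampleNode is a hypothetical graph node for illustration.
type exampleNode struct{ name string }

func (n *exampleNode) NodeDebug() string {
	return "node " + n.name + ": extra debug detail"
}

func main() {
	var v interface{} = &exampleNode{name: "aws_instance.web"}
	// A walker would type-assert each vertex and, when the interface
	// is implemented, record the annotation in the debug log.
	if nd, ok := v.(GraphNodeDebugger); ok {
		fmt.Println(nd.NodeDebug())
	}
}
```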
The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
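A minimal sketch of the delegation shape (the concrete builder here is hypothetical; only the BasicGraphBuilder name and its Name field come from the text above):

```go
package main

import "fmt"

// basicGraphBuilder stands in for BasicGraphBuilder; the Name field
// identifies which concrete builder produced a given debug graph.
type basicGraphBuilder struct {
	Name string
}

func (b *basicGraphBuilder) Build() {
	fmt.Printf("building graph (builder: %s)\n", b.Name)
}

// planGraphBuilder is an invented concrete builder that delegates its
// build to basicGraphBuilder, as the new builders do.
type planGraphBuilder struct{}

func (b *planGraphBuilder) Build() {
	(&basicGraphBuilder{Name: "PlanGraphBuilder"}).Build()
}

func main() {
	(&planGraphBuilder{}).Build()
}
```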