The config loader already extracts this during its initial pass and saves
it as a separate field in a configs.Connection value, so requiring it
again here causes confusing errors about this attribute being provided but
not set.
These are some remnants of the shadow graph functionality that was added
to support the graph builder changes in v0.8, but that has since been
removed and so there are no remaining callers for these types and
functions.
The contract for this function has changed as part of the 0.12
reorganization so that, while before it would accept a partial variables
map if all missing variables were optional, it now expects that the caller
has already collected values for all variables and is just checking them
for type-correctness here.
Although this rarely matters, making it always be nil when empty makes
deep assertions simpler in tests.
This also includes a minor update to the test in the core package that
first encountered this problem, to improve the quality of its output
on failure.
These tests now need schema available to run properly, so rather than
crafting a separate schema for each of these we'll just use the one
created by simpleMockProvider, which contains a resource type called
test_object that we'll now use as the basis for all of the tests here.
In this codepath we're still using the old "internal" ResourceAddress
parser to parse resource addresses from the diff, but the string being
parsed does not include a module path and so the result is interpreted
as a resource in the root module.
We need to rewrite this to be in the correct module path as part of
converting to the new address representation, in order to get a correct
absolute address.
A trivial mistake in the rework of this function meant that it was just
discarding its result rather than returning it. It will now return its
result as expected, allowing reference analysis to work for this node
type.
The final "<resource> provided by <provider>" message was showing the
original selection made by the resource itself rather than the provider
address finally resolved after inheritance, which caused the log to
misrepresent what was actually being created in the graph.
Previously this was just stubbing out provider types, but we now need to
include schema for each of the providers and resource types the tests
use in order for the references to be properly detected.
The test fixtures are adjusted slightly here so we can use the
simpleTestSchema as the schema for all of the different blocks in these
tests. The relationships between the resources are still preserved, but
the attributes are renamed to comply with this schema.
Since expression evaluation is now done via an EvalContext method, and
since these tests use the MockEvalContext, we need to provide a value for
the mock call to return. Previously expression evaluation happened at a
different layer that was not mocked here.
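A sketch of the kind of setup now required, assuming the mock follows
its usual convention of one canned result field per mocked method:

    ctx := new(MockEvalContext)
    // Canned value for the mocked expression evaluation to return;
    // without it the mock hands back a zero value and the test fails
    // in a confusing way.
    ctx.EvaluateExprResult = cty.StringVal("hello")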
These transformers both construct temporary graphs using many of the same
transformers used in the apply graph, and properly doing this now requires
access to the providers and provisioners in order to obtain their schemas.
Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
Having a missing schema is a programming error, but at the time of this
commit we're in the midst of introducing schema all over Terraform and so
there are inevitably some places -- particularly in older unit tests --
where schema isn't yet being provided.
This error allows us to catch those cases and fail gracefully, rather
than panicking further down here when we access t.Components methods.
Also includes some additional logging to aid debugging of this
transformer.
Previously we were attempting to construct a default manually here, and
actually getting it wrong by using the resource type name as a whole
rather than the expected inference by prefix.
Now we use the method provided in the addrs package for this purpose,
which implements the standard behavior of shaving off the first
underscore-separated word from the resource type name.
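The behavior is equivalent to something like this sketch (the real
helper lives in the addrs package; this standalone version is just for
illustration):

    // Infer the default provider from a resource type name by taking
    // everything before the first underscore, so "aws_instance"
    // implies the "aws" provider.
    func defaultProviderType(resourceType string) string {
        if under := strings.Index(resourceType, "_"); under != -1 {
            return resourceType[:under]
        }
        return resourceType
    }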
Now that any configuration processing requires schema, we need either a
standalone schema or a provider/provisioner configured with one a lot more
often in tests.
To avoid adding loads of extra boilerplate to the tests, these new helper
functions produce objects pre-configured with a schema that should be
useful for a number of different cases, and can be customized further for
more interesting situations.
A lot of our tests can then just use these pre-defined object and
attribute names in their fixtures in situations where the canned schemas
here are good enough.
The logic under test has intentionally changed here so that setting count
to any value -- even 1 -- causes Terraform to prefer to keep indexed
instances if both non-indexed and indexed are present.
Previously Terraform treated count = 1 as equivalent to count not set at
all, but we now recognize these as two different situations and treat
count = 1 as the same as any other non-zero count, to ensure that if
count is set conditionally it'll always produce indexed instances, even
if the dynamic expression ends up evaluating to 1.
This function was previously checking for a path length greater than one
because the older path format included an always present "root" element
at the start.
We now need to check for a totally-empty list, because otherwise we fail
to add the expected prefix to the front of a path with only one element.
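The shape of the fix, sketched with hypothetical names:

    // With the old "root"-prefixed format, len(path) > 1 identified a
    // path needing the prefix. In the new format a one-element path
    // needs it too; only a totally-empty path does not.
    if len(path) > 0 { // previously: len(path) > 1
        path = append(prefix, path...)
    }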
This also includes some adjustments to the related tests and transforms
that do not change behavior but do make the test results easier to
understand and debug.
Due to some disagreement about what representation of provider addresses
we were using, the inherited providers map wasn't matching. Now we'll
consistently use the "compact" form (just the provider name and optional
alias).
Also includes some other tweaks to make this test better-behaved.
Although the AddModule method now takes a new-style
module address as an argument, internally we're still
shimming it to a []string.
Therefore this test needs to still expect [][]string as
a result, rather than []addrs.ModuleInstance.
Prior to the refactoring to move provider address parsing/selection up
into the frontend, there was some logic here to just-in-time default a
provider config based on the given resource type.
This is now expected to happen at a higher layer, with ImportTarget
expecting an already-valid provider configuration address.
The normal import codepath was already updated with this in mind, but some
of the provider transform tests are using ImportStateTransformer as a
shortcut for getting some resource nodes added to the graph, and so those
tests now need to include a valid provider address in their ImportTarget
values.
Also includes some adjustments to test output to make the tests easier
to debug.
The prior implementation of this function (before the rewrite to use
addrs.AbsProviderConfig values) was relying on some implicit behavior of
our provider address normalization to generate an address in the root
module. Since that wasn't explicit, I introduced a bug here when
introducing the new address type, where I generated an address in the
node's own module, rather than in the root as expected.
This new implementation is functionally equivalent to the prior, but is
written to make the intent more obvious: take _just_ the type from the
node's provider address and create an implicit configuration for it
_in the root module_.
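In sketch form, using the addrs API roughly as described:

    // Take just the type from the node's provider address and return
    // the default (unaliased) configuration for that type in the root
    // module.
    func implicitProviderAddr(addr addrs.AbsProviderConfig) addrs.AbsProviderConfig {
        return addrs.RootModuleInstance.ProviderConfigDefault(addr.ProviderConfig.Type)
    }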
Along with the change in approach, this new implementation also has an
updated documentation comment that better describes its current intent
(previously it was outdated) and hopefully clearer trace logging to
better communicate what it's doing.
In the change to using addrs types rather than string keys directly here
I incorrectly made this use the _relative_ provider config instead of
the absolute one, causing MissingProviderTransformer to only match
providers defined in the root module (due to ambiguity in the string
representations of these address types).
The rest of this change is improved logging and test output that helped
with debugging this issue.
Some of these tests were using an outdated form to reference a local
directory as a module. While here, I also updated the provider references
to the new unquoted form, which is preferred from v0.12 onwards.
The changes to our GraphNodeReferencable and GraphNodeReferencer
interfaces were not also reflected in our testing structs here, and so
these tests were no longer working.
This updates these implementations to the new required signatures,
adapting the simplified model used in the structs to generate local
variable references, since the reference model no longer uses the
arbitrary strings that this test originally depended on.
We also remove some of the tests here since the functionality they were
testing no longer applies: inter-module dependencies are now handled by
the graph nodes producing extra ReferencableAddrs, and the "backup" form
is no longer used because our new address model is able to distinguish
resources from resource instances without the need for the magical
backup reference forms we previously used.
Resolution of dependencies automatically from expressions now requires
a schema to be available. To avoid the need to provide mock schemas for
all of these different resource types, we instead use the depends_on
argument here to mark the dependencies explicitly.
Previously we had a bug where we'd use the resource type name instead of
the provider name. Now we use the DefaultProviderConfig helper function
to extract the provider name from the resource type name.
The ReferenceTransformer can't do its work here unless our resources have
schema attached, since otherwise it doesn't know which attributes to
search to find references.
Now that a schema is required to decode resource configuration, a lot more
tests than before need access to mock providers in order to get a schema.
These new helpers allow a mock provider with a schema to be constructed
concisely.
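One of these helpers, roughly (reusing the names that appear in the
tests; field names are as assumed for the mock):

    // simpleMockProvider returns a MockProvider pre-configured with a
    // schema containing a "test_object" resource type, so that most
    // tests don't need to declare a schema of their own.
    func simpleMockProvider() *MockProvider {
        return &MockProvider{
            GetSchemaReturn: &ProviderSchema{
                Provider: simpleTestSchema(),
                ResourceTypes: map[string]*configschema.Block{
                    "test_object": simpleTestSchema(),
                },
            },
        }
    }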
This test required some adaptation for the new variable handling model in
the HCL2-oriented codepaths, where more of the handling is now done in the
"command" package.
The initial reorganization of this test was not correct, so this is a
further revision to adapt the original assertions to the new model as much
as possible, removing a few tests that no longer make sense in the new
world.
A number of our test fixtures were previously using the non-idiomatic form
of including a single child attribute all on one line with the block
header and bounding braces.
This non-idiomatic form is an error in HCL2, and hclfmt has always "fixed"
it to the expected form of each attribute being on a line of its own, and
so here we just update all the affected test fixtures to canonical form
(using hclfmt), allowing them to be parsed as intended.
Since these entire files were processed with hclfmt, there are some
other unrelated style changes included in situations where the file
layouts were non-idiomatic in other ways.
These are all things that ought to be present in normal use but can end up
being nil in incorrect tests. Test debugging is simpler if these things
return errors gracefully, rather than crashing.
While there's no good reason for this to happen in practice, it can arise
in tests if mocks aren't set up quite right, and so we'll catch it and
report it nicely to make test debugging a little easier.
There's actually no good reason for this to happen in the normal case, but
it can happen reasonably easy if a test doesn't properly configure the
MockEvalContext, and so having this check here makes test debugging a
little easier.
Since this test case is using t.Run, it must be sure to pass the nested
*testing.T over to the testDiffs function or else calls to t.Fatal will
crash the whole test process, since they would otherwise be called on the
wrong instance of *testing.T.
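That is, the helper must be handed the subtest's own T:

    t.Run("name", func(t *testing.T) {
        // Must use the nested T: calling t.Fatal on the enclosing
        // test's T from inside the subtest would crash the whole test
        // process rather than failing just this case.
        testDiffs(t) // other arguments elided
    })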
The only reason these cases are arising right now is because we have tests
that haven't yet been updated to properly support schema, but it can't
hurt to add some robustness here to reduce the risk of real crashes.
After the refactoring to integrate HCL2 many of the tests were no longer
using correct types, attribute names, etc.
This is a bulk update of all of the tests to make them compile again, with
minimal changes otherwise. Although the tests now compile, many of them
do not yet pass. The tests will be gradually repaired in subsequent
commits, as we continue to complete the refactoring and retrofit work.
Some of the objects that are referencable from expressions are transient
values computed only during a graph walk, and not persisted in state. In
order to support arbitrary evaluation of expressions, such as in the
"terraform console" CLI command, it's necessary to be able to evaluate
these values before we start evaluating.
This new Eval method achieves this by performing a special graph walk that
ignores resources (except for dependency resolution) and just focuses on
evaluating all of these transient values, before returning an evaluation
scope that can then resolve expressions in terms of that result.
This replaces the Context.Interpolator method, which was fraught with
various issues due to it not properly priming the state before evaluating.
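Usage ends up looking roughly like this, as in "terraform console"
(signatures as described above):

    // Walk the graph far enough to evaluate all transient values, then
    // obtain a scope for resolving arbitrary expressions.
    scope, diags := ctx.Eval(addrs.RootModuleInstance)
    if diags.HasErrors() {
        return diags.Err()
    }
    val, moreDiags := scope.EvalExpr(expr, cty.DynamicPseudoType)
    // ...render val for the user, collect moreDiags, etc.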
Here we replace the stub implementations in evaluationStateData with real
implementations that are based on their equivalents in the old
Interpolator type.
The behavior here is a little different due to the different semantics
expected under HCL2, but the principle remains the same: the main
references are resolved from the state, using config for validation
in order to produce some helpful error messages.
I took some missteps here while doing the initial refactor for HCL2 types.
This restores the map of maps that retains all of the variable values, and
then makes it available to the evaluator.
Previously our evaluationStateData object was constructed inside
Evaluator.Scope, but this was awkward because all of the fields inside it
need to be populated from BuiltinEvalContext fields, and so the signature
of Evaluator.Scope kept growing new arguments over time.
Instead, we reassign the responsibilities here so that Evaluator.Scope
takes an already-constructed lang.Data, and then teach BuiltinEvalContext
to build this object itself from its own internal values.
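Sketched with assumed field and parameter names, the new shape is
approximately:

    // BuiltinEvalContext now builds the lang.Data itself from its own
    // fields, so Evaluator.Scope no longer needs them as arguments.
    func (ctx *BuiltinEvalContext) EvaluationScope(self addrs.Referenceable, key addrs.InstanceKey) *lang.Scope {
        data := &evaluationStateData{
            Evaluator:  ctx.Evaluator,
            ModulePath: ctx.PathValue,
        }
        return ctx.Evaluator.Scope(data, self)
    }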
Due to a logic error here we were trying to find our module's parent
as a descendant of itself, rather than as a descendant of the root. It
turns out that we can do this even more simply by just accessing the
Parent field on the given config, avoiding the need to traverse the tree
down from the root at all.
While here, this also switches to using the path.Call helper method rather
than manually slicing the path array, since this better communicates our
intent.
This is a little awkward since we need to instantiate the providers much
earlier than before. To avoid a lot of reshuffling here we just spin each
one up and then immediately shut it down again, letting our existing init
functionality during the graph walk still do the main initialization.
We've not yet adjusted any of the state structs to reflect our new address
types because they are used with encoding/json to produce our state file
format, but the shimming here previously was incorrect because it failed
to include the special "root" string that's always required at element
zero of a module path in the state.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
  older models in the "config" package, which is now deprecated and
  preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
  new "lang" package, instead of the Interpolator type and related
  functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
  rather than hand-constructed strings. This is not critical to support
  the above, but was a big help during the implementation of these other
  points since it made it much more explicit what kind of address is
  expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
This is currently not very ergonomic due to the API exposed by providers.
We'll smooth this out in a later change to improve the provider API, since
we know we always want the entire schema.
Initially the intent here was to tease these apart a little more since
they don't really share much behavior in common in core, but in practice
it'll take a lot of refactoring to tease apart these assumptions in core
right now and so we'll keep these things unified at the configuration
layer in the interests of minimizing disruption at the core layer.
The two types are still kept in separate maps to help reinforce the fact
that they are separate concepts with some behaviors in common, rather than
the same concept.
For the moment this is just a lightly-adapted copy of
ModuleTreeDependencies named ConfigTreeDependencies, with the goal that
the two can live concurrently for the moment while not all callers are yet
updated and then we can drop ModuleTreeDependencies and its helper
functions altogether in a later commit.
This can then be used to make "terraform init" and "terraform providers"
work properly with the HCL2-powered configuration loader.
This is a rather-messy, complex change to get the "command" package
building again against the new backend API that was updated for
the new configuration loader.
A lot of this is mechanical rewriting to the new API, but
meta_config.go and meta_backend.go in particular saw some major
changes to interface with the new loader APIs and to deal with
the change in order of steps in the backend API.
When computing the count value, make sure to include walkDestroy with
walkApply, as the former is only a special case of the latter.
When applying a saved plan, the computed count values are lost and we
can no longer query the state for those values. The apply walk was
already considered in the `resourceCountMax` function, but the destroy
walk was not. This worked when destroying in a single operation
("terraform destroy"), since the state would still be updated with the
latest counts from the plan.
During a full destroy when outputs are removed, the
NodeDestroyableOutput was preventing it's sibling output from being
destroyed. Prune the output node if it only has its destroy node as a
dependent.
The destroy output test is simply run a second time with no state, which
would cause the output interpolation to fail if it remained in the
graph.
If an existing resource is scaled back to 0, locals and outputs will
still have a multi-variable reference to evaluate, which should return
an empty list. Due to how the resource is removed, the resource will
still exist in the state but with no primary instance, which needs to be
ignored in the instance count.
filterPartialOutputs was not taking into account that some dependent
resources might yet be removed from the graph. Check that they are not
in the targeted set before declaring that the output must remain.
ReadState would hide any errors, assuming that it was an empty state.
This can mask errors on Windows, where the OS enforces read locks on the
state file.
The provisionerFail_createBeforeDestroy test was verifying the incorrect
output. The create_before_destroy instance in the state has an ID of
"bar" with require_new="abc", and a new instance would get an ID of
"foo" with require_new="xyz". The existing test was expecting the
following state:
    aws_instance.bar: (1 deposed)
      ID = bar
      provider = provider.aws
      require_new = abc
      Deposed ID 1 = foo (tainted)
Which showed "bar" still the primary instance in the state, with the new
instance "foo" as being the deposed instance, though properly tainted.
The new output is:
    aws_instance.bar: (tainted) (1 deposed)
      ID = foo
      provider = provider.aws
      require_new = xyz
      type = aws_instance
      Deposed ID 1 = bar
Showing the new "foo instance as being the primary instance in the
state, with "bar" as the deposed instance.
If a count field references another count field which is interpolated
but is attached to a resource already in the state, the result of that
first interpolation will be lost when a plan is serialized. This is
because the result of the first interpolation is stored directly in the
module config, in an unexported config field.
This is not a general fix for the above situation, which would require
refactoring how counts are handled throughout the config. Ignoring the
error works, because in most cases the count will be properly
handled during the resource's interpolation.
An interpolated count value that is determined during plan is lost
during plan serialization, causing apply to fail when the interpolation
string can't be evaluated.
While not normally possible, manual manipulation of the state and config
can cause us to end up with a nil config in
evalTreeManagedResourceNoState.
Regardless of how it got here, we can't ever assume the Config field is
not nil, and EvalInterpolate happily accepts a nil RawConfig.
Add `host_key` and `bastion_host_key` fields to the ssh communicator
config for strict host key checking.
Both fields expect the contents of an OpenSSH-formatted public key. This
key can either be the remote host's public key, or the public key of the
CA which signed the remote host certificate.
Support for signed certificates is limited, because the provisioner
usually connects to a remote host by ip address rather than hostname, so
the certificate would need to be signed appropriately. Connecting via
a hostname needs to currently be done through a secondary provisioner,
like one attached to a null_resource.
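Under the hood this amounts to strict checking along these lines with
x/crypto/ssh (a sketch, not the communicator's exact code):

    // hostKey holds the contents of the new host_key field. CA-signed
    // host certificates would go through ssh.CertChecker instead.
    pub, _, _, _, err := ssh.ParseAuthorizedKey([]byte(hostKey))
    if err != nil {
        return err
    }
    config := &ssh.ClientConfig{
        User:            user,
        HostKeyCallback: ssh.FixedHostKey(pub), // reject any other key
    }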
Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to stay
in the graph if any ancestor nodes do.
Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.
This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.
Prune any local or output values that have no references in the graph.
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.
Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove an output during an apply, we add a new node -
NodeDestroyableOutput.
This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.
Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Add a complex destroy provisioner testcase using locals, outputs and
variables.
Add that pesky "id" attribute to the instance states for interpolation.
Destroy-time provisioners require us to re-evaluate during destroy.
Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.
Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.
This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written first to disk since in that case the interpolation result from
during the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
Containers (maps, lists, sets) in an InstanceDiff need to be handled in
their entirety. Unchanged values cannot be filtered out from diffs, as
providers expect attribute containers to be complete.
If a value in ignore_changes maps to a single key in an attribute
container, and there are other changes present, that ignored value must
be included in the diff as well.
There was no test checking that Close was called on the mock provider.
This fails now since the CloseProviderTransformer isn't using the fully
resolved provider name.
The interrupt tests for providers no longer check for the condition
during the diff operation. Defer the unlock so other tests' DiffFns don't
need to be as careful about locking themselves.
The init command needs to parse the state to resolve providers, but
changes to the state format can cause that to fail with
difficult-to-understand errors. Check the terraform version during init
and provide the same error that would be returned by plan or apply.
The bounds checking in ResourceConfig.get() was insufficient: it detected
when the index was greater than or equal to cv.Len() but not when the
index was less than zero. If the user provided an (invalid) configuration
that referenced "foo.-1.bar", the provider would panic.
Now it behaves the same way as if the index were too high.
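The fix is a symmetric bounds check, roughly:

    // cv is the reflect.Value for the container. Previously only the
    // upper bound was checked, so a negative index like "foo.-1.bar"
    // indexed before the start of the slice and panicked.
    if i < 0 || i >= cv.Len() {
        return nil, false // out of range in either direction
    }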
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
The provider name coming from ProvidedBy may be resolved if it only
exists in the state. Make sure to strip the module and provider
prefixes for the provider name when adding missing providers.
It's become apparent that passing in a provider config for an implicitly
used provider would be very useful. While the ProviderConfigTransformer
efficiently added providers to the graph, the algorithm was reversed
from what would be needed to allow overriding implicit providers.
Change the ProviderConfigTransformer to first add all configured
providers, even if they are empty stubs. Then run through all providers
being passed in from the parent, and replace the provider nodes we
created with proxies, and add implicit proxies where none existed. The
extra nodes will then be pruned later.
There was a bug where all references would be discarded in the case when
a self-reference was encountered. Since a module references all
descendants by its own path, it returns a self-reference by definition.
Our new resource-to-provider matching is stricter about explicitly
matching aliases when config is present (no longer automatically
inherited) and with locating providers to destroy removed resources.
With this in mind, this is an attempt to expand slightly on this error
message now that users are more likely to see it.
In future it would be nice to do some explicit validation of this a bit
closer to the UI, so we can have room for more explanatory text, but this
additional messaging is intended to help users understand why they might
be seeing this message after removing a provider configuration block from
configuration, whether directly or as a side-effect of removing a module.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph then remove it from the state. The
NodeModuleRemoved dependencies are simply any other node which contains
the module path as a prefix in its path.
This could probably have been done much more easily as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
You can't find orphans by walking the config, because by definition
orphans aren't in the config.
Leaving the broken test for when empty modules are removed from the
state as well.
Now that the resolved provider is always stored in state, we need to
update all the test data to match. There will probably be some more
breakage once the provider field is properly diffed.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations which might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.
The DisableProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers. There
actually shouldn't be any unused providers going into the graph any
longer, but put off verifying that condition for later. Replace its
usage with the PruneProviderTransformer, and use that to also remove the
unneeded proxy provider nodes.
Implement the adding of providers through the module/providers map in the
configuration.
The way this works is that we start walking the module tree from the
top, and for any instance of a provider that can accept a configuration
through the parent's module/provider map, we add a proxy node that
provides the real name and a pointer to the actual parent provider node.
Multiple proxies can be chained back to the original provider. When
connecting resources to providers, if that provider is a proxy, we can
then connect the resource directly to the proxied node. The proxies are
later removed by the DisableProviderTransformer.
This should re-instate the 0.11 beta inheritance behavior, but will
allow us to later store the actual concrete provider used by a resource,
so that it can be re-connected if it's orphaned by removing its module
configuration.
A missing provider alias should not be implicitly added to the graph.
Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.
Here we add a context test to verify correct behavior in the presence
of such functions.
There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
We previously didn't compare values but had a TODO to start doing so,
which we then recently did. Unfortunately it turns out that we _depend_
on not comparing values here, because when we use EvalCompareDiff (a key
user of Diff.Same) we pass in a diff made from a fresh re-interpolation
of the configuration and so any non-pure function results (timestamp,
uuid) have produced different values.
We can remove the AllowAny option which is no longer used, and providers
don't need to be connected to their resources at this stage, since that
will happen in the ProviderTransformer.
Simplify the MissingProviderTransformer so that it only adds missing
providers at the root level. There's no need for the multiple providers
added at every level of the path.
ParentProviderTransformer then only needs to connect providers with the
equivalent type at the root level.
The grandchild missing test has a provider declared in a child
module which is missing in a grandchild module. Verify that the
grandchild gets connected to the child provider, and they all are
connected to the root providers.
Update some test outputs to match the expected behavior of only adding
missing providers at the root level.
The CloseProviderTransformer requires that the resources be connected
to their provider first, so that it can get the correct dependencies,
and adding the ProviderTransformer changed the test output slightly.
This turned out to be a big messy commit, since the way providers are
referenced is tightly coupled throughout the code. This starts to unify
how providers are referenced, using the format output by the node Name
method.
Add a new field to the internal resource data types called
ResolvedProvider. This is set by a new setter method SetProvider when a
resource is connected to a provider during graph creation. This allows
us to later lookup the provider instance a resource is connected to,
without requiring it to have the same module path.
The InitProvider context method now takes two arguments: the first is the
provider type and the second is the full name of the provider. While the
provider type could still be parsed from the full name, this makes it
more explicit, and changes to the name format won't affect this
code.
The first step in only using the required provider nodes in a graph is
to be able to specifically add them from the configuration.
The MissingProviderTransformer was previously responsible for adding
all providers. Now it is really just adding any that are missing from
the config.
Needed to add more cases to support value comparison exceptions that the
rest of TF expects to work (this fixes tests in various places).
Also moved things to a switch block so that it's a little more compact.
A diff now needs to pass basic value checks to be considered the
"same". Several provisions have been added to ensure that the list, set,
and RequiresNew behaviours that have needed some exceptions in the past
are preserved in this new logic.
This ensures that we are checking for value equality as much as
possible, which will be more important when we transition to the
possibility of diffs being sourced from external data.
While merging the cached Input configs in the correct order prevents
overwriting existing config values, it doesn't prevent an earlier
provider from inserting unwanted values into later provider
configurations.
Diff the key-values returned by Input with the pre-input config, and
store only the "answers" that were added during the Input call.
Always call Input, even if we already have some values, since a
previously cached config may not be complete.
Previously when looking up cached provider input, the Input was taken in
its entirety, and only provider configuration fields that weren't in the
saved input were added. This would cause providers in modules to use the
entire configuration from parent modules, even if they themselves had
entirely different configs.
Note: this is only marginally better than the old behavior. It may be
slightly more correct, but still can't account for the user's intent, and
may be adding configured values from one provider into another.
Change the PathCacheKey to just join the path on a non-path character
(|), which makes for easier debugging.
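That is, roughly:

    // PathCacheKey returns a stable, readable key for a module path;
    // "|" is safe because it cannot occur within a path element.
    func PathCacheKey(path []string) string {
        return strings.Join(path, "|")
    }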
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.
First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.
The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.
Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.
The remaining tests have questions on how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.
All disabled code was commented out in this commit to record any
additional comments. The following commit will be a cleanup pass.
Update all references to the version values to use the new package.
The VersionString function was left in the terraform package
specifically for the aws provider, which is vendored. We can remove that
last call once the provider is updated.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
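The new method on the interface, in sketch form:

    type ResourceProvider interface {
        // ...existing methods...

        // GetSchema returns the schema for the provider configuration
        // itself and for the requested resource types and data sources.
        GetSchema(*ProviderSchemaRequest) (*ProviderSchema, error)
    }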
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
When working on an existing plan, the context always used walkApply,
even if the plan was for a full destroy. Mark in the plan if it was
icreated for a destroy, and transfer that to the context when reading
the plan.
A Targeted graph may include outputs that were transitively included,
but if they are missing any dependencies they will fail to interpolate
later on.
Prune any outputs in the TargetsTransformer that have missing
dependencies, and are not depended on by any resource. This will
maintain the existing behavior of outputs failing silently in most
cases, but allow errors to be surfaced where the output value is
required.
Module outputs may not have complete information during Input, because
it happens before refresh. Continue processing on output interpolation
errors during the Input walk.
Remove the Input flag threaded through the input graph creation process
to prevent interpolation failures on module variables.
Use an EvalOpFilter instead to insert the correct EvalNode during
walkInput. Remove the EvalTryInterpolate type, and use the same
ContinueOnErr flag as the output node for consistency and to try and
keep the number of possible eval node types down.
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
The fact that we clean up data source state by applying a "destroy" action
for them is an implementation detail, and so should not be visible to
outside callers or to the user.
Signalling these as real destroys creates confusion for users because
they see Terraform say things like:
    data.template_file.foo: Refreshing state...
...which, to an understandably-nervous sysadmin, might make them suspect
that the underlying object was deleted, rather than just Terraform's
record of it.
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.
This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
  in the UI. This applies all the relevant logic to transform the
  physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
  plan object.
For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.
Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.
This introduces to core a new diff action type for "refresh". Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.
The implementation of ResourceAddress.Less was flawed because it was only
testing each field in the "less than" direction, and falling through in
cases where an earlier field compared greater than a later one.
Now we test for inequality first as the selector, and only fall through
if the two values for a given field are equal.
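The corrected pattern tests each field for inequality first, sketched
here with a subset of the fields:

    switch {
    case a.Type != b.Type:
        return a.Type < b.Type
    case a.Name != b.Name:
        return a.Name < b.Name
    case a.Index != b.Index:
        return a.Index < b.Index
    }
    return false // equal addresses are not less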
There is some additional, early validation on the "count" meta-argument
that verifies that only suitable variable types are used, and adding local
values to this whitelist was missed in the initial implementation.
It seems that this somehow got lost in the commit/rebase shuffle and
wasn't caught by the tests that _did_ make it because they were all using
just one file.
As a result of this bug, locals would fail to work correctly in any
configuration with more than one .tf file.
Along with restoring the append/merge behavior, this also reworks some of
the tests to exercise the multi-file case as better insurance against
regressions of this sort in future.
This fixes #15969.
Previously we were checking required_version only during "real"
operations, and not during initialization. Catching it during init is
better because that's the first command users run on a new working
directory.
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
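For example, in a hypothetical helper:

    func assertNoErr(t *testing.T, err error) {
        t.Helper() // failures now blame the caller's line, not this one
        if err != nil {
            t.Fatalf("unexpected error: %s", err)
        }
    }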
The shadow graph was incredibly useful during the 0.7 cycle but these days
it is idle, since we're not planning any significant graph-related changes
for the foreseeable future.
The shadow graph infrastructure is somewhat burdensome since any change
to the ResourceProvider interface must have shims written. Since we _are_
expecting changes to the ResourceProvider interface in the next few
releases, I'm calling "YAGNI" on the shadow graph support to reduce our
maintenance burden.
If we do end up wanting to use shadow graph again in future, we'll always
be able to pull it out of version control and then make whatever changes
we skipped making in the mean time, but we can avoid that cost in the
mean time while we don't have any evidence that we'll need to pay it.
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.
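Concretely, this is just a field hidden from the JSON encoder, roughly:

    type ModuleState struct {
        // ...existing persisted fields...

        // Locals are trivially recomputed on each run, so they are
        // deliberately never written to the state file.
        Locals map[string]interface{} `json:"-"`
    }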
From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
A local value is similar to an output in that it exists only within state
and just always evaluates its value as best it can with the current state.
Therefore it has a single graph node type for all walks, which will
deal with that evaluation operation.
Allow module variables to fail interpolation during input. This is OK
since they will be verified again during Plan. Because Input happens
before Refresh, module variable interpolation can fail when referencing
values that aren't yet in the state, but are expected after Refresh.
Forward-port the plan state check from the 0.9 series.
0.10 has improved the serial handling for the state, so this adds
relevant comments and some more test coverage for the case of an
incrementing serial during apply.
The state returned from the testState helper shouldn't rely on any
mutations caused by WriteState. The Init function (which is analogous to
NewState) should set any required fields.
Previously we relied on a constellation of coincidences for everything to
work out correctly with state serials. In particular, callers needed to
be very careful about mutating states (or not) because many different bits
of code shared pointers to the same objects.
Here we move to a model where all of the state managers always use
distinct instances of state, copied when WriteState is called. This means
that they are truly a snapshot of the state as it was at that call, even
if the caller goes on mutating the state that was passed.
We also adjust the handling of serials so that the state managers ignore
any serials in incoming states and instead just treat each Persist as
the next version after what was most recently Refreshed.
(An exception exists for when nothing has been refreshed, e.g. because
we are writing a state to a location for the first time. In that case
we _do_ trust the caller, since the given state is either a new state
or it's a copy of something we're migrating from elsewhere with its
state and lineage intact.)
The intent here is to allow the rest of Terraform to not worry about
serials and state identity, and instead just treat the state as a mutable
structure. We'll just snapshot it occasionally, when WriteState is called,
and deal with serials _only_ at persist time.
This is intended as a more robust version of #15423, which was a quick
hotfix to an issue that resulted from our previous sloppy handling
of state serials but arguably makes the problem worse by depending on
an additional coincidental behavior of the local backend's apply
implementation.
Skips checksum validation if the `TF_SKIP_PROVIDER_VERIFY` environment
variable is set. Undocumented variable, as the primary goal is to
significantly improve the local provider development workflow.
When the InstanceState.Meta fields are marshaled, numeric values may
change types. The timeout system currently inserts integer values, which
will be unmarshaled as float64s.
To ensure that a state which has round-tripped through json is equal to
itself, compare the json representation of the Meta values.
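That is, the comparison is made on the serialized form, roughly:

    // Marshal both Meta maps so that e.g. int(42) inserted by the
    // timeout system and float64(42) from a JSON round trip compare
    // as equal.
    am, _ := json.Marshal(a.Meta)
    bm, _ := json.Marshal(b.Meta)
    metaEqual := bytes.Equal(am, bm)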
Added a new test that ensures that pre/post-diff hooks are not called
when EvalDiff is run with Stub set, tested through a full refresh run.
This helps test the expected behaviour of EvalDiff itself, versus the
end result of the diff being counted in a plan, which is what the
TestLocal_planScaleOutNoDupeCount test in backend/local checks.
Rather than overloading InstanceDiff with a "Stub" attribute that is
going to be largely meaningless, we are just going to skip
pre/post-diff hooks altogether. This is under the notion that we will
eventually not need to "stub" a diff for scale-out, stateless nodes on
refresh at all, so diff behaviour won't be necessary at that point, so
we should not assume that hooks will run at this stage anyway.
Also as part of this removed the CountHook test that is now failing
because CountHook is out of scope of the new behaviour.
This should make things a bit more clear as to what we are doing in the
EvalTree scale-out test - ensuring that we get the correct eval sequence
for a node with no state through EvalTree.
During plan and apply, because the provider constraints need to be built
from a plan, they are not checked until the terraform.Context is
created. Since the context is always requested by the backend during the
Operation, the backend needs to be responsible for generating contextual
error messages for the user.
Instead of formatting the ResolveProviders errors during NewContext,
return a special error type, ResourceProviderError to signal that
init will be required. The backend can then extract and format the
errors.
This is a specialized thin wrapper around parseResourceAddressInternal
that can be used to obtain a ResourceAddress from the keys in
ModuleDiff.Resources.
This is not something we'd ideally expose, but since the internal address
format is already exposed in the ModuleDiff object this ends up being
necessary to process the ModuleDiff from other packages, e.g. for
display in the UI.
Lexicographic sorting by the string form produces the wrong result because
[9] sorts after [10], so this custom comparison function takes that into
account and compares each portion separately to get a more intuitive
result.
This transformer is no longer needed, as we are not transforming
scale-out resource nodes into plannable nodes anymore, but rather just
taking a different eval sequence for resource refresh nodes with no
state.
This test ensures that the right EvalSequence gets set for a refresh
node with no state. This will ultimately assert that nodes on scale out
will not go down the regular refresh path, which would result in an
error due to the nil state - instead, we stub this node so that we get a
diff on it that can be used to effect computed/unknown values on
interpolations that may depend on this node.
Since the transformer that changed stateless nodes in refresh to
NodePlannableResourceInstance is not being used anymore, this test
needed to be adjusted to ensure that the right output was expected.
Changed the language of this field to indicate that this diff is not a
"real" diff, in that it should not be acted on, versus a "quiet" mode,
which would indicate just simply to act silently.
This fixes a bug with the new refresh graph behaviour where a resource
was being counted twice in the UI as part of being scaled out:
* We are no longer transforming refresh nodes without state to
  plannable resources (the transformer will be removed shortly)
* A Quiet flag has been added to EvalDiff and InstanceDiff - this
  allows for the flagging of a diff that should not be treated as a real
  diff for purposes of planning
* When there is no state for a refresh node now, a new path is taken
  that is similar to plan, but flags Quiet, and does nothing with the
  diff afterwards.
Tests pending - light testing has confirmed this should fix the double
count issue, but we should have some tests to actually confirm the bug.
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.
This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.
We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.
Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded to separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in mode where indices are
ignored, with the work then completed by the DynamicExpand method which
re-applies the TargetsTransformer in index-sensitive mode.
This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
This is similar in purpose to Equals but it takes a hierarchical approach
where modules contain their child modules, resources are contained by
their modules, and indexed resource instances are contained by their
resource names.
Unlike "Equals", Contains is intended to be transitive, so if A contains B
and B contains C, then C necessarily contains A. It is also directional:
if A contains B then B does not also contain A unless A and B are
identical. This results in more intuitive behavior for use-cases where
the goal is to select a portion of the address space for an operation.
As part of our terminology shift, the interpolation variable for the name
of the current workspace changes to terraform.workspace. The old name
continues to be supported for compatibility.
We can't generate a deprecation warning from here so for now we'll just
silently accept terraform.env as an alias, but not mention it at all in
the error message in the hope that its use phases out over time before we
actually remove it.
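A minimal sketch of how such a silent alias can be handled, with
illustrative names rather than the real interpolation code:

```go
import "fmt"

// terraformVar resolves "terraform.<field>" interpolation variables.
func terraformVar(field, workspaceName string) (string, error) {
	switch field {
	case "workspace", "env": // "env" is accepted silently for compatibility
		return workspaceName, nil
	default:
		// Mention only "workspace" so that use of "env" phases out.
		return "", fmt.Errorf(`invalid variable "terraform.%s": only "workspace" is supported`, field)
	}
}
```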
The information stored in a plan is tightly coupled to the Terraform core
and provider plugins that were used to create it, since we have no
mechanism to "upgrade" a plan to reflect schema changes and so mismatching
versions are likely to lead to the "diffs didn't match during apply"
error.
To allow us to catch this early and return an error message that _doesn't_
say it's a bug in Terraform, we'll remember the Terraform version and
plugin binaries that created a particular plan and then require that
those match when loading the plan in order to apply it.
The planFormatVersion is increased here so that plan files produced by
earlier Terraform versions _without_ this information won't be accepted
by this new version, and also that older versions won't try to process
plans created by newer versions.
When set, this information gets passed on to the provider resolver as
part of the requirements information, causing us to reject any plugins
that do not match during initialization.
Previously one of the errors had a built-in context message and the other
did not, making it hard for callers to present a user-friendly message
in both cases.
Now we generate an error message of the same form in both cases, with one
case providing additional information. Ideally the main case would be
able to give more specific guidance too, but that's hard to achieve with
the current regexp-based parsing implementation.
This is a useful building block for filtering configuration based on a
resource address. It is similar in principle to state filtering, but for
specific resource configuration blocks.
This allows growing the scope of a resource address to include all of the
resources in the same module as the targeted resource. This is useful to
give context in error messages.
The resource address documentation defines a resource address as being in
two parts: the module path and the resource spec. The resource spec can
be omitted, which represents addressing _all_ resources in a module.
In some cases (such as import) it doesn't make sense to address an entire
module, so this helper makes it easy for validation code to check for
this to reject insufficiently-specific resource addresses.
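A sketch of the kind of check this enables, with illustrative names
(this assumes the address type implements fmt.Stringer):

```go
import "fmt"

// A resource spec is present only when a type and name were given.
func (addr *ResourceAddress) HasResourceSpec() bool {
	return addr.Type != "" && addr.Name != ""
}

// Commands like "import" can then reject whole-module addresses early.
func checkImportAddress(addr *ResourceAddress) error {
	if !addr.HasResourceSpec() {
		return fmt.Errorf("%s is a module address: import requires a specific resource address", addr)
	}
	return nil
}
```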
We're going to use config to determine provider dependencies, so we need
to always provide a config when instantiating a context or we'll end up
loading no providers at all.
We previously had a test for running "terraform import -config=''" to
disable the config entirely, but this test is now removed because it makes
no sense. The actual functionality it tests still remains for now,
but it will be removed in a subsequent commit when we start requiring that
a resource to be imported already exists in configuration.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously the set of providers was fixed early on in the command package
processing. In order to be version-aware we need to defer this work until
later, so this interface exists so we can hold on to the possibly-many
versions of plugins we have available and then later, once we've finished
determining the provider dependencies, select the appropriate version of
each provider to produce the final set of providers to use.
This commit establishes the use of this new mechanism, and thus populates
the provider factory map with only the providers that result from the
dependency resolution process.
This disables support for internal provider plugins, though the
mechanisms for building and launching these are still here vestigially,
to be cleaned up in a subsequent commit.
This also adds a new awkward quirk to the "terraform import" workflow
where one can't import a resource from a provider that isn't already
mentioned (implicitly or explicitly) in config. We will do some UX work
in subsequent commits to make this behavior better.
This breaks many tests due to the change in interface, but to keep this
particular diff reasonably easy to read the test fixes are split into
a separate commit.
ResourceProviderResolver is an extra level of indirection before we
get to a map[string]ResourceProviderFactory, which accepts a map of
version constraints and uses it to choose from potentially-many available
versions of each provider to produce a single ResourceProviderFactory
for each one requested.
As of this commit the ResourceProviderResolver interface is not used. In
a future commit the ContextOpts.Providers map will be replaced with a
resolver instance, with the creation of the factory delayed until the
version constraints have been resolved.
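The shape of the indirection, sketched with a simplified constraint type
(the real signature may differ):

```go
// VersionConstraints stands in for whatever constraint representation
// the dependency resolver produces; it is illustrative only.
type VersionConstraints string

type ResourceProviderResolver interface {
	// ResolveProviders takes the version constraints for each required
	// provider and chooses a single factory per provider from the
	// potentially-many available versions.
	ResolveProviders(reqd map[string]VersionConstraints) (map[string]ResourceProviderFactory, []error)
}
```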
Previously the Type of a ResourceState was generally ignored, but we're
now starting to use it to figure out which providers are needed to
support the resources in state so our tests need to set it accurately
in order to get the expected result.
This new private function takes a configuration tree and a state structure
and finds all of the explicit and implied provider dependencies
represented, returning them as a moduledeps.Module tree structure.
It annotates each dependency with a "reason", which is intended to be
useful to a user trying to figure out where a particular dependency is
coming from, though we don't yet have any UI to view this.
Nothing calls this yet, but a subsequent commit will use the result of
this to produce a constraint-conforming map of provider factories during
context initialization.
Previously the logic for inferring a provider type from a resource name
was buried in a utility function in the 'terraform' package. Here we lift
it up into the 'config' package, where we can make broader use of it
and where it's easier to discover.
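The rule itself is simple: the provider name is the resource type's
prefix up to the first underscore. A sketch, with an illustrative
function name:

```go
import "strings"

// providerForResource infers a provider type from a resource type name,
// e.g. "aws_instance" belongs to the "aws" provider.
func providerForResource(resourceType string) string {
	if idx := strings.Index(resourceType, "_"); idx != -1 {
		return resourceType[:idx]
	}
	// No underscore: the resource type name is the provider name.
	return resourceType
}
```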
* core: Add 'UserAgentString' helper function to generate a standard UserAgent string. Example generation: 'Terraform 0.9.7-dev (go1.8.1)'
* provider/openstack: Add Terraform version to UserAgent string
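A sketch of the helper consistent with the example output above, assuming
the existing `VersionString()` accessor:

```go
import (
	"fmt"
	"runtime"
)

// UserAgentString produces e.g. "Terraform 0.9.7-dev (go1.8.1)".
func UserAgentString() string {
	return fmt.Sprintf("Terraform %s (%s)", VersionString(), runtime.Version())
}
```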
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.
In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.
In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.
Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.
This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.
This leaves us with two different scenarios:
- For resource arguments, existing normalization code in helper/schema
does its own flattening that preserves compatibility with the common
practice of using bracketed splats. This change proves this with a test
within the "test" provider that exercises the whole Terraform core and
helper/schema stack that assigns bracketed splats to list and set
attributes.
- For arguments in other blocks, such as in module callsites, the
interpolator's own flattening behavior applies to known lists,
preserving compatibility with configurations from before
partially-computed splats were possible, but those wishing to use
partially-computed splats are required to drop the surrounding brackets.
This is less concerning because this scenario was introduced only in
0.9.5, so the scope for breakage is limited to those who adopted this
new feature quickly after upgrading.
As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
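For example, in a module block the two forms look like this (attribute
name illustrative):

```
# Legacy form: the brackets trigger one level of list flattening.
instance_ids = ["${aws_instance.example.*.id}"]

# Preferred form: the splat already produces a list, and this form also
# works when some of the list members are not yet known.
instance_ids = "${aws_instance.example.*.id}"
```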
This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
Instead of using a hardcoded version prerelease string, which makes release automation difficult, set the version prerelease string from an environment variable via the go linker tool during compile time.
The environment variable `TF_RELEASE` should only be set via the `make bin` target, which leaves the version prerelease string unset. Otherwise, when running a local compile of terraform via the `make dev` makefile target, the version prerelease string is set to `"dev"`, as usual.
This also requires some changes to both the circonus and postgresql providers, as they directly used the `VersionPrerelease` constant. We now simply call the `VersionString()` function, which returns the proper interpolated version string with the prerelease string populated correctly.
`TF_RELEASE` is unset:
```sh
$ make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:38:19 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3
--> linux/amd64: github.com/hashicorp/terraform
==> Results:
total 209M
-rwxr-xr-x 1 jake jake 209M May 22 10:39 terraform
$ terraform version
Terraform v0.9.6-dev (fd472e4a86500606b03c314f70d11f2bc4bc84e5+CHANGES)
```
`TF_RELEASE` is set (mimicking the `make bin` target):
```sh
$ TF_RELEASE=1 make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:40:39 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3
--> linux/amd64: github.com/hashicorp/terraform
==> Results:
total 121M
-rwxr-xr-x 1 jake jake 121M May 22 10:42 terraform
$ terraform version
Terraform v0.9.6
```
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.
Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.
This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.
This fixes #14438.
Currently, the refresh graph uses the resources from state as a base,
with data sources then layered on. Config is not consulted for resources
and hence new resources that are added with count (or any new resource
from config, for that matter) do not get added to the graph during
refresh.
This is leading to issues with scale-in and scale-out when the same
value for count is used in both resources and in data sources that may
depend on those resources (and possibly vice versa). While the resources
exist in config and can be used, the fact that the ConfigTransformer for
resources is missing means they don't get added into the graph,
leading to "index out of range" errors and the like.
Further to that, if we add these new resources to the graph for scale-out,
considerations need to be taken for scale-in as well, which are not
being caught 100% by the current implementation of
NodeRefreshableDataResource. Scale-in resources should be treated as
orphans, which, according to the instance-form NodeRefreshableResource
node, should be NodeDestroyableDataResource nodes, but this logic
is currently not rolled into NodeRefreshableDataResource. This causes
issues on scale-in in the form of race-ish "index out of range" errors
again.
This commit updates the refresh graph so that StateTransformer is no
longer used as the base of the graph. Instead, we add resources from the
state and config in a hybrid fashion:
* First off, resource nodes are added from config, but only if
resources currently exist in state. NodeRefreshableManagedResource
is a new expandable resource node that will expand count and add
orphans from state. Any count-expanded node that has config but no
state is also transformed into a plannable resource, via a new
ResourceRefreshPlannableTransformer.
* The NodeRefreshableDataResource node type will now add count orphans
as NodeDestroyableDataResource nodes. This achieves the same effect
as if the data sources were added by StateTransformer, but ensures
there are no races in the dependency chain, with the added benefit of
directing these nodes straight to the proper
NodeDestroyableDataResource node.
* Finally, config orphans (nodes that don't exist in config anymore
period) are then added, to complete the graph.
This should ensure as much as possible that there is a refresh graph
that best represents both the current state and config with updated
variables and counts.
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.
However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.
GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.
This is different than the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.
This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.
This fixes #14186.
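The interface is roughly as follows; the set type is sketched here and
may not match the real dag package exactly:

```go
import "github.com/hashicorp/terraform/dag"

type GraphNodeTargetDownstream interface {
	// TargetDownstream is given the sets of this node's targeted and
	// untargeted direct dependencies, and returns true if the node
	// should be considered targeted as well.
	TargetDownstream(targeted, untargeted *dag.Set) bool
}
```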
This was actually redundant anyway since HIL itself applied a similar
rule where any partially-unknown list would be automatically flattened
to a single unknown value.
However, now we're changing HIL to explicitly permit partially-unknown
lists so that we can allow the index operator [...] to succeed when
applied to one of the elements that _is_ known.
This, in conjunction with hashicorp/hil#51 and hashicorp/hil#52,
fixes #3449.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
Moving the transformer wholesale looks like it broke some tests, with
some actually doing legit work in normalizing singular resources from a
foo.0 notation to just foo.
Adjusted the TestPlanGraphBuilder to account for the extra
meta.count-boundary nodes in the graph output now, as well as added
another context test that tests this case. It appears the issue happens
during validate, as this is where the state can be altered to a broken
state if things are not properly transformed in the plan graph.
This fixes interpolation issues on grandchild data sources that have
multiple instances (ie: counts). For example, baz depends on bar, which
depends on foo.
In this instance, after an initial TF run is done and state is saved,
the next refresh/plan is not properly transformed, and instead of the
graph/state coming through as data.x.bar.0, it comes through as
data.x.bar. This breaks interpolations that rely on splat operators -
ie: data.x.bar.*.out.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages over the solution currently implemented. I will elaborate on this in the PR.
The documentation for Refresh indicates that it will always return a
valid state, but that wasn't true in the case of a graph builder error.
While this same concept wasn't documented for Apply, it was still
assumed in the terraform apply code.
Since the helper testing framework relies on the absence of a state to
determine if it can call Destroy, the Context can't start
returning a state in all cases. Document this, and use the State method
to fetch the correct state value after Apply.
Add a nil check to the WriteState function, so that writing a nil state
is a noop.
Make sure to init before sorting the state, so that we're not
attempting to sort nil values. This isn't technically needed with the
current code, but it's just safer in general.
Make sure duplicate depends_on entries are pruned from existing states
on read.
Make sure new state built from configs with multiple references to the
same resource only add it once to the Dependencies.
Duplicate entries could end up in "depends_on" in the state, which could
possibly lead to erroneous state comparisons. Remove them when walking
the graph, and remove existing duplicates when pruning the state.
Previously this function was depending on the mapstructure behavior of
failing with an error when trying to decode a map into a list or
vice-versa, but mapstructure's WeakDecode behavior changed so that it
will go to greater lengths to coerce the given value to fit into the
target type, causing us to mis-handle certain ambiguous cases.
Here we exert a bit more control over what's going on by using 'reflect'
to first check whether we have a slice or map value and only then try
to decode into one with mapstructure. This allows us to still rely on
mapstructure's ability to decode nested structures but ensure that lists
and maps never get implicitly converted to each other.
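A sketch of the approach, with an illustrative function name (WeakDecode
is the real mapstructure entry point):

```go
import (
	"fmt"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

// decodeListOrMap checks the concrete kind up front, then lets
// mapstructure handle the nested decoding.
func decodeListOrMap(raw interface{}) (interface{}, error) {
	switch reflect.ValueOf(raw).Kind() {
	case reflect.Slice:
		var out []interface{}
		err := mapstructure.WeakDecode(raw, &out)
		return out, err
	case reflect.Map:
		var out map[string]interface{}
		err := mapstructure.WeakDecode(raw, &out)
		return out, err
	default:
		return nil, fmt.Errorf("value must be a list or a map, got %T", raw)
	}
}
```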
Since the validation of connection blocks is delegated to the communicator
selected by "type", we were not previously doing any validation of the
attribute names in these blocks until running provisioners during apply.
Proper validation here requires us to already have the instance state,
since the final connection info is a merge of values provided in config
with values assigned automatically by the resource. However, we can do
some basic name validation to catch typos during the validation pass, even
though semantic validation and checking for missing attributes will still
wait until the provisioner is instantiated.
This fixes#6582 as much as we reasonably can.
This previously lacked tests altogether. This new test verifies the
"happy path", ensuring that both literal and computed values pass through
correctly into the VariableValues map.
This crash resulted because the type switch checked for either of two
types but the type assertion within it assumed only one of them.
A straightforward (if inelegant) fix is to simply duplicate the relevant
case block and change the type assertion, thus allowing the types to match
up in all cases.
This fixes #13297.
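The shape of the bug, reduced to a standalone example with illustrative
types:

```go
type A struct{}
type B struct{}

// Buggy: a multi-type case leaves the value as interface{}, and the
// single assertion inside panics whenever the other type shows up.
func buggy(val interface{}) {
	switch val.(type) {
	case *A, *B:
		a := val.(*A) // panics when val is actually a *B
		_ = a
	}
}

// Fixed: duplicate the case block so each assertion matches its type.
func fixed(val interface{}) {
	switch v := val.(type) {
	case *A:
		_ = v // v is a *A here
	case *B:
		_ = v // v is a *B here
	}
}
```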
During the input walk we stash the values resulting from user input
(if any) in the eval context for use when later walks need to resolve
the provider config.
However, this repository of input results is only able to represent
literal values, since it does not retain the record of which of the keys
have values that are "computed".
Previously we were blindly stashing all of the results, failing to
consider that some of them might be computed. That resulted in the
UnknownValue placeholder being misinterpreted as a literal value when
the data is used later, which ultimately resulted in it clobbering the
actual expression evaluation result and thus causing the provider to
fail to configure itself.
Now we are careful to only retain in this repository the keys whose values
are known statically during the input phase. This eventually gets merged
with the dynamic evaluation results on subsequent walks, with the dynamic
keys left untouched due to their absence from the stored input map.
This fixes #11264.
This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config vs. that of a backend read
from a state. This will prevent init from reinitializing when using
`-backend-config` options that match the existing state.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
It appears there are no tests for this as far as I can find.
We change V1 states (very old) to assume a nil path is a root path.
State.Validate() later will catch any duplicate paths.
When transforming a diff from DestroyCreate to a simple Update,
ignore_changes can cause keys from flatmapped objects to be filtered
from the diff. We need to filter each flatmapped container as a whole to
ensure that unchanged keys aren't lost in the update.
ignore_changes is causing changes in other flatmapped sets to be
filtered out incorrectly.
This required fixing the testDiffFn to create diffs which include the
old value, breaking one other test.
Fixes #12836
Realistically, these should be caught during validation anyway. In this
case, this was causing #12836 because refresh with a data source will
attempt to use module variables. I don't see any clear logic for pruning
those module variables or not adding them, so it's easier to return unknown
to cause the data to be computed and not run.
The terraform package doesn't know about TestProvider, so don't put the
hooks in terraform.MockResourceProvider. Wrap the mock in the test where
we need to check the TestProvider functionality.
Always wait for watchStop to return during context.walk.
Context.walk would often complete immediately after sending the close
signal to watchStop, which would in turn call the deferred releaseRun
cancelling the runContext.
Without any synchronization points after the select statement in
watchStop, that goroutine was not guaranteed to be scheduled
immediately, and in fact it often didn't continue until after the
runContext was canceled. This in turn left the select statement with
multiple successful cases, and half the time it would choose to Stop the
providers.
Stopping the providers after the walk of course didn't cause any
immediate failures, but if there was another walk performed, the
provider StopContext would no longer be valid and could cause
cancellation errors in the provider.
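The fix amounts to adding an explicit synchronization point; sketched
here with illustrative names and method shapes:

```go
// walkWithStop ensures watchStop has fully returned before the run
// context is released, so its select can never race the cancellation.
func (c *Context) walkWithStop(walk func()) {
	stop := make(chan struct{})
	done := make(chan struct{})

	go func() {
		defer close(done)
		c.watchStop(stop) // selects between stop and incoming Stop calls
	}()

	walk()

	close(stop) // signal that the walk is finished
	<-done      // wait for watchStop to return before releasing the run
}
```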
Starting with Go 1.8 betas, we've periodically received SIGQUITs on our
tests in Travis. The stack trace looks like this:
https://gist.github.com/mitchellh/abf09b0980f8ea01269f8d9d6133884d
The tests are timing out! This is a test that hasn't been touched really
in a very long time and has always passed. I've **reproduced this
locally** by setting `GOMAXPROCS=1` and running the test. By yielding
the scheduler in the hot loop, it now passes almost instantly every
time.
Perhaps the test can be written in a different way, but this gets tests
passing and I think will fix our periodic errors.
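The change is essentially a yield in the hot loop; a sketch, where done()
stands in for the test's real exit condition:

```go
import "runtime"

func waitUntil(done func() bool) {
	for !done() {
		// Yield so other goroutines can be scheduled under GOMAXPROCS=1.
		runtime.Gosched()
	}
}
```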
A couple of interpolation tests were using invalid state that didn't match
the config. These will still pass, but were flushed out by an attempt to
make this an error. The repl, however, still requires interpolation
without a config, and tests there will provide an indication if this
behavior changes.
It turns out that a few use cases depend on not finding a resource
without an error.
The other code paths had sufficient nil checks for this, but there was
one place where we called Count() that needed to be checked. If the
existence of the resource matters, it would be caught at a higher level
and still return an "unknown resource" error to the user.
Module resources were being sorted lexically by name by the state filter.
If there are 10 or more resources, the order won't match the index
order, and resources will have different indexes in their new location.
Sort the FilterResults by index numerically when the names match.
Clean up the module String output for visual inspection by sorting
Resource name parts numerically when they are an integer value.
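A sketch of the comparison: parse name parts as integers when possible
and fall back to lexical ordering otherwise:

```go
import (
	"sort"
	"strconv"
)

// sortParts orders name parts numerically when both parse as integers,
// so "9" sorts before "10"; otherwise it falls back to lexical order.
func sortParts(parts []string) {
	sort.Slice(parts, func(i, j int) bool {
		a, aErr := strconv.Atoi(parts[i])
		b, bErr := strconv.Atoi(parts[j])
		if aErr == nil && bErr == nil {
			return a < b
		}
		return parts[i] < parts[j]
	})
}
```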
Due to the change to `interface{}` we need to use `reflect.DeepEqual`
here. With the restriction to primitive types, this should always be
safe. We'll never get functions, channels, etc.
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted to this without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works, since "field" itself is not
computed even though something else in the value is. This may actually
have an effect on other parts of Terraform configs; however, those
similar configs would've simply crashed previously, so it shouldn't break
any pre-existing configs.
Fixes #10911
Outputs that aren't targeted shouldn't be included in the graph.
This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
During backend initialization, especially during a migration, there is a
chance that an existing state could be overwritten.
Attempt to get a lock when writing the new state. It would be nice to
always have a lock when reading the states, but the recursive structure
of the Meta.Backend config functions makes that quite complex.
Fixes #11749
I'm **really** surprised this didn't come up earlier.
When only the state is available for a node, the advertised
referenceable name (the name used for dependency connections) included
the module path. This module path is automatically prepended to the
name. This means that probably every non-root resource for state-only
operations (destroys) didn't order properly.
This fixes that by omitting the path properly.
Multiple tests added to verify both graph correctness as well as a
higher level context test.
Will backport to 0.8.x
To avoid chasing down issues like #11635 I'm proposing we disable the
shadow graph for end users now that we have merged in all the new
graphs. I've kept it around and default-on for tests so that we can use
it to test new features as we build them. I think it'll still have value
going forward, but I don't want to hold us to making it work 100% with
all of Terraform at all times.
I propose backporting this to 0-8-stable, too.
Fixes #11349
I tracked this bug back to the early 0.7 days so this has been around a
really long time. I wanted to confirm that this wasn't introduced by any
new graph changes and it appears to predate all of that. I couldn't find
a single 0.7.x release where this worked, and I didn't want to go back
to 0.6.x since it was pre-vendoring.
The test case shows the logic the best, but the basic idea is: for
collections that go to zero elements, the "RequiresNew" sameness check
should be ignored, since the new diff can choose to not have that at all
in the diff.
This adds a Meta field (similar to InstanceState.Meta) to InstanceDiff.
This allows providers to store arbitrary k/v data as part of a diff and
have it persist through to the Apply. This will be used by helper/schema
for timeout storage being done by @catsby.
The type here is `map[string]interface{}`. A couple notes:
* **Not using `string`**: The Meta field of InstanceState is a string
value. We've learned that forcing things to strings is bad. Let's
just allow types.
* **Primitives only**: Even though it is type `interface{}`, it must
be able to cleanly pass the go-plugin RPC barrier as well as be
encoded to a file as Gob. Given these constraints, the value must
consist only of primitive types and collections. No structs,
functions, channels, etc.
Read state would assume that having a reader meant there should be a
valid state. Check for an empty file and return ErrNoState to
differentiate a bad file from an empty one.
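A sketch of the check, assuming a reader-based ReadState shape;
readStateBody stands in for the pre-existing decoding logic:

```go
import (
	"bufio"
	"io"
)

func ReadState(src io.Reader) (*State, error) {
	buf := bufio.NewReader(src)

	// An empty file means no state, which is different from a bad file.
	if _, err := buf.Peek(1); err == io.EOF {
		return nil, ErrNoState
	}

	return readStateBody(buf) // the existing decode path
}
```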
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
This switches to the Go "context" package for cancellation and threads
the context through all the way to evaluation to allow behavior based on
stopping deep within graph execution.
This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
Fixes #11212
The import graph builder was missing the transform to setup links to
parent providers, so provider inheritance didn't work properly. This
adds that.
This also removes the `PruneProviderTransform` since that has no value
in this graph since we'll never add an unused provider.
This was possible with test fixtures, but it is also conceivably possible
with older states or corrupted states. We can also extract the type from
the key, so we do that now so that StateFilter is more robust.
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration. There was also an off-by-one error in
the prefix check which could match the incorrect keys.
When an InstanceState is merged with an InstanceDiff, any maps, arrays, or
sets that no longer exist are shown as empty with a count of 0. If these
are left in the flatmap structure, they will cause errors during
expansion, because their existence in the map affects the counts for
parent structures.
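The off-by-one was in the prefix matching; the essence of the fix,
sketched with an illustrative helper:

```go
import "strings"

// keysUnder returns the flatmap keys belonging to the container named by
// prefix. Matching on the bare prefix would also catch sibling keys that
// merely share it (e.g. prefix "foo" matching "foobar.x"), so the
// separator must be part of the match.
func keysUnder(m map[string]string, prefix string) []string {
	var keys []string
	for k := range m {
		if strings.HasPrefix(k, prefix+".") {
			keys = append(keys, k)
		}
	}
	return keys
}
```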
The change in #10787 used flatmap.Expand to fix interpolation of nested
maps, but it broke interpolation of sets such that their elements were
not represented. For example, the expected string representation of a
splatted aws_network_interface.whatever.*.private_ips should be:
```
[{Variable (TypeList): [{Variable (TypeString): 10.41.17.25}]} {Variable (TypeList): [{Variable (TypeString): 10.41.22.236}]}]
```
But instead it became:
```
[{Variable (TypeList): [{Variable (TypeString): }]} {Variable (TypeList): [{Variable (TypeString): }]}]
```
This is because the expandArray function of expand.go treated arrays as
exclusively lists, i.e. not sets. The old code used to match for
numeric keys, so it would work for sets, whereas expandArray just
assumed keys started at 0 and ascended incrementally. Remember that
sets' keys are numeric, but since they are hashes, they can be any
integer. The result of assuming that the keys start at 0 led to the
recursive call to flatmap.Expand not matching any keys of the set, and
returning nil, which is why the above example has nothing where the IP
addresses used to be.
So we bring back that matching behavior, but we move it to expandArray
instead. We've modified it to not reconstruct the data structures like
it used to when it was in the Interpolator, and to use the standard int
sorter rather than implementing a custom sorter since a custom one is no
longer necessary thanks to the use of flatmap.Expand.
Fixes #10908, and restores the viability of the workaround I posted in #8696.
Big thanks to @jszwedko for helping me with this fix. I was able to
diagnose the problem alone, but couldn't fix it without his help.
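The essence of the restored matching behavior, sketched with an
illustrative helper: collect whatever integer indexes actually appear
under the prefix rather than assuming 0..n-1, then expand them in sorted
order.

```go
import (
	"sort"
	"strconv"
	"strings"
)

// arrayIndexes finds the integer indexes present under prefix in a
// flatmap. Set element keys are hashes, so they can be any integer.
func arrayIndexes(m map[string]string, prefix string) []int {
	seen := map[int]struct{}{}
	for k := range m {
		if !strings.HasPrefix(k, prefix+".") {
			continue
		}
		rest := k[len(prefix)+1:]
		if dot := strings.Index(rest, "."); dot != -1 {
			rest = rest[:dot]
		}
		// Non-numeric parts (like the "#" count key) are skipped.
		if idx, err := strconv.Atoi(rest); err == nil {
			seen[idx] = struct{}{}
		}
	}
	indexes := make([]int, 0, len(seen))
	for idx := range seen {
		indexes = append(indexes, idx)
	}
	sort.Ints(indexes) // the standard int sort; no custom sorter needed
	return indexes
}
```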
Fixes #10729
Destruction ordering wasn't taking into account ordering implied through
variables across module boundaries.
This is because to build the destruction ordering we create a
non-destruction graph to determine the _creation_ ordering (to properly
flip edges). This creation graph we create wasn't including module
variables. This PR adds that transform to the graph.
Fixes #10711
The `ModuleVariablesTransformer` only adds module variables in use. This
was missing module variables used by providers since we ran the provider
too late. This moves the transformer and adds a test for this.
Fixes #10680
This moves TargetsTransformer to run after the transforms that add
module variables is run. This makes targeting work across modules (test
added).
This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
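The check itself is small; a sketch, assuming the config.Resource field
name and an illustrative function name:

```go
import "github.com/hashicorp/terraform/config"

// canReadDuringRefresh reports whether a data source may be read early
// during the refresh walk. Explicit depends_on entries represent
// dependencies Terraform can't see in the config, so their presence
// defers the read until apply time.
func canReadDuringRefresh(cfg *config.Resource) bool {
	return len(cfg.DependsOn) == 0
}
```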
Fixes #4645
This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.
When you have a config like this:
```
resource "foo_type" "name" {}
provider "bar" { attr = "${foo_type.name.value}" }
resource "bar_type" "name" {}
```
Then the destruction ordering MUST be:
1. `bar_type`
2. `foo_type`
Since configuring the client for `bar_type` requires accessing data from
`foo_type`. Prior to this PR, these two would be done in parallel. This
properly pushes forward the dependency.
There are more cases I want to test but this is a basic case that is
fixed.
Fixes #8695
When a list count was computed in a multi-resource access
(foo.bar.*.list), we were returning the value as an empty string. I don't
actually know the historical reasoning for this, but it can't be correct:
we must return unknown.
When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me to further believe that returning an empty
string is probably a holdover from long ago, just to avoid crashes or
UUIDs in the plan output, and not actually the correct behavior.
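The change is essentially this, sketched with an illustrative helper
around the existing unknown sentinel in the config package:

```go
import (
	"strconv"

	"github.com/hashicorp/terraform/config"
)

// countValue returns the length of a multi-var as a string, or the
// unknown marker when the referenced list isn't fully known yet.
func countValue(known bool, n int) string {
	if !known {
		return config.UnknownVariableValue // treated as computed downstream
	}
	return strconv.Itoa(n)
}
```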
Related to #8036
We have had this behavior for a _long_ time now (since 0.7.0) but it
seems people are still periodically getting bit by it. This adds an
explicit error message that explains that this kind of override isn't
allowed anymore.
* "external" provider for gluing in external logic
This provider will become a bit of glue to help people interface external
programs with Terraform without writing a full Terraform provider.
It will be nowhere near as capable as a first-class provider, but is
intended as a light-touch way to integrate some pre-existing or custom
system into Terraform.
* Unit test for the "resourceProvider" utility function
This small function determines the dependable name of a provider for
a given resource name and optional provider alias. It's simple, but it's
a key part of how resource nodes get connected to provider nodes, so it's
worth specifying the intended behavior in the form of a test.
* Allow a provider to export a resource with the provider's name
If a provider only implements one resource of each type (managed vs. data)
then it can be reasonable for the resource names to exactly match the
provider name, if the provider name is descriptive enough for the
purpose of each resource to be obvious.
* provider/external: data source
A data source that executes a child process, expecting it to support a
particular gateway protocol, and exports its result. This can be used as
a straightforward way to retrieve data from sources that Terraform
doesn't natively support.
* website: documentation for the "external" provider
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
The correct solution to do one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to
the `count` amount in the state. This requires some fairly significant
work close to the 0.8 release date, so we can defer this to later and
adopt the 0.7.x behavior for now.
Init should only _add_ values, not remove them.
During graph execution, there are steps that expect that a state isn't
being actively pruned out from under it. Namely: writing deposed states.
Writing deposed states has no way to handle if a state changes
underneath it because the only way to uniquely identify a deposed state
is its index in the deposed array. When destroying deposed resources, we
set the value to `<nil>`. If the array is pruned before the next deposed
destroy, then the indexes have changed, and this can cause a crash.
This PR does the following (with more details below):
* `init()` no longer prunes.
* `ReadState()` always prunes before returning. I can't think of a
scenario where this is unsafe, since generally we can always START
from a pruned state; it's just causing problems to prune
mid-execution.
* Exported State APIs updated to be robust against nil ModuleStates.
Instead, I think we should adopt the following semantics for init/prune
in our structures that support it (Diff, for example). By having
consistent semantics around these functions, we can avoid this in the
future and have set expectations working with them.
* `init()` (in anything) will only ever be additive, and won't change
ordering or existing values. It won't remove values.
* `prune()` is destructive, expectedly.
* Functions on a structure must not assume a pruned structure 100% of
the time. They must be robust to handle nils. This is especially
important because in many cases values such as `Modules` in state
are exported so end users can simply modify them outside of the
exported APIs.
This PR may expose us to unknown crashes but I've tried to cover our
cases in exposed APIs by checking for nil.