This is to allow convenient testing of functions that are designed to work
directly with *terminal.Streams or the individual stream objects inside.
Because the InputStream and OutputStream APIs directly expose an *os.File,
this does some extra work to set up OS-level pipes so we can capture the
output into local buffers and make test assertions against them. The idea here
is to keep the tricky stuff we need for testing confined to the test
codepaths, so that the "real" codepaths don't end up needing to work
around abstractions that are otherwise unnecessary.
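As a rough illustration of the pipe-based approach (this is a sketch, not
the actual helper in this package; pipeStream and its return values are
made-up names), the test-only code can hand the function under test a
real *os.File backed by os.Pipe and drain the read side into a buffer:

```go
package streamstest

import (
	"bytes"
	"io"
	"os"
)

// pipeStream returns an *os.File that code under test can write to as if
// it were a real terminal stream, plus a collect function that closes the
// write side and returns everything that was written.
func pipeStream() (*os.File, func() (string, error), error) {
	r, w, err := os.Pipe()
	if err != nil {
		return nil, nil, err
	}

	var buf bytes.Buffer
	done := make(chan struct{})
	go func() {
		// Drain the read side into a local buffer so the test can make
		// assertions against it later.
		io.Copy(&buf, r)
		close(done)
	}()

	collect := func() (string, error) {
		// Closing the write side lets the copying goroutine finish.
		if err := w.Close(); err != nil {
			return "", err
		}
		<-done
		return buf.String(), nil
	}
	return w, collect, nil
}
```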
The configload package should only be responsible for locating and
loading the configuration, not for further inspecting the config
source itself. This moves the validation into the configs package.
with NestedType objects.
There are a handful of mostly cosmetic changes in this PR which likely
make the diff awkward to read; I renamed several functions to
(hopefully) clarify which funcs work with Blocks vs other types. I
also extracted some small code snippets into their own functions for
reusability.
The code that descends into attributes with NestedTypes is similar to
the block-handling code, and differs in all the ways blocks and
attributes differ: null is valid for attributes, unlike blocks which can
only be present or empty.
Add support for parsing configuration_aliases in required_providers
entries. The decoder needed to be rewritten here in order to support
the bare reference style of provider names so that they match the
usage in other locations within the configuration. The only change to
existing handling of the required_providers block is more precise error
locations in a couple of cases.
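To sketch what decoding a bare reference such as `aws.alternate` looks
like with hcl/v2: the expression is treated as a list of traversals
rather than a list of strings. This is only an approximation of the new
decoder; providerAlias and decodeConfigurationAliases are made up for
illustration.

```go
package cfgsketch

import (
	"github.com/hashicorp/hcl/v2"
)

// providerAlias is a made-up type for this sketch; the real decoder
// produces richer provider address types.
type providerAlias struct {
	LocalName string // e.g. "aws"
	Alias     string // e.g. "alternate"
}

// decodeConfigurationAliases interprets an expression like
// [aws.alternate, aws.other] as a list of bare provider references.
func decodeConfigurationAliases(expr hcl.Expression) ([]providerAlias, hcl.Diagnostics) {
	var diags hcl.Diagnostics
	exprs, moreDiags := hcl.ExprList(expr)
	diags = append(diags, moreDiags...)
	if moreDiags.HasErrors() {
		return nil, diags
	}

	var aliases []providerAlias
	for _, e := range exprs {
		traversal, travDiags := hcl.AbsTraversalForExpr(e)
		diags = append(diags, travDiags...)
		if travDiags.HasErrors() {
			continue
		}
		// A valid entry has exactly two parts: provider local name and alias.
		attr, ok := traversal[len(traversal)-1].(hcl.TraverseAttr)
		if len(traversal) != 2 || !ok {
			diags = append(diags, &hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid configuration_aliases entry",
				Detail:   "Entries must be bare references of the form <provider>.<alias>.",
				Subject:  e.Range().Ptr(),
			})
			continue
		}
		aliases = append(aliases, providerAlias{
			LocalName: traversal.RootName(),
			Alias:     attr.Name,
		})
	}
	return aliases, diags
}
```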
Errors encountered when parsing flags for apply, plan, and refresh were
being suppressed. This resulted in a generic usage error when using an
invalid `-target` flag.
This commit makes several changes to address this. First, these commands
now output the flag parse error before exiting, leaving at least some
hint about the error. You can verify this manually with something like:
terraform apply -invalid-flag
We also change how target attributes are parsed, moving the
responsibility from the flags instance to the command. This allows us to
customize the diagnostic output to be more user-friendly. The
diagnostics now look like:
```shellsession
$ terraform apply -no-color -target=foo
Error: Invalid target "foo"
Resource specification must include a resource type and name.
```
Finally, we add test coverage for both parsing of target flags, and at
the command level for successful use of resource targeting. These tests
focus on the UI output (via the change summary and refresh logs), as the
functionality of targeting is covered by the context tests in the
terraform package.
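The shape of that split, in a simplified and self-contained form (the
flagStringSlice type, parseTargets helper, and exact error wording are
made up here; the real command code uses Terraform's address parser
rather than this naive check):

```go
package main

import (
	"flag"
	"fmt"
	"io"
	"strings"
)

// flagStringSlice collects repeated -target flags; the flags layer keeps
// the raw strings and leaves validation to the command.
type flagStringSlice []string

func (v *flagStringSlice) String() string { return strings.Join(*v, ", ") }
func (v *flagStringSlice) Set(s string) error {
	*v = append(*v, s)
	return nil
}

// parseTargets stands in for command-level validation, turning each raw
// string into a friendly error when it cannot be a resource address.
func parseTargets(raw []string) ([]string, []error) {
	var targets []string
	var errs []error
	for _, r := range raw {
		if !strings.Contains(r, ".") {
			errs = append(errs, fmt.Errorf(
				"invalid target %q: resource specification must include a resource type and name", r))
			continue
		}
		targets = append(targets, r)
	}
	return targets, errs
}

func main() {
	fs := flag.NewFlagSet("apply", flag.ContinueOnError)
	fs.SetOutput(io.Discard) // surface parse errors ourselves instead of suppressing them
	var rawTargets flagStringSlice
	fs.Var(&rawTargets, "target", "limit the operation to the given resource address")
	if err := fs.Parse([]string{"-target=foo"}); err != nil {
		fmt.Println(err) // previously swallowed, leaving only a generic usage error
		return
	}
	_, errs := parseTargets(rawTargets)
	for _, err := range errs {
		fmt.Println("Error:", err)
	}
}
```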
Right now, there's a bug that if a diagnostic comes back from the
provider with an AttributePath set, but no steps in the AttributePath,
Terraform _thinks_ it's an attribute-specific diagnostic and not a
whole-resource diagnostic, but then doesn't associate it with any
specific attribute, meaning the diagnostic doesn't get associated with
the config at all.
This PR changes things to check whether there are any steps in the
AttributePath before deciding the diagnostic is attribute-specific; if
there aren't, it is treated as a whole-resource diagnostic instead.
See hashicorp/terraform-plugin-sdk#561 for more details on how this
surfaces in the wild.
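In simplified terms (the types below are stand-ins, not the real
protocol structs), the fix amounts to:

```go
package diagsketch

// attributePath and diagnostic model only the parts of the protocol
// types relevant to this fix.
type attributePath struct {
	Steps []string
}

type diagnostic struct {
	Summary   string
	Attribute *attributePath
}

// isWholeResource reports whether a diagnostic should be treated as
// applying to the whole resource rather than a specific attribute.
// Previously only the nil check existed, so a non-nil path with zero
// steps was never associated with the config at all.
func isWholeResource(d diagnostic) bool {
	if d.Attribute == nil {
		return true
	}
	// The fix: an AttributePath with no steps cannot point at a specific
	// attribute, so treat the diagnostic as whole-resource too.
	return len(d.Attribute.Steps) == 0
}
```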
plugin6 includes a `convert` package to handle conversion between the
plugin protocol and configschema, and the GRPCProviderPlugin interface
implementation for protocol v6.
The JSON plan output format includes a serialized, simplified version of
the configuration. One component of this config is a map of provider
configurations, which includes version constraints.
Until now, only version constraints specified in the provider config
blocks were exposed in the JSON plan output. This is a deprecated method
of specifying provider versions, and the recommended use of a
required_providers block resulted in the version constraints being
omitted.
This commit fixes this with two changes:
- When processing the provider configurations from a module, output the
fully-merged version constraints for the entire module, instead of any
constraints set in the provider configuration block itself;
- After all provider configurations are processed, iterate over the
required_providers entries to ensure that any configuration-less
providers are output to the JSON plan too.
No changes are necessary to the structure of the JSON plan output, so
this is effectively a semantic-level bug fix.
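The shape of the two changes, using made-up placeholder types
(moduleConfig and providerConfigOutput are not the real configs or
jsonplan types):

```go
package jsonplansketch

// moduleConfig models just enough of a module to show the two changes.
type moduleConfig struct {
	// Fully-merged constraints per provider local name, combining
	// required_providers with any legacy version arguments.
	ProviderRequirements map[string]string
	// Providers that actually have a provider "x" {} configuration block.
	ProviderConfigs map[string]struct{}
}

type providerConfigOutput struct {
	Name              string
	VersionConstraint string
}

func providerConfigsForPlan(mod moduleConfig) map[string]providerConfigOutput {
	out := make(map[string]providerConfigOutput)

	// Change 1: emit the module-wide merged constraint for configured
	// providers, not whatever was written in the block itself.
	for name := range mod.ProviderConfigs {
		out[name] = providerConfigOutput{
			Name:              name,
			VersionConstraint: mod.ProviderRequirements[name],
		}
	}

	// Change 2: ensure configuration-less providers declared in
	// required_providers still appear in the output.
	for name, constraint := range mod.ProviderRequirements {
		if _, ok := out[name]; !ok {
			out[name] = providerConfigOutput{Name: name, VersionConstraint: constraint}
		}
	}
	return out
}
```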
- rename ProposedNewObject to ProposedNew:
Now that there is an actual configschema.Object, it will be clearer if
the function names match the type they act upon.
- extract attribute-handling logic from assertPlanValid and extend
A new function, assertPlannedAttrsValid, takes the existing
functionality and extends it to validate attributes with NestedTypes.
The NestedType-specific handling is in assertPlannedObjectValid, which
is very similar to the block-handling logic, except that nulls are a
valid plan (an attribute can be null, but not a block).
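The key asymmetry, roughly, expressed with cty values (this is a sketch,
not assertPlannedObjectValid's real signature or error handling):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkPlannedNull sketches the difference described above: a planned
// null is acceptable for an attribute with a NestedType, but a block is
// only ever present or empty, never null.
func checkPlannedNull(planned cty.Value, isAttribute bool, path cty.Path) []error {
	var errs []error
	if planned.IsNull() {
		if isAttribute {
			// Null is a valid plan for a nested attribute; nothing to check.
			return nil
		}
		errs = append(errs, path.NewErrorf("planned value for block is null, but blocks may only be present or empty"))
	}
	return errs
}

func main() {
	errs := checkPlannedNull(cty.NullVal(cty.EmptyObject), false, cty.Path{})
	for _, err := range errs {
		fmt.Println(err)
	}
}
```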
This commit adds a new field, NestedType, to the Attribute schema, and
extends the current Attribute decoderSpec to account for the new type.
The codepaths are mostly unused and are included in a separate commit
to verify that these changes do not yet impact any other tests.
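Approximately, the extended schema has the following shape (a paraphrase
of the commit's description, not the exact configschema source):

```go
package schemasketch

import "github.com/zclconf/go-cty/cty"

// NestingMode mirrors the nesting modes already used for blocks
// (single, list, set, map); the exact enum lives in configschema.
type NestingMode int

// Object describes a collection of nested attributes and how they nest.
type Object struct {
	Attributes map[string]*Attribute
	Nesting    NestingMode
}

// Attribute gains NestedType alongside the existing Type; a decoder spec
// for an Attribute with a NestedType must descend into the nested
// attributes rather than decoding a single flat value.
type Attribute struct {
	// Type is used for ordinary attributes; only one of Type or
	// NestedType is expected to be set.
	Type cty.Type

	// NestedType, when set, describes the structurally-typed nested
	// attributes introduced in plugin protocol v6.
	NestedType *Object

	Required  bool
	Optional  bool
	Computed  bool
	Sensitive bool
}
```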
This is the first commit for plugin protocol v6. This is currently
unused (dead) code; future commits will add the necessary conversion
packages, extend configschema, and modify the providers.Interface.
The new plugin protocol includes the following changes:
- A new field has been added to Attribute: NestedType. This will be the
key new feature in plugin protocol v6
- Several messages were renamed for consistency with the verb-noun
pattern seen in _most_ messages.
- The prepared_config has been removed from PrepareProviderConfig
(renamed ValidateProviderConfig), as it has never been used.
- The provisioner service has been removed entirely. This has no impact
on built-in provisioners. Third-party provisioners are not supported by
the SDK and are not included in this protocol at all.
The previous changes removing support for using the trailing positional
argument as a working directory missed a spot in the apply/destroy
command implementation. We still support this argument for applying a
saved plan:
terraform apply foo.tfplan
However, if you pass a positional path which doesn't "look like" a plan
(for example, the path to a configuration directory), Terraform would
silently ignore it and continue.
This commit fixes that by adding an error message if the user specifies
a path which the plan loader rejects as not "looking like" a plan. This
message includes a reference to the `-chdir` flag as a pointer about
what to do next.
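The gist of the new check, with looksLikePlanFile and validatePlanArg as
hypothetical helpers standing in for the real plan loader and command
code, and with the message wording approximated:

```go
package applysketch

import (
	"errors"
	"fmt"
	"os"
)

// looksLikePlanFile stands in for the real plan loader's check; here we
// only verify the path is a regular file, whereas the real code attempts
// to actually read the saved plan format.
func looksLikePlanFile(path string) error {
	info, err := os.Stat(path)
	if err != nil {
		return err
	}
	if info.IsDir() {
		return errors.New("path is a directory, not a saved plan")
	}
	return nil
}

// validatePlanArg returns a user-facing error instead of silently
// ignoring a positional argument that is not a saved plan.
func validatePlanArg(arg string) error {
	if err := looksLikePlanFile(arg); err != nil {
		return fmt.Errorf(
			"failed to load %q as a plan file: %s\n\nTo run Terraform against a different working directory, use the global -chdir flag instead of a positional path", arg, err)
	}
	return nil
}
```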
We also rearrange the error message when calling `terraform destroy`
with a plan file argument, and add test coverage for the above. While
we're here, update the destroy tests to copy the fixture directory,
chdir, and defer cleanup.
This dramatically simplifies the logic around auto-approve, which is
nice.
Also add test coverage for the manual approve step, for both apply and
destroy, answering both yes and no.
To make the command arguments easier to understand and extend, we are
moving away from positional arguments. This commit changes the graph
command to take a `-plan` flag instead of an optional trailing path.
Several commands continued to support the legacy positional path
argument to specify a working directory. This functionality has been
replaced with the global -chdir flag, which is specified before any
other arguments, including the sub-command name.
This commit removes support for the trailing path parameter from
most commands. The only command which still supports a path argument is
fmt, which also supports "-" to indicate receiving configuration from
standard input.
Any invocation of a command with an invalid trailing path parameter will
result in a short error message, pointing at the -chdir alternative.
There are many test updates in this commit, almost all of which are
migrations from using positional arguments to specify a working
directory. Because of the layer at which these tests run, we are unable
to use the -chdir argument, so the churn in test files is larger than
ideal. Sorry!
This is needed for cases where a variable may be fetched and become
a member of a set, and thus the whole set is marked, which means
ElementIterator will panic on marked values.
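For context, cty refuses to iterate a marked collection, so the usual
pattern is to strip the marks first and re-apply them afterwards; a
minimal sketch (the "sensitive" mark value here is just an example):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A set whose value was derived from a sensitive variable ends up
	// marked as a whole.
	set := cty.SetVal([]cty.Value{
		cty.StringVal("a"),
		cty.StringVal("b"),
	}).Mark("sensitive")

	// Calling set.ElementIterator() directly would panic because the
	// value is marked, so unmark first and re-apply the marks to each
	// element (or to the final result) as needed.
	unmarked, marks := set.Unmark()
	it := unmarked.ElementIterator()
	for it.Next() {
		_, v := it.Element()
		fmt.Println(v.WithMarks(marks).GoString())
	}
}
```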
CountHook is an implementation of terraform.Hook which is used to
calculate how many resources were added, changed, or destroyed during an
apply. This hook was previously injected in the local backend code,
which means that the apply command code has no access to these counts.
This commit moves the CountHook code into the command package, and
removes an unused instance of the hook in the plan code path. The goal
here is moving UI code into the command package.
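Conceptually the hook is just a counter keyed on the apply outcome; a
simplified, self-contained sketch (the real CountHook implements the
full terraform.Hook interface and tracks per-resource-instance state,
and the action and countHook types below are made up):

```go
package hooksketch

import (
	"fmt"
	"sync"
)

// action is a simplified stand-in for the change action reported to the
// real hook's callbacks.
type action int

const (
	actionCreate action = iota
	actionUpdate
	actionDelete
)

// countHook tallies how many resource instances were added, changed, or
// destroyed during an apply, so the command layer can print a summary.
type countHook struct {
	mu      sync.Mutex
	Added   int
	Changed int
	Removed int
}

func (h *countHook) PostApply(a action, applyErr error) {
	if applyErr != nil {
		return // failed operations are not counted in the summary
	}
	h.mu.Lock()
	defer h.mu.Unlock()
	switch a {
	case actionCreate:
		h.Added++
	case actionUpdate:
		h.Changed++
	case actionDelete:
		h.Removed++
	}
}

func (h *countHook) Summary() string {
	h.mu.Lock()
	defer h.mu.Unlock()
	return fmt.Sprintf("Resources: %d added, %d changed, %d destroyed.",
		h.Added, h.Changed, h.Removed)
}
```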
Changes:
```
* backend/s3: Support for AWS Single-Sign On (SSO) cached credentials
```
Updated via:
```
go get github.com/aws/aws-sdk-go@v1.37.0
go mod tidy
```
Please note that Terraform CLI will not initiate or perform the AWS SSO login flow. It is expected that you have already performed the SSO login flow using the AWS CLI's `aws sso login` command, or by some other mechanism, before executing Terraform. More precisely, this credential handling must find a valid, non-expired access token for the AWS SSO user portal URL in `~/.aws/sso/cache`. If a cached token is not found, is expired, or the file is malformed, an error will be returned.
You can configure AWS SSO credentials from the AWS shared configuration file by specifying the required keys in the profile:
```
sso_account_id
sso_region
sso_role_name
sso_start_url
```
For example, the following defines a profile "devsso" and specifies the AWS SSO parameters that define the target account, role, sign-on portal, and the region where the user portal is located. Note: all SSO arguments must be provided, or an error will be returned.
```
[profile devsso]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_role_name = SSOReadOnlyRole
sso_region = us-east-1
sso_account_id = 123456789012
```
Additional Resources
* [Configuring the AWS CLI to use AWS Single Sign-On](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html)
* [AWS Single Sign-On User Guide](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html)